tag:blogger.com,1999:blog-69660854624645959872024-03-14T10:53:27.413+02:00StuffUnknownnoreply@blogger.comBlogger35125tag:blogger.com,1999:blog-6966085462464595987.post-83842231979974888532020-08-21T21:36:00.013+03:002020-08-21T21:58:00.712+03:00DIY laptop: proper backlight<p>The backlight driver is built around a TI LP8545 boost converter with an external FET for higher output voltage. This part is quite old and was used in the original Retina MBPs (A1398) back in 2013. Newer Retinas use the LP8548, a custom variant of the LP8545 that TI makes specifically for Apple, for which there is next to no public information, much less a datasheet. What's more, in newer MBPs this boost converter (located in the MBP's body) is controlled by the display controller (located on the narrow PCB hanging off the LCD panel), which in turn gets its orders from the video card or iGFX. I decided against trying to get this setup working, so I used the older part, which I control over I2C from the motherboard with a homebrew SMBus "driver" that maps the PCH's SMBus registers into userspace (no i2c-tools on Windows). There were a <a href="https://e2e.ti.com/support/power-management/f/196/p/930139/3449477#3449477">couple of hurdles</a> to clear before I got the converter working. First, when I powered it up, I could talk to it via I2C, but every time I tried to turn on the booster it raised an over-current fault and stopped responding to commands. This was peculiar because no load was connected. I checked the PCB for shorts, but everything was fine. I suspected the problem was that the external FET is disabled in the LP8545's factory settings, so I soldered a jumper between the FET's source and drain, and sure enough, with the jumper closed, the booster turned on and everything worked as expected. I could then enable the external FET and remove the jumper. But this was obviously a bench-only procedure.
It is supposed to be possible to update the settings in the LP8545's EEPROM — the datasheet explains how to do it — but each time I tried the steps, the settings were not updated. Thankfully, a TI engineer told me that the LP8545 raises the over-current protection fault not only in case of over-current, but also when the output voltage stays too low for more than 50ms. I was then able to exploit this 50ms grace period and enable the external FET immediately after starting the converter, even with the FET jumper open. The other problem was that once I enabled the external FET, the output voltage inexplicably dropped by several volts. It did step up and down when I changed the output voltage setting, but even at the highest setting it did not go as high as with the FET disabled. The datasheet did not mention such behavior. This was a real head-scratcher, and TI was no help either. Was the part partially busted? Did I make a solder bridge under it? Eventually I concluded that the part was working fine — the boost converter <i>was</i> functioning in a consistent manner — and decided to experiment by changing the feedback resistor divider. Bingo! Apparently, when the external FET is enabled, the LP8545 switches the target output voltage from (10 + VBOOST) V to (11 + 0.5·VBOOST) V, where VBOOST is the setting in register A5h. I made the resistor divider more top-heavy to account for this, and the lights went up. I even made a Windows startup task to program the LP8545 so I don't have to do it manually. There is still the nasty issue that as soon as Windows starts up with the Retina attached, all monitors cycle on and off once per second unless Intel UHD Graphics Control Panel is running (!?). It doesn't have to be visible, and Ubuntu doesn't show this behavior, so it's almost certainly a software problem. Perhaps the reason is that the ACPI BIOS has no idea that it now has an internal display, but I'm not up to tinkering with the BIOS yet.
For now I can just set the UHD panel application to launch at logon.</p><p>Next step is the keyboard PCB.</p>
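<p>As a sanity check on the divider change, the observed voltage targets can be put into a few lines of Python. This is only a sketch: the formulas are the empirically observed ones described above (not datasheet figures), and the concrete VBOOST value is illustrative.</p>

```python
def vout_target(vboost, fet_enabled):
    # Output-voltage targets as observed empirically (not from the datasheet):
    # external FET disabled: 10 + VBOOST; enabled: 11 + 0.5 * VBOOST,
    # where VBOOST is the voltage setting programmed into register A5h.
    return (11 + 0.5 * vboost) if fet_enabled else (10 + vboost)

def divider_scale(vboost):
    # Factor by which the feedback divider ratio must grow ("more top-heavy")
    # for the FET-enabled target to reach the same output as before.
    return vout_target(vboost, False) / vout_target(vboost, True)

print(vout_target(43, False))       # 53 V with the FET disabled
print(vout_target(43, True))        # 32.5 V with the FET enabled
print(round(divider_scale(43), 2))  # 1.63
```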
<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgprF7E8PgTHjzYf03Y79Twsekt4a2SAznzVCmMMxn7cYUFUSzcef9rvC7qf4kybhCV2e_7T-AIO8nbQa5xFY-m5K9F5DBa3auRuG3GMDo5hR9FRu79cPrXgR7KCx4qRZ6yzxqPQfCkev7r/s2800/20200821_193040_cropped.jpg" style="display: block; margin-left: auto; margin-right: auto; padding: 1em 0px;"><img border="0" data-original-height="900" data-original-width="2800" height="129" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgprF7E8PgTHjzYf03Y79Twsekt4a2SAznzVCmMMxn7cYUFUSzcef9rvC7qf4kybhCV2e_7T-AIO8nbQa5xFY-m5K9F5DBa3auRuG3GMDo5hR9FRu79cPrXgR7KCx4qRZ6yzxqPQfCkev7r/w400-h129/20200821_193040_cropped.jpg" width="400" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Running with on-board backlight driver.</td></tr></tbody></table>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-26271384635117626232020-08-06T14:58:00.003+03:002020-08-06T16:52:25.353+03:00DIY laptop: new PCBs with micro-coax cable<p>Having successfully tested the LCD with the interposer PCB, I routed a lid PCB which is designed to fit into Thinkpad's display enclosure (with some very minimal dremeling of the latter) and connects to the body by a 40-pin 1:1 micro-coax cable using Cabline-CA connectors and a separate 4-pin connector for LCD and backlight power. This PCB carries the backlight boost converter, based on an LP8545, the status LEDs and the lid sensor. I decided on this setup because 50-pin connectors and cables are rare and expensive, and I didn't want to add an extra microcontroller or a port expander to this board just to service the GPIO stuff. 
The two cables go through the same opening as Thinkpad's.</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9v1roGgoSPik4StfwofF9VhQuU-H7lxDXXakdhBnhi3VUIog-TtlD29iQrIPHhsvBULreGQidxUV1oxdBWPfWeW2EsX1c3zeawpKeXbHJyH7BN6rjeKQLdG3qVwSxLX-7ImdLd3oeIA47/s439/20200617_214023_small.jpg" style="display: block; margin-left: auto; margin-right: auto; padding: 1em 0px;"><img border="0" data-original-height="290" data-original-width="439" height="211" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEi9v1roGgoSPik4StfwofF9VhQuU-H7lxDXXakdhBnhi3VUIog-TtlD29iQrIPHhsvBULreGQidxUV1oxdBWPfWeW2EsX1c3zeawpKeXbHJyH7BN6rjeKQLdG3qVwSxLX-7ImdLd3oeIA47/w320-h211/20200617_214023_small.jpg" width="320" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Donor controller PCB from a busted A1707 and my lid PCB.</td></tr></tbody></table><p>For development purposes, I didn't route and order the keyboard PCB straight away, but instead went for a small PCB carrying just the card-edge ACES connector to the motherboard, my two cable connectors, and a breakout pin header. 
After many hiccups and delays, I was able to assemble enough to test the DisplayPort connections (this time soldering and desoldering everything, including the I-PEX parts, myself, with the aid of a used МБС-1 microscope, which is an early Soviet clone of Carl Zeiss SMXX), and voila:</p><table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWtSY7OKJzVVCjx-3ZvELhSRk7RPJ9sq44zSg8-mA0480SIQ1838iYAX1ZZ5g4WGnF84Ro_jiUzIXqc2jJeNLwH7hUXo1X8_kEnT7FUkxWrfKuR6XGJiQFwzStSSvdy1Y9L7TCPUKhlsjp/s720/20200806_104853_small.jpg" style="display: block; margin-left: auto; margin-right: auto; padding: 1em 0px;"><img border="0" data-original-height="238" data-original-width="720" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiWtSY7OKJzVVCjx-3ZvELhSRk7RPJ9sq44zSg8-mA0480SIQ1838iYAX1ZZ5g4WGnF84Ro_jiUzIXqc2jJeNLwH7hUXo1X8_kEnT7FUkxWrfKuR6XGJiQFwzStSSvdy1Y9L7TCPUKhlsjp/s640/20200806_104853_small.jpg" width="640" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Up and running (with external backlight power for now).</td></tr></tbody></table>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-72584999745038645232020-01-15T12:43:00.000+02:002020-01-15T12:43:48.841+02:00DIY Laptop: LCD backlight<div dir="ltr" style="text-align: left;" trbidi="on"><p>
The only remaining LCD-related problem in this project is the backlight. Having previously bought a complete A1707 display assembly with a cracked LCD (only $90, basement bargain price!) I tested it with the interposer PCB. In newer MBPs the "apple" on the back of the display assembly is not translucent, so I had to look for the image in reflected light (not very convenient). To test the assembly's backlight, I needed 50-60V for LCDBKLT. Cheap bench power supplies don't go that high, so I made me a boost converter with an LM2577. Initially, I hoped that just connecting LCDBKLT would do the trick, as I had connected PWM and BKLT_EN inputs, but the screen obstinately stayed dark. I surmised that the LCD controller doesn't want to turn on the backlight because it can't talk to the LP8548 boost converter, which A170x and later MBPs use to generate the LCDBKLT voltage, and which resides on the main board. However, a comparison of A1398 and A1707 schematics and display connector pinouts led me to suspect that I could work around this behavior by connecting to the LED strings directly.</p>
<div style="float: left; margin-right: 2em;"><pre>               [J0803]
G   ??  K1  K2  K3  K4  K5  K6  ??  G
N                                   N
D   ??  ??  AA  AA  AA  AA  ??  ??  D
               (LCDBKLT)</pre><center><font size="-3">Backlight connector pinout</font></center></div>
<p>I probed the pins of the LED connector (a custom 16-pin variant of JAE WP25D) on a busted LCD controller board and sure enough, four pins are directly connected to LCDBKLT, and six others behave as if they are connected to ground through MOSFETs (the body diodes conduct when applying reverse voltage, i.e. when GND is more positive than Kn). I soldered a wire to K6, connected it to ground through a large 30k potentiometer and an ammeter, powered up, and was greeted with this:</p><center style="margin-bottom: 1em;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiArV34X_8AJJJsJfGi3N3eg7RuPRWqnsKzOtDmCM5RHzcxneCFe6w0uIj4_D5Bbgvft7ROK887xCKmkN4xubdBOzFN_XLjlokWR4AlBJ9MPXdRgC1PSxotRkuCu7vtCYlgdGom0eN4kmW9/s1600/20200114_221049_sm.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiArV34X_8AJJJsJfGi3N3eg7RuPRWqnsKzOtDmCM5RHzcxneCFe6w0uIj4_D5Bbgvft7ROK887xCKmkN4xubdBOzFN_XLjlokWR4AlBJ9MPXdRgC1PSxotRkuCu7vtCYlgdGom0eN4kmW9/s320/20200114_221049_sm.jpg" width="243" height="320" data-original-width="414" data-original-height="545" /></a>
<span style="margin: 1em"></span>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_plXyS7-nOWq8gTe8Kba7a6iyWNXRXHJbZpEd8XtXyEq7UlWyoBx6oykj4sQpiAF4fnPsy9xekSGk5ddtpaaNQmJMwzXpT0V37HLjI67xVsbzoYcq69Ji7EDk2fobbttqVI2HR8qZ2sJX/s1600/20200114_222140_sm.jpg" imageanchor="1" ><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEg_plXyS7-nOWq8gTe8Kba7a6iyWNXRXHJbZpEd8XtXyEq7UlWyoBx6oykj4sQpiAF4fnPsy9xekSGk5ddtpaaNQmJMwzXpT0V37HLjI67xVsbzoYcq69Ji7EDk2fobbttqVI2HR8qZ2sJX/s320/20200114_222140_sm.jpg" width="243" height="320" data-original-width="414" data-original-height="545" /></a>
<center><font size="-3">The breadboard with the big fat capacitors is the boost converter. The interposer PCB is sticking out of the motherboard.</font></center></center>
<p>Success! As I varied the potentiometer resistance and the LCDBKLT voltage, the current and voltage on the K6 pin and the brightness of the screen behaved exactly as one would expect from an LED I-V curve, so the Kn pins are almost certainly the individual strings' cathodes. In the left-hand photo I have about 1mA flowing through 2 out of 6 strings (it turned out I'd accidentally made a solder bridge between K5 and K6); the right-hand photo has 100mA. It is clear how much brighter the screen is with the larger current. Incidentally, this is about twice the current indicated on the schematic, but I observed no ill effects other than a nasty buzzing noise from my boost converter: LM2577's switching frequency is only 52kHz.</p>
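<p>The bench setup is just a series resistor feeding a LED string, so Ohm's law predicts the numbers. A sketch, with made-up illustrative voltages (the post does not give the string's actual forward voltage):</p>

```python
def string_current(v_bklt, v_string, r_series):
    # Current through one string cathode returned to ground via a series
    # resistor; ignores the LEDs' dynamic resistance, so it's only a rough cut.
    return max(0.0, (v_bklt - v_string) / r_series)

# Hypothetical numbers: 55 V backlight rail, ~49 V total string forward drop.
print(string_current(55.0, 49.0, 30e3))  # 0.0002 A: ~0.2 mA near the pot's maximum
print(string_current(55.0, 49.0, 60.0))  # 0.1 A: ~100 mA with most of the pot dialed out
```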
<p>With these results in hand, I can design the backlight power supply and the display and keyboard PCBs. I plan to use the same LP8545 LED driver as the A1398, because in contrast to the LP8548 it has a publicly available datasheet and is carried by the usual distributors. I'll run the DP data lanes through a micro-coax cable, and route the rest of the signals and power through an FPC, so that my display PCB will have only a handful of components: basically connectors, status LEDs, and whatever I need for the lid sensor. All the heavy lifting will be done by an STM32F0 MCU on the "keyboard" PCB, which besides the keyboard and trackpad connectors will also carry the backlight power supply, the card-edge eDP connector, and maybe a SATA connector for the 2.5" HDD: I'd rather plug it into something solid than have a mess of cables in the case. I'll break out pads for the SMBus connection to the power PCB, but I can proceed with the display and keyboard PCBs, the case, heatsink etc., without touching the power.</p></div>
Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-1863530982400304792019-11-24T18:03:00.001+02:002019-11-25T15:05:56.189+02:00DIY Laptop: LCD, part 2<div dir="ltr" style="text-align: left;" trbidi="on"><p>
Before starting on the messier parts of the project (heatsink, case etc.) I wanted to be sure that the LCD could actually be made to work: could I get the A1707/A190x LCD to display an image? For that, I would need to connect the DisplayPort data lanes to the motherboard's eDP output in addition to power, AUX and control signals. In early summer, I made several attempts to solder the wires of Quadrangle's unterminated ACES connector to the flexible connector I used to check AUX (see <a href="https://atykhyy.blogspot.com/2019/04/to-pick-up-on-diy-laptop-project-post.html">previous post</a>). These were not successful: the wire gauge was a little too large to fit the tiny solder pads, and even when I somehow squeezed them in, the Linux kernel's i915 driver showed me that the DisplayPort main link was not coming up. It did show that the first stages (clock recovery and equalization) were succeeding, which was peculiar because, as I'd belatedly realized, there was no way an ~8Gbps DisplayPort signal could be transmitted through 0.5m of unshielded spaghetti wires of unknown impedance. I obtained a dmesg boot log from a live A190x, and learned that the Retina's TCon (LCD controller) is set up to skip the DisplayPort clock recovery and equalization stages, probably because the native connector is so short. These observations forced me to rethink my approach: I decided to ditch the wires and make an interposer PCB. Instead of a separate ACES connector part, I could make a DIY card-edge connector by printing the contacts on one side and filing away the other. (Cue Russian "file to fit" joke.) The I-PEX receptacle I could lift from a dead LCD.
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSR-MtRs4r8m208dFiwICU1GzQ00WugoJNJB_c7DVS1M1iA-G2pXn4e0GhKy3t2LiEYzy2CVPCm5bN86Swqm8rYNaNYg5fZcKg-OfI761GJKsVH27yQYvNQtTc6S9cYPTswe42Rrqva598/s1600/aces2retina.jpg" imageanchor="1" style="float: right; margin-left: 1em; margin-top: 1em; margin-right: 1em;"><img alt="aces2retina PCB" border="0" data-original-height="442" data-original-width="1056" height="165" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhSR-MtRs4r8m208dFiwICU1GzQ00WugoJNJB_c7DVS1M1iA-G2pXn4e0GhKy3t2LiEYzy2CVPCm5bN86Swqm8rYNaNYg5fZcKg-OfI761GJKsVH27yQYvNQtTc6S9cYPTswe42Rrqva598/s400/aces2retina.jpg" title="aces2retina PCB" width="400" /></a>
This took a while to design because I was worried about screwing up the impedance again and went as far as simulating it in an EM simulator, and a few weeks more until the PCB came back from OSHPark. In the interval I bought myself a starter SMD rework station and gave it a whirl, but it was obvious that I couldn't hope to resolder the delicate 0.35mm pitch I-PEX connector with my basic equipment and even more basic skills. A friend's repairman friend, M. Grekhov of <a href="https://notebookoff.net/" rel="nofollow">НоутбукOFF</a>, graciously agreed to help me out with the I-PEX, though, and the rest of the parts (a couple of LEDs to indicate LCD power and HPD) I could manage myself. The complete item worked perfectly the first time I tried it, which was most gratifying. I could see the cloned Windows desktop on the Retina if I shone a flashlight behind it. Besides the DisplayPort data and AUX, this PCB breaks out the backlight and I2C connections, so I'll be able to use it to design a backlight power supply and control circuit.</p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-11862570083571077662019-04-13T22:00:00.001+03:002019-04-13T22:11:48.027+03:00DIY laptop: LCD<div dir="ltr" style="text-align: left;" trbidi="on">
<p>To pick up on the <a href="http://atykhyy.blogspot.com/2017/12/diy-laptop.html">DIY laptop project post</a> from a year ago. Since then, I have gradually developed an aversion to the idea of buying a P5x.
Besides, it would be fun to build a piece of hardware for a change from software. My standard was still 15" Thinkpad T61/T500.
Coffee Lake and 300-series chipsets having come out, I searched for Thin Mini-ITX motherboards with a 300-series chipset and an embedded DisplayPort output.
There aren't many, and there are no consumer-oriented ones so far, but some AsRock and MiTAC industrial motherboards fit the bill.
I bought a MiTAC PH12FEI through their German distributor, an unterminated ACES 88441-040 cable from Quadrangle,
and the standard Thin Mini-ITX cooler, HTS1155LP, to use until I get around to making a thinner one
(HTS1155LP is 26mm thick and the keyboard would have to go on top of it, making the body too thick for my taste).
I decided on reusing stock parts as much as possible, so I bought a dead T500 (minus LCD screen) for $20.
It would also provide the keyboard and trackpad connectors, which aren't otherwise readily available.</p>
<p>As it happens, A1398's Retina LCD is a centimeter too big to fit into T500's LCD housing.
I briefly considered a completely custom lid, which would also save about 5mm in height,
but hinges and hinge mounting would be a problem. Also that LCD model is now quite old.
On the other hand, A1707 is narrower and there were indications that its LCD would possibly fit.
Therefore, I obtained a cracked but functional A1707 LCD, p/n LSN154YL03, from a <a href="https://2mac.ua/">repair shop</a> for $15,
and it fits perfectly - there is not even 0.5mm of space anywhere.
The crucial question for the whole project was whether I could get it to work with the motherboard.
Panelook shows that this LCD, like the older one, has a 40-pin eDP interface.
<a href="http://mikesmods.com/mm-wp/?p=431">Mike of Mike's Mods</a> had already shown that the older LCD
does not need any special initialization, and I could reasonably expect that this would be the case with the newer one.</p>
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJkUXvCoSlmg5-4qMJr6n_KzvTDzCr0Ut_KrQZIOta9fFQn-es0DyuiDXZNsmQ-7RTxml0AsmUZPJmnKEVv_mZpEBwIQRDMQaAFh04lGavP1ePJSGf1GD317nJffxAzP-HCcxYJmn4HPVD/s1600/20190413_181124_sm.jpg" imageanchor="1" style="clear: right; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" data-original-height="516" data-original-width="829" height="199" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJkUXvCoSlmg5-4qMJr6n_KzvTDzCr0Ut_KrQZIOta9fFQn-es0DyuiDXZNsmQ-7RTxml0AsmUZPJmnKEVv_mZpEBwIQRDMQaAFh04lGavP1ePJSGf1GD317nJffxAzP-HCcxYJmn4HPVD/s320/20190413_181124_sm.jpg" width="320" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">This was absolutely no fun to solder.</td></tr>
</tbody></table>
<p>I could figure out most of the pinout from labeled test points on the controller board and from the pattern of ground lines. I was saved the trouble of figuring out which pair of differential wires was AUX, and the polarity, because schematics are now easily found on the <strike>global garbage dump</strike> internet.
The LCD connector is, however, different from the older model, probably due to space restrictions in the smaller and thinner A1707.
It is quite distinctive, and a search by pin pitch and count quickly turned up the part - it is I-PEX NOVASTACK 35-HDP 42p+4p, p/n 20698-042E-01,
or an equivalent. It goes without saying that this part is not carried by the usual distributors.
I resorted to scavenging them from replacement LCD "cables" (actually a small flexible PCB with two of these I-PEX plugs on opposite sides) sold on eBay.
I figured that at the minimum I had to connect power, HPD, AUX and EDP_PWR_EN lines to get the LCD up and running.
The controller apparently needs +5V <i>and</i> +3.3V, but I could take both from the motherboard's LCD power voltage jumper.
As for EDP_PWR_EN, I connected it to one of the CABLE_ID pins of the motherboard's eDP connector which always has +3.3V on it.</p>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2ukY1qX7RlWx1f-Yx2bTuAnVvKcF6_V7vCgbnpWotpPOrf8hmLAoR2P8feVxTkHm73lmto7qxi84JVGXE2sVlGcl24HFNhNA2x6kSru8k1o0ex5hyphenhyphenqPDVirRMnT8rNNE36of1FVkDsexC/s1600/20190413_211238_sm.jpg" imageanchor="1" style="clear: right; float: right; margin-bottom: 1em; margin-left: 1em;"><img border="0" data-original-height="727" data-original-width="908" height="256" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh2ukY1qX7RlWx1f-Yx2bTuAnVvKcF6_V7vCgbnpWotpPOrf8hmLAoR2P8feVxTkHm73lmto7qxi84JVGXE2sVlGcl24HFNhNA2x6kSru8k1o0ex5hyphenhyphenqPDVirRMnT8rNNE36of1FVkDsexC/s320/20190413_211238_sm.jpg" width="320" /></a></div>
<p>First I carefully removed one of the plugs with a hot plate and tried soldering wires directly to the plug's pins,
but the plugs' plastic is very weak mechanically and instantly melts if accidentally touched by the soldering iron.
I had better luck with soldering wires to the flexible PCB with the remaining plug.
I had to use superglue to hold down the three AUX wires after they were soldered, because (another lesson learned) a careless swipe can easily tear a contact pad off the flexible PCB. It took a few hours, but the connector worked and Windows detected the LCD as 2880x1800, device name APPA031,
meaning I can proceed with the project. The next big hurdle is the heatsink mod.</p></div>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-65281209643415340052017-12-09T14:44:00.002+02:002018-05-07T10:17:02.055+03:00DIY laptop<div dir="ltr" style="text-align: left;" trbidi="on">
<p>
Some time ago, laptops displaced desktops as the primary machines for many, perhaps most, users who don't go in for high-end gaming, and I am no exception. Now my trusty Thinkpad T500 has been getting a bit long in the tooth, and recent models coming on the market have been less than inspiring — I had hopes for the Thinkpad Retro, but it turned out to be just a very expensive T470 with a better keyboard — so I was toying with the idea of a <a href="http://www.tomsguide.com/forum/65602-35-notorious-laptop">DIY laptop</a>. This post is a dump of my research (such as there was) and considerations pertaining thereto.
</p>
<p>
My requirements for a DIY laptop are:
<ol>
<li>good 15" display</li>
<li>good keyboard</li>
<li>powerful: good upgradeable CPU, lots of memory</li>
<li>presentable: approaching laptop-class size and thickness</li>
</ol>
</p>
<p>
1. For the first point, there is nothing better than the LP154WT1-S* or LSN154YL0* "retina" LCDs that are installed in MacBook Pro 15.4" A1398 models. Like my T500, these are 16:10, a rarity in an age when every other maker has given up and is putting 16:9 (or even wider) movie panels into their laptops. I (and a lot of other people) need the laptop for work, not for goddamn movies! Anyway. These panels have a standard 30-pin eDP interface, but use a rare connector part <strike>need a custom board for the lighting controller/power supply</strike>. Fortunately for the DIY laptop project, a couple of hardware geeks have <a href="http://web.archive.org/web/20160419054529/http://mikesmods.com/mm-wp/?p=261">figured it out</a> and shared schematics, part numbers and <a href="http://web.archive.org/web/20160413000517/http://dp2mbpr.rozsnyo.com:80/">so on</a>. Replacement panels can be bought on panelook and eBay; a bare panel costs $160-$180 and would need diffusers and backlight LEDs, while a complete display assembly is now north of $300 and isn't usable as such anyway due to branding. However, diffusers and backlights from MBPs with totaled LCDs aren't difficult to find. <a href="https://www.ifixit.com/Teardown/MacBook+Pro+Retina+Display+Teardown/9493">MBP's complete display assembly</a> is 7mm thick in the thickest place — the bulge with the logo — so 5mm is probably a reasonable target thickness for a DIY display assembly.
</p>
<p>
2. Replacement classic 7-row Thinkpad keyboards with trackpoint are excellent quality, cheap and easily obtainable, and, these keyboards being so popular, other hardware geeks have <a href="https://www.kosagi.com/forums/viewtopic.php?id=93">figured out</a> how to connect them to computers. The trackpoint has a PS/2 interface, but the keyboard itself needs a microcontroller with firmware to scan the key matrix and generate appropriate messages. This calls for a custom board, but any DIY laptop would need one, so it's not a big deal. The biggest problem with the keyboard is that it has to be screwed to the case, and the geometry can get tricky. More on this later.
</p>
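<p>The firmware job described above boils down to a row-by-row matrix scan; the core loop can be sketched in a few lines (hypothetical matrix dimensions and pin helper — the real Thinkpad matrix layout, debouncing and ghost-key handling are more involved):</p>

```python
def scan_matrix(read_columns, n_rows=8):
    # Drive each row active in turn and read back which columns respond;
    # read_columns(row) models the GPIO reads and returns the active columns.
    return {(row, col) for row in range(n_rows) for col in read_columns(row)}

# Mock "hardware": a single key held down at row 2, column 5.
pressed = scan_matrix(lambda row: [5] if row == 2 else [])
print(pressed)  # {(2, 5)}
```

The MCU firmware would run this loop continuously, diff successive scans, and translate changes into key-press messages.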
<p>
3. This basically means socketed server-grade CPUs and slotted memory, and rules out soldered Atom CPUs, Celerons etc. Discrete graphics cards are probably out of the question, but modern integrated graphics should be good enough if you aren't editing video or playing high-end games.
</p>
<div style="float:right;margin:10px;">
<img src="https://i.imgur.com/zYRKQLe.jpg" style="max-width:306px;max-height:353px;" alt="mjolnir.jpg"/><br/>
<small>Cool, but not really what I'd like to pack around.</small>
</div>
<p>
4. The requirement of laptop-class thickness (say less than 40mm) drastically limits one's choices. A motherboard with DIMM memory is already a hair above that limit by itself; add 5-6mm for the display, 5-6mm for the keyboard and a couple millimeters for the case, and you're looking at something over two inches thick. Barring <a href="http://www.bcmcom.com/custom_motherboard_design.htm">completely impractical options</a>, this means Thin Mini-ITX motherboards with SoDIMM memory. The motherboard with all components (including memory) is constrained by standard to be at most 20mm thick, and a few millimeters must be allowed for bottom side stand-offs. Unfortunately, the Thin Mini-ITX standard seems to be <a href="http://potato.2ch.net/test/read.cgi/jisaku/1466838132/l50">dead in the water</a>. There are few products on the market that support eDP and Skylake/Kaby Lake (Socket 1151) CPUs simultaneously, and nothing at all for AMD; about the only consumer-grade choice is Asus Q170T and its variants (the V2 upgrade, which is apparently not yet sold anywhere, has a bonus "disable ME" jumper — good to have even though <a href="https://github.com/corna/me_cleaner">firmware geeks</a> have learned how to turn it off on the firmware level like the big boys in the spy agencies do). For cooling a CPU with interesting (read: more than 15W) TDPs, passive heat sinks are out. Given the stringent height limit, the most reasonable solution I have discovered is Intel HTS1155LP, targeted at half-unit rack servers. Its thickest part — its off-board heatsink, connected to the CPU plate with three heat pipes — measures 26mm. If this turns out to be too thick, heatsinks can be modded with some care.
</p>
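<p>The back-of-the-envelope arithmetic behind the "over two inches" figure, using the approximate millimeter values quoted above:</p>

```python
MM_PER_INCH = 25.4

# Naive stack-up: ~40 mm motherboard with DIMMs + display + keyboard + case skins.
naive_mm = 40 + 6 + 6 + 2
print(naive_mm)                          # 54
print(round(naive_mm / MM_PER_INCH, 2))  # 2.13 inches
```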
<p>
The basic elements of the design are thus: 16:10 15" display, Q170T motherboard with Socket 1151 (up to i7-8xxx), up to 32GB DDR4-2133 SoDIMM DRAM and an M.2 NVMe SSD, HTS1155LP heat sink, one or two custom boards for display connectors, keyboard controller, battery controller/regulator and incidentals. The tricky part is to find a suitable geometry of all the key components. After some consideration and scribbling some layouts, I have settled on one where the motherboard's I/O shield faces backwards, and the heat sink sits under the left palm rest. (Other options either sacrifice 6-7mm of thickness as the keyboard goes on top of the heat sink, or sacrifice 20% of the heat sink, which I'm not willing to do.) A millimeter of cork or similar thermal insulation on top and bottom of the heat sink should suffice to keep surfaces at reasonable temperatures as long as the fan is working. The hot-air outlet is on the left side. Since the design uses the Thinkpad keyboard, it might as well use the Thinkpad palm rest (with its PS/2 trackpad), the top bezel, and the matching display bezel from a dead "donor" laptop. The keyboard is located mostly above the motherboard, where it can be supported by HTS1155LP's motherboard mount via some struts. Other keyboard mounting holes can be arranged to fall outside the motherboard footprint. The bottom of the body (as well as the top of the display assembly) can be cut out of 1mm sheet aluminium, folded, welded together and spray-painted with plasti-dip or similar finish at an auto body repair shop. Body thickness is then 30-31mm (1mm case + 1mm insulation + 26mm heat sink + 1mm insulation + 1-2mm palm rest). With a 5-6mm display assembly, this is, of course, nowhere near as thin as today's ultrabook butter-knives, but neither is it monstrous; T-series Thinkpads measure around 34mm depending on model. As for peripherals, one can add them to taste — front ports, SD and smart card readers, NFC, wi-fi, broadband modem, etc. 
As for the battery, laptop packs are built from standard 18650 3.7V 3400mAh Li-ion cells anyway, so one can add as many as one's weight budget allows. Six will give 75Wh for 300 grams.
</p>
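<p>The battery figure checks out with nominal cell ratings (the ~50 g per 18650 cell used for the weight estimate is a typical, assumed value):</p>

```python
def pack_energy_wh(cells, v_nominal=3.7, capacity_ah=3.4):
    # Nominal pack energy for standard 18650 Li-ion cells.
    return cells * v_nominal * capacity_ah

print(round(pack_energy_wh(6), 1))  # 75.5 Wh for six cells, roughly 300 g
```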
<!--p>
Having got to this point, I compared the price of the basic parts and was disappointed to see that such a DIY laptop would be rather more expensive than, say, a Lenovo workstation laptop with more-or-less equivalent configuration, even without accounting for the time and effort needed to build the former and for the corporate or student discounts available for the latter. I do like me a good keyboard and a work-oriented display aspect ratio, but I am a software guy, hardware is not my metier. So I will, too probably, not without a sigh, settle for a P5x, and post these notes just to have something to show for this project.
</p-->
</div>
<!--
Lenovo P50: keyboard off-center because numpad, 16:9 display, 4 memory slots, CM236 chipset
$1500 for reasonable configuration (10 years' inflation I suppose) may be cheaper with "corporate discount", look on thinkpad reddit
Xeon only makes sense with ECC memory, i7-6770HQ is only 10% slower than 6820, latter not cost efficient
580 graphics is only available in BGA package, all socketable (1151) CPUs have HD 530
both MBP panels and P50 (no info on P51) panels use 2-lane 30-pin eDP connectors, should be compatible electrically
however the physical dimensions are different and a custom(ized) bezel would be required if it can be made to fit at all
it's 12mm taller (332x208, but not clear which dimension this is) than P50; to go by photos, P50 has enough space
but it's also narrower by about the same amount; P50 outline is 359.5 wide and viewing area 344 wide, if MBP panel has that extra 15mm it might fit
http://www.panelook.com/LP156WF6-SPK1_LG%20Display_15.6_LCM_overview_27076.html
http://www.tomsguide.com/forum/65602-35-notorious-laptop
https://www.cpubenchmark.net/cpu_list.php
https://forums.lenovo.com/t5/ThinkPad-P-and-W-Series-Mobile/Wish-list-for-P51/td-p/2276361
https://www.reddit.com/r/thinkpad/comments/4xxvvz/thinkpad_retro_what_are_you_guys_looking_forward/
---------------------------------------------------------------------------------------------------
MacBook Pro 15,4" Retina A1398 16:10 display
$160 only LCD, need backlight, diffusers etc., may be able to pick up at laptop repair shops e.g. from totaled lcd's
$260 replacement display assembly (MBP lid with hinges), includes backlights, camera and antennas
needs backlight driver ic and pin conversion from Apple proprietary 30pin(?) connector to eDP in any case
the panel connector is on the left side
[Hacking the Macbook Pro Retina LCD, Part 1.2: Controller Addendum](http://web.archive.org/web/20160419054529/http://mikesmods.com/mm-wp/?p=261)
https://www.ifixit.com/Teardown/MacBook+Pro+Retina+Display+Teardown/9493
2016 model has the same type of panel, but it's not listed on panelook yet
NB: no in-production laptop panel larger than 13.5" is anything but 16:9
the only 13.5" one is $600 (!) Microsoft Surface 3000x2000 panel (eDP), all other non-movie panels are even smaller
Thin Mini-ITX motherboard with suitable output $150-180 ($80 ones don't have eDP, may be ok for prototype)
NB: Gigabyte TN motherboards don't have eDP at all, only LVDS
board+top components (including i/o ports) max 20mm (see below), pin headers are 9-10mm tall above PCB
if aluminium case, put mb upside-down and mate heat sinks to case if this is permissible
almost certainly heat will not spread enough through the thin case, have to use heat pipes; no sense inverting the mb in this case
how thick does it have to be for strength? inner struts? need to consult specialists...
can be cast complete with side holes and hinge attachment blocks - these have to be pretty strong
all mb ports will be on one side, including audio jacks
only 2 SO-DIMM slots, probably they can't fit more
best to have NVMe SSD and M.2 Wifi/BT options available
if vertical SATA connectors (9mm above PCB), a couple might be usable with 90 degree plugs
may need to re-solder power input to add battery
minimal memory $17, minimal cpu $35
any 1151 cpu needs some sort of heat sink too, e.g. http://hotline.ua/computer-kulery-i-radiatory/titan-dc-155a915zr/ w/o cooler,
but even this one will probably be too tall to fit into 20mm... ok for prototype though
http://potato.2ch.net/test/read.cgi/jisaku/1466838132/l50 2ch.net Thin Mini-ITX Skylake motherboard list
http://www.bcmcom.com/bcm_product_MX110HD.htm they also make custom motherboards
ASUS Q170T
has an ATX 19V connector in addition to round-pin, may use that if connections and height permits, or re-solder
has headers for 2 USB3 ports, can use that for card reader
eDP on the bottom
retarded position of NVMe slot: the card goes above (!) the chipset heatsink, which is nothing but a copper plate
cooler: http://www.buildablade.com/faqs.htm#What_heatsinks_and_fans_work_with_Build-a-Blade
Dynatron K199 - nothing sticks out, a bit too thick at 28.5mm above the mb
Intel HTS1155LP - total heatsink height 26mm, off board. Geometry is tricky:
ports on left =>
heat pipes sticking out on the side opposite i/o shield =>
mb upside-down => blows out back, keyboard sits on top
mb not inverted => blows out front (not good)
heat pipes sticking out on the side nearer audio jacks => 17cm mb + 0.5cm clearance + 8cm cooler front-to-back, needs empty duct space
mb not inverted => blows out front left, heat sink in front
mb upside-down => blows out back left
ports on top works only with latter pipe orientation; blows out back, no sense inverting mb, needs empty duct space back left, blower front left
http://www.minhembio.com/bilder/bild/?pic_id=438387.jpg
could put extra pcb in duct space?
could put fan in back and angle the duct to front left... hm
replacing heat pipes with suitably bent ones is a more hard-core option but gives a lot of flexibility (heh)
e.g. maybe possible to route pipes towards i/o shield above the pch and then left
USB battery controller? run-of-the-mill LGEBE11865 3.7v 3400mAh batteries (12.5Wh) are 49g each, 18x65mm, ~$9
original ThinkPad extended battery is 10.8V 71Wh
charge-USB port? probably can't make it USB3, not that that makes much sense
need an ammeter to measure how much current the mb draws in various states
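The pack arithmetic implied by these cell figures can be sketched quickly (Python, illustrative only): three cells in series roughly match the original pack's voltage (its 10.8V rating presumably uses the 3.6V-nominal convention), and two such strings in parallel come close to its 71Wh:

```python
# Per-cell figures from the note above: 3.7V nominal, 3400mAh, 49g.
CELL_V, CELL_AH, CELL_G = 3.7, 3.4, 49

def pack(series, parallel):
    """Nominal voltage (V), energy (Wh) and weight (g) of a series x parallel pack."""
    cells = series * parallel
    return series * CELL_V, cells * CELL_V * CELL_AH, cells * CELL_G

volts, wh, grams = pack(3, 2)  # 3s2p = six cells
print(round(volts, 1), round(wh, 1), grams)  # → 11.1 75.5 294
```

Six cells at ~12.5Wh each line up with the "75Wh for 300 grams" figure in the post.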
use original ThinkPad keyboard, there's nothing better and plenty of those around, quite cheap too; it also has the trackpoint
[Thinkpad keyboard and trackpoint interface](https://www.kosagi.com/forums/viewtopic.php?id=93)
uses a proprietary connector and needs an MC to read the matrix and expose the USB devices, but so what,
must have a separate PCB for LCD and batteries anyway; geometry of the flexible connector may be tricky
also geometry of support is very tricky, need to "push up" to the top at the middle at least; if this hole turns up above mb I'm pretty much screwed
put mb at right edge ports to back, and route custom pipes via audio jack side to left edge - vent out of back left?
can use MBP trackpad, [it's USB](http://blog.hawkwood.com/archives/743) and replacement parts can be bought for ~$20
for a complete bells-and-whistles build: NVMe SSD, WiFi/BT mini pci-e card, newish USB 2G/3G/4G(?) modem, SD+UFS card reader,
indicator lights, speakers, extension slots(?) 2.5" hdd slot(?)
lid: fold edges inside 180° to avoid a too-sharp edge
almost certainly the body will be several cm wider than LCD, would need a separate bezel
use a thin layer of regular silicone sealant instead of rubber gasket
case: bottom plate + top with folded-in edges and hole for keyboard (edges bent down to fit into keyboard tray)/trackpad -- not inordinately difficult to make
need hinge blocks
weight: T500 > 2.5kg
height: difficult to make body thinner than 30mm, which makes it impossible to reuse a 15" Thinkpad body (might use top bezel and palmrest though)
but with MBP panel maybe possible to claw back some height on the lid, with the total height not much more than T5xx
MBP display assembly is max 7mm thick (including the dome with Apple logo), T5xx is 10mm (more for bezel edges, but that doesn't count)
MBP panel size is same as T5xx, can use display bezel which mates with the body bezels - good idea!
-----
optical pen/mouse idea: everybody makes them like terminally obese pens,
but what if I made a ball joint into a transparent slider thing containing the optics instead?
must be a damn good ball joint, smooth as a dip's fingers
use IR to avoid visual, but make a visible dot on the bottom surface where the pen's end would be
may be able to insert a lead into it somehow, or maybe brown/char it with focused IR
the only drawback -- not usable if the surface isn't flat
---------------------------------------------------------------------------------------------------
Thin Mini-ITX standard
http://www.formfactors.org/developer%5Cspecs%5CMini_ITX_Spec_V2_0.pdf
15mm ---               --
         .--+--------------.
16mm     |  |  back panel  |
excl.    |  |              |
PCB   ---|  |     20mm     |
         |  |  incl. PCB   |
         |  |              |
         |  |              |
         `--+--------------'
---------------------------------------------------------------------------------------------------
Micro-ATX standard for bottom height constraints referenced from Mini-ITX standard
http://www.formfactors.org/developer%5Cspecs%5Cmatxspe1.2.pdf
Required secondary (bottom) side motherboard height constraints for all areas (A-C, as shown in Figure 7) are
defined as follows (measured from the bottom planar surface of the motherboard PCB):
• <=0.010” – Mounting hole standoff areas – no components. Restriction applies within 0.400” square area
centered on each required mounting hole location defined in Section 2.2. Nominal allowance is provided
only to accommodate slight reflow solder excess.
• <=0.098” – All board circuit components (including leads) that are electrically conductive and intolerant of
direct connection to chassis ground (e.g., through-hole leads, surface mount resistors)
• <=0.120” – Board components that are non-conductive or otherwise tolerant of direct connection to chassis
ground (e.g., connector guide/stake pins)
• <=0.200” – Devices attached to the motherboard for the sole purpose of structural retention or stiffening
A chassis and its related elements (e.g., stiffening ribs, base pan, structural supports, fasteners, etc.) must allow
>=0.250” clearance to the bottom planar surface of the motherboard PCB. This does not include mounting
hole standoffs, which may extend to and contact the PCB at the mounting holes within the prescribed 0.400”-square areas.
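Since everything else in these notes is metric, here is a quick conversion of the spec's limits (Python, throwaway; the labels paraphrase the bullets above):

```python
# Micro-ATX bottom-side height limits quoted above, inches -> mm.
limits_in = {
    "standoff areas (no components)": 0.010,
    "conductive components":          0.098,
    "non-conductive components":      0.120,
    "retention/stiffening devices":   0.200,
    "required chassis clearance":     0.250,
}
for name, inches in limits_in.items():
    print(f"{name}: {inches:.3f} in = {inches * 25.4:.2f} mm")
```

So a shade over 6mm must be kept clear under the board, which eats into the thickness budget.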
---------------------------------------------------------------------------------------------------
panelook resolutions, select large between 4:3 and 16:9
// s holds the saved page source of a panelook resolution list;
// each relevant line contains ">WIDTH×HEIGHT<" (note the '×' multiplication sign, not 'x')
foreach (var l in s.Split (new char[] { '\n', '\r' }, StringSplitOptions.RemoveEmptyEntries))
{
    var i = l.IndexOf ('>') ;
    var j = l.IndexOf ('×', i) ;
    var k = l.IndexOf ('<', j) ;
    try
    {
        var x = int.Parse (l.Substring (i + 1, j - i - 1)) ;
        var y = int.Parse (l.Substring (j + 1, k - j - 1)) ;
        // keep panels at least 1050px tall with aspect ratio from 4:3 (incl.) to 16:9 (excl.)
        if (y >= 1050 && x * 3 >= y * 4 && x * 9 < y * 16)
            Console.WriteLine ("{0}x{1}", x, y) ;
    }
    catch {}
}-->Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-29250599087598259992012-04-25T15:26:00.005+03:002012-04-25T15:26:35.497+03:00How to present to an executive<a href="http://blogs.technet.com/b/gray_knowlton/archive/2010/09/22/how-to-present-to-an-executive.aspx">Good article</a> by Gray Knowlton, hope it will come in useful some day.Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6966085462464595987.post-11084265849885020082012-04-24T19:23:00.000+03:002012-04-24T19:23:49.148+03:00Three excellent articles about the state of IT<p><a href="http://paulgraham.com/ambitious.html">Frighteningly Ambitious Startup Ideas</a> (Paul Graham)</p>
<p><a href="http://www.theatlantic.com/technology/print/2012/04/the-jig-is-up-time-to-get-past-facebook-and-invent-a-new-future/256046/">The Jig Is Up: Time to Get Past Facebook and Invent a New Future</a> (The Atlantic)</p>
<p><a href="http://www.businessweek.com/print/magazine/content/11_17/b4225060960537.htm">This Tech Bubble Is Different</a> (Bloomberg Business Week)</p>
<p>Choice quote: "The best minds of my generation are thinking about how to make people click ads," [Hammerbacher] says. "That sucks."</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-2603949607384073292012-03-26T13:44:00.000+03:002012-03-26T13:44:46.030+03:00Remedial startupology link collection<a href="http://www.payne.org/index.php/Startup_Equity_For_Employees">Startup Equity for Employees</a> (with <a href="http://news.ycombinator.com/item?id=1059020">comments at Y Combinator</a>)<br />
<a href="http://www.scribd.com/doc/55945011/An-Introduction-to-Stock-Options-for-the-Tech-Entrepreneur-or-Startup-Employee">An Introduction to Stock Options for the Tech Entrepreneur or Startup Employee</a><br />
<a href="http://onstartups.com/tabid/3339/bid/88/17-Pithy-Insights-for-Startup-Employees.aspx">17 Pithy Insights for Startup Employees</a><br />
<a href="http://blog.dinkevich.com/first-employee-of-startup-you-are-probably-getting-screwed/">First employee of startup ? You are probably getting screwed !</a><br />
<a href="http://cdixon.org/2009/08/27/the-one-number-you-should-know-about-your-equity-grant/">The one number you should know about your equity grant</a> (with comments delineating typical scenarios)<br />
<a href="http://www.salary.com/advice/layouthtmls/advl_display_nocat_Ser56_Par123.html">Option Grant Practices in High-Tech Companies</a><br />
<a href="http://capgenius.com/2011/04/24/foundervesting/">Consider Repurchase Rights for Founders Stock</a><br />
<a href="http://capgenius.com/2011/03/06/splitting-pie/">Splitting up the Pie: Considerations for setting initial equity ownership among founders</a><br />
<a href="http://stevendiebold.com/how-to-allocate-equity-in-a-startup/">How to Allocate Equity in a Startup?</a>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-79725655438499022512012-01-20T15:19:00.000+02:002012-01-20T15:19:41.568+02:00Russian programmersRussian programmers got a nice plug <a href="http://marginalrevolution.com/marginalrevolution/2012/01/why-are-some-programmers-paid-more-than-others.html#comment-157540565">over at Marginal Revolution</a> in the comment section:<blockquote>My experience with Russians are that they are the best hackers. They trust themselves, they are self-contained and they work with meager resources and really sweat the details. I’ve worked with a few and I would love for all of my code to be eyeballed by a Russian programmer who’d make it work in less memory. I like the fatalism built into Russian engineering, makes stuff robust.</blockquote>Being one, I heartily concur. Thanks!Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-20899914716319332892011-07-03T12:57:00.000+03:002011-07-03T12:57:09.626+03:00LINQ Expressions and Reflection.Emit: an uncomfortable union<p>The other day I needed a component for a project of mine, which would rewrite methods into state machines like the CTP C# compiler does for <code>async</code> methods. The rewriter would accept a callback to identify the sites where a continuation must be created, and another one to emit glue code for the site using two primitives: <code>SAVE_STATE()</code> which returns the continuation delegate and <code>RESTORE_STATE()</code> which returns the value passed to the continuation. This approach permits the users of the rewriter to avoid the overhead of saving and restoring state when the continuation turns out to be unnecessary (in <code>async</code> terms, when the awaited thing is already complete). The new CTP compiler implements this optimization. 
My component would eventually have to work with IL methods either via Reflection.Emit or Cecil, but I thought it would be interesting to make it work with LINQ Expressions first. Besides, the DLR has a similar rewriter for <code>yield</code> which I could scavenge for useful hints. The DLR rewriter uses a nested lambda to create the 'environment', shifting the work to the LINQ Expression compiler (EC). Probably because of permission issues, EC does not create new closure classes and instead uses a thinly veiled <code>object[]</code> to store closure locals. I did not want this, I wanted to generate a proper closure class. After all, EC <i>can</i> compile a lambda expression into a <code>MethodBuilder</code>!</p><p>Although I more-or-less made it work, I must report that LINQ Expressions don't work all that well with Reflection.Emit:<ol><li><p>A <code>LambdaExpression</code> cannot have parameters of unbaked type. EC appears to have no problems with this, but <code>Expression.CreateLambda</code> uses generic lambda factories and an unbaked type cannot serve as a type argument. One must have the lambda accept a suitable base class or <code>object</code>.</p></li><li><p>It is impossible to generate a call to any method which has not been 'baked' because LINQ Expression constructors insist on validating a method's parameters and call <code>MethodBase.GetParameters()</code>. This extends to object construction as <code>Expression.New</code> validates constructor parameters. These two problems are very obnoxious and make any serious use of LINQ Expressions with Reflection.Emit impossible.</p><p>As a side note, I find it puzzling and inconvenient that <code>MethodBuilder</code> does not expose its parameter list.</p></li><li><p>EC has no problems emitting constants referring to a <code>TypeBuilder</code> or a <code>MethodBuilder</code>, but one must tell <code>Expression.Constant</code> the correct types, viz. <code>Type</code> and <code>MethodInfo</code>. 
Otherwise EC happily emits casts to <code>TypeBuilder</code> which blow up at runtime. Just a minor gotcha, but still.</p></li><li><p>There is no <code>DelegateCreationExpression</code>. This is strictly speaking not related to Reflection.Emit, but it is annoying to have to generate a call to <code>CreateDelegate</code>, complete with casts and stuff, instead of a <code>ldftn</code>-<code>new</code> combo.</p></li><li><p>EC cannot compile lambda expressions into member methods even if the method signature is compatible. The target method has to be static, period.</p></li></ol></p><p>This was a useful exercise and working directly with IL should not be much more complicated.</p>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6966085462464595987.post-80363003516175243722011-03-16T13:02:00.000+02:002011-03-16T13:02:20.335+02:00Bakunin fulminates"Etatism and anarchy" (1873)<blockquote>Let us respect [social] scientists for their achievements, but for the sake of their reason and moral integrity we must not give them any privileges and only recognize their right, common to all, to freely preach their convictions, thoughts and knowledge. We should give power neither to them nor to anybody else, for whoever has power becomes, by an immutable law of sociology, an oppressor and exploiter of the society.</blockquote><blockquote>Woe unto mankind should theoretical speculation become the only source of guidance for society, should science alone take charge of all social administration. Life would wither, and human society would turn into a voiceless and servile herd. 
The domination of life by science can have no other result than the brutalization of mankind.</blockquote>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-32122915947471161182011-03-10T13:58:00.002+02:002011-03-22T16:43:28.241+02:00February link clearance<h4>One-paragraph</h4><p><a href="http://blogs.uslhc.us/some-tough-problems-to-solve"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> A white female LHC engineer bemoans the lack of black PhDs in physics in a rather over-the-top way — 'threatens the viability of scientific research'? Give me a break. But the statistic cannot be simply brushed aside. With all the Affirmative Action programs, powerful Equal Opportunities commissions etc. everyone who wants to go for a PhD in physics should have the opportunity, so what's up? Could it be that they just don't want it? Scientific research is boring much of the time, needs hard work and a particular mindset<a href="#" title="I know through having been a researcher in theoretical plasma physics for several years, and I don't have it">*</a>, and it does not pay any too well in either money or status, so why should they? Especially given all the Xyz Studies departments. Check out <a href="http://en.wikipedia.org/wiki/White_studies#Criticisms">White Studies</a>, by the way.</p><p><a href="http://www.bbc.co.uk/news/business-12347219"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> BBC article about robot carers in Japan contains very condescending remarks aimed at Japan's anti-immigration policy. 
It appears that many Anglophone Brahmins resent Japan for getting along tolerably well without mass immigration and want it to open up (this is noticeable even in Lonely Planet Japan), while <a href="http://www.tomnoir.com/2011/03/end-of-japan.html">condescension towards Japan</a> is a regular feature of both <a href="http://www.nytimes.com/2011/01/28/world/asia/28generation.html">articles</a> and <a href="http://community.nytimes.com/comments/www.nytimes.com/2011/01/28/world/asia/28generation.html?sort=oldest">comments</a>; sometimes these produce poisonous outbursts which then get moderated. Comment section to the above-mentioned article contains interesting discussion of job markets etc.</p><h4>One-liners</h4><p><a href="http://i.imgur.com/lO9OV.jpg"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Average female faces by country/region. Source: <a href="http://pmsol3.wordpress.com/">The Postnational Monitor</a>. Also <a href="http://pmsol3.wordpress.com/2009/10/10/world-of-facial-averages-east-southeast-asia-pacific-islander/#comment-7023">great mod text</a>.</p><p><a href="http://www.youtube.com/user/markthoma?feature=mhum"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Mark Thoma's lectures on economics theory, econometrics etc.</p><p><a href="http://economix.blogs.nytimes.com/2011/02/10/ship-of-knaves/"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Problems with bank executives' incentive structures — <i>the breakdown in corporate governance [...] is complete.</i></p><p><a href="http://en.wikipedia.org/wiki/The_Machine_Stops"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> E. M. Forster's 1909 (!) 
short story "The Machine Stops" — much of the internet bears an uncanny resemblance to the story's tele-lecturers, who spend lives 'growing spiritually' by exchanging tenth-hand regurgitated information.</p><p><a href="http://www.overcomingbias.com/2010/05/chase-your-reading.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> It is more efficient to read books pursuing some idea than expecting to get ideas, even if your original idea turns out to be worthless later it serves as a scaffold for acquired knowledge. Very good!</p><p><a href="http://www.nature.com/news/2011/110301/full/471020a.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> A very large epidemiological study of a 1946 British cohort.</p><p><a href="http://en.wikipedia.org/wiki/Enron_Corpus"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> The Enron Corpus — 600000 emails generated by 158 Enron employees, went public during the Enron trial. 
Used to train systems for e-discovery, communication analysis etc.</p><p><a href="http://logec.repec.org/scripts/paperstat.pf?h=repec:eee:jmacro:v:31:y:2009:i:1:p:173-190"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Journal of Macroeconomics article on economic growth in the Gilded Age US got cowened.</p><p><a href="http://crookedtimber.org/2011/01/18/the-end-game-for-the-euro-german-rules-and-bondholder-revolts/#more-18601"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Predictions on how Germans can't fix Eurozone's financial troubles — very realistic thinking and understanding of how European financial sector really works.</p><p><a href="http://www.theonion.com/articles/stephen-jay-gould-speaks-out-against-science-papar,266/"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> The Onion makes fun of Tyler Cowen's call for more status for scientists.</p><p><a href="http://www.gamasutra.com/view/feature/3627/mmo_class_design_up_with_hybrids_.php"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Excellent article about MMORPG design — class system, levels, balancing etc.</p><a href="http://online.wsj.com/article/SB10001424052748703584804576144192132144506.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> People who have trouble focusing attention may be more creative, especially those who have high IQ.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-13895640237609199372011-02-20T16:17:00.003+02:002011-02-24T21:31:59.117+02:00Book report: Ivan Ovsinsky, "A new system of agriculture"<h4>Овсинский И.Е. 
"Новая система земледелия" (Киев, 1899)</h4><h4><i>Ivan Ovsinsky "A new system of agriculture"</i></h4><blockquote><font size="+2">“</font>A careful consideration of the tilling and fertilization recipes leaves one amazed as to how illogical and expensive they are. Happily, a significant proportion of farmers is ignorant of Liebich's theory and continues farming the land in the same way their forefathers did. Otherwise only the lucky few able to afford hitching 3 pairs of oxen to a German plow and to sprinkle their fields with powders (fertilizer) would continue farming.</blockquote>Ovsinsky was an early proponent of no-till, no-fertilizer agriculture. His fields in the south of Ukraine amazed visitors with rye 10 feet high and wheat green and fresh in the middle of droughts, to say nothing of his yields (stable 4-5t/ha, double the average of that time). In this short book, he presented his system for the general public. His main ideas are as follows.<br />
<br />
1. Plants balance expenditures on seeds with expenditures on vegetative growth. To shift this balance towards seeds, the farmer has to apply moderate environmental pressure and force plants to struggle for existence. For grains, where pruning, pinching and suckering are not an option, the prescription is to sow more densely than usual, only avoiding clumping-up of seeds, and to put more space between rows. This makes plants compete and induces them to produce heavier seed to occupy this vacant space.<br />
<br />
2. Soil and air together contain massively more nutrients than are removed with the harvest. Healthy soil makes this available to plants naturally. Farmers are only forced to provide plants with easily soluble nutrients by applying artificial fertilizer because deep tilling destroys soil health. Ovsinsky quotes <a href="http://en.wikipedia.org/wiki/Pierre_Paul_Deh%C3%A9rain">Deherain</a> as saying, at the end of a listing of "horrible" quantities of mineral nutrients present in the soil, "This brings us to the untenable conclusion that fertilizers are useless and not necessary."<br />
<br />
Healthy soil has a porous top layer and a developed system of hollow channels (created both by the decay of old roots and by the action of worms) through which air and water circulate. Plants can also reuse these channels to grow deeper roots than would otherwise be the case. Simultaneously the soil retains its capillary properties. The top layer protects the soil from drying out and heating up. The temperature differential between air and subsoil helps condense moisture and dew, which also contain more nitrogen (as ammonia) than is removed with the harvest. Soil biochemistry uses abundant air and moisture to break down organic remains and oxidize ammonia to nitrate in the topsoil and mobilize phosphate and other minerals from rock particles in the subsoil.<br />
<br />
<i>Every</i> part and factor in this complex arrangement works together and is essential for the whole. Deep tilling, originally conceived to bring more nutrients to the surface, upsets all this completely. In particular, irrigation becomes necessary because compacted soil cannot capture and retain moisture properly. Remedies and modifications applied afterwards may restore one or two factors, but the lack of balance means that the results are neither robust nor inspiring. Deep tilling has the pernicious property, which seems common to primitive technological solutions to complex problems, that, once begun, more and more of it is required simply to keep up.<br />
<br />
Accordingly Ovsinsky forswears deep tilling (i.e. deeper than 2 inches, the thickness of the top layer) although he does not eliminate tilling entirely, using it for weed control. He adjusts topsoil porosity in the spring or even in autumn to make the spring sun heat up soil faster. He knows about "green manure", but apparently does not apply it as systematically as <a href="http://en.wikipedia.org/wiki/Masanobu_Fukuoka">Fukuoka</a> did. Although he mentions that crops suppress the vegetation of weeds once they are mature enough, he does not use clover and mulch for weed control, relying instead on shallow tilling. Neither does he appear to think in terms of sustainable land use — small wonder given that he writes at the turn of the XX century.<br />
<br />
Ovsinsky, even if his methods are not always consistent, represents a large step in the right direction. Modern farming is carried out by the 'lucky few', and this has been a great boon for all other sectors of the economy. Ovsinsky shows us that we don't have to lay waste to our soil for it.<br />
<br />
<small><a href="http://www.zemledelets.com/zemledelie/index.html">Russian text</a> is available online free of charge. 90pp.<br />
HT: <a href="http://www.novayagazeta.ru/data/2011/018/10.html">Novaya Gazeta</a></small>Unknownnoreply@blogger.com1tag:blogger.com,1999:blog-6966085462464595987.post-23136198071342683992011-02-19T09:22:00.006+02:002011-02-19T11:27:23.823+02:00January link clearance<h4>One-paragraph</h4><p><a href="http://www.marginalrevolution.com/marginalrevolution/2011/01/the-ethics-of-economics.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> How can Ed Glaeser write about 'treating the seller of kidneys with respect' and 'capable of choosing for himself or herself even in difficult circumstances' 150 years after Marx explained all about economic coercion? Would he treat the choice of <a href="http://www.attackingthedevil.co.uk/pmg/tribute/mt2.php">this girl (scroll to end)</a> with respect, too? It is all very well to value any contract when the valuer has never had to 'enter into an inequitable arrangement out of fear of starvation, or economic ruin', or at least if the inequitableness always stayed small enough. Also check out Eric's comment: <i>The point is that while economics may be in theory non-normative, it often doesn't stay that way in practice.</i><br />
Two asides:<ol><li>Relying on charity, including government charity, does not count as an out from economic coercion if the charity is the Dickensian kind, <a href="http://www.theatlantic.com/business/archive/2007/10/the-poor-are-not-children-except-for-the-ones-who-are-actual-children/2180/">as is very often the case</a>.</li>
<li>In contrast to XVII-XIXc. western Europe and Britain, economic coercion was much less acute in America because of easy availability of land to farm, prairie to ranch and forest to fell. This might have blinded American economists to economic coercion.</li>
</ol></p><p><a href="http://arxiv.org/abs/0704.2291"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Why astronomers should not get sucked too much into Dark Energy-related projects:<ol><li>won't advance astrophysics on a broad front, instruments not likely to be useful for much else (experiment vs observatory);</li>
<li>large collaboration culture of fundamental physics experiments will scare away young talent looking to make an original contribution;</li>
<li>negative impact on astronomy's image as "ambassador of physics" because subject too abstract and removed from everyday experience.</li>
</ol>Interesting statistics on bibliographical changes in astrophysics papers over 30 years: citations/article x4, authors/article x2, partly because of 'the use of citations as measure of performance'. Note: the dynamic is similar to the degradation of programmer performance measures, only the timescale is much longer.</p><h4>One-liners</h4><p><a href="http://www.leastprivilege.com/HttpListenerAuthenticationAndASPNET.aspx"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> How to set up authentication in <code>HttpListener</code></p><p><a href="http://blogs.msdn.com/b/junfeng/archive/2006/03/28/563627.aspx"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Exact requirements for an <code>IAsyncResult</code> implementation — could this be expressed in code?</p><p><a href="http://reznichenko-d.livejournal.com/197941.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> What Korotich, the editor of the Soviet "Ogonyok", wrote about America before and after perestroika (rus)</p><p><a href="http://leprastuff.ru/data/img/20110125/78044355f26c6e87ac19a695cd03de10.jpg"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> iPad imagined in 1988, but comes 10 years late (rus)</p><p><a href="http://www.novayagazeta.ru/data/2011/004/29.html"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Steve Jobs vs Bill Gates — hippie vs serious boy (rus)</p><p><a href="http://www.bbc.co.uk/news/world-europe-12213195"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> French gentleman's mansion, locked for 100 years, opens as a museum</p><p><a 
href="http://news.bbc.co.uk/2/hi/programmes/hardtalk/9372832.stm"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Swedish anti-immigration party politician tries hard to avoid saying that Muslim immigrants are more prone to commit crimes</p><p><a href="http://dx.doi.org/10.1016/j.intell.2010.12.002"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> IQ negatively associated with criminality at individual level (review) and at county level. Association "not confounded by a measure of concentrated disadvantage that captures the effects of race, poverty, and other social disadvantages of the county."</p><p><a href="http://dx.doi.org/10.1016/j.intell.2010.11.003"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> IQ positively associated with attractiveness (full text), attractiveness measure is binary but appears legit</p><p><a href="http://www.ncbi.nlm.nih.gov/pubmed/21169524"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> GxE in cognitive ability development (!)</p><p><a href="http://www.pnas.org/content/early/2010/12/08/1012046108.abstract" style="border:0;padding:0"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> People believe they have more free will and are more in control of their actions than others</p><p><a href="http://blogs.denverpost.com/captured/2011/02/07/captured-the-ruins-of-detroit/2672/"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Ruins of Detroit — check out the clueless comments</p><p><a href="http://www.breitbart.com/article.php?id=D9L2CPTG0&show_article=1"><img 
src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> Kan's speech at Davos — Japan's society is probably less atomized than any Western one, but listen to him talk about strengthening social bonds.</p><p><a href="http://cheeptalk.wordpress.com/2011/01/28/cutting-off-communications-in-egypt"><img src="http://bits.wikimedia.org/skins-1.5/vector/images/external-link-ltr-icon.png" style="border:0;padding:0"></a> This has to be part of the theory of the modern state: a government cutting off communications from fear of public protests produces a strong and <i>public</i> signal that makes protests more likely to occur and to succeed by overcoming protesters' communication problems for them at a stroke.</p>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-42194418798577168132011-01-01T03:49:00.004+02:002011-01-02T04:07:18.856+02:00How I wasted New Year's EveApparently I am blessed with the kind of brain susceptible to <a href="http://xkcd.com/356/">nerd sniping</a> (by the way, the answer to that problem is $4/\pi-1/2$; see <a href="http://arxiv.org/abs/cond-mat/9909120">[Cserti]</a>, an elegant approach using discrete Green functions). On New Year's Eve, an ill-considered click brought this problem before my eyes:<br />
<blockquote style="background-color:#EEE;padding:5pt;">Both ends of a thin flexible rod are joined to the same pivot. What is the angle between the rod's ends?</blockquote>I had not used much of the mathematical technique I learned at university for ten years. I don't like writing. I have other, arguably more important things to do. Oh well.<br />
<br />
This problem seems made for a demonstration of variational calculus. Accordingly, I need an expression for the potential energy of a bent rod to vary. The rod being thin, I can apply simple bending theory, which says that the flexural energy of a piece of rod is proportional to the square of its curvature. The shape the rod makes does not depend on the rod's material or length as long as the assumptions of simple bending theory hold. Also, the shape will obviously be symmetrical about the line bisecting my angle of interest (I will make this line my $y$ axis). A final observation is that the rod's curvature at the pivot is zero, because the pivot rotates freely and nonzero curvature there would imply a bending moment. At this point physics ends and mathematics, as applied by physicists, begins.<br />
<br />
Let $\theta$ be the angle between the tangent to the curve and the $x$ axis, and $s$ the natural parameter (i.e. arc length). Using Frenet's formula $\dot{\bf{t}}=\kappa\bf{n}$ ($\bf{t}$ the tangent unit vector, $\bf{n}$ the normal unit vector, $\kappa$ the curvature), I obtain that $\kappa=\dot{\theta}$, so the main term to vary is $\dot\theta^2$. In addition, I have to constrain the curve's ends to meet; the curve being symmetrical about the $y$ axis, I worry only about the separation of the curve's ends along the $x$ axis: $\int\cos\theta\,ds=0$. Accordingly, I shall vary $$\tag{*}\int_{-1}^{1}\left(\dot\phi^2+\tfrac12\mu^2\cos2\phi\right)ds$$ where I have introduced $\phi=\theta/2$, arbitrarily made the curve's length equal $2$ and where $\mu$ is the Lagrange multiplier. Equating the variation of (*) to zero yields the ODE for $\phi$: $$\dot\phi^2=\mu^2k^{-2}(1-k^2\sin^2\phi)$$ with $k$ a constant of integration. The curve's symmetry and my choice of axes give me the boundary condition $\phi(0)=0$, and I denote $\phi(-1)=\alpha$. The solution to this ODE is given by the elliptic integral of the first kind: $\mu s=k\,\mathrm{F}(\phi,k)$. Since the curvature at $s=\pm1$ is zero, $k=1/\sin\alpha$. Now I bring into play the end-meeting constraint: $$0=\int_{-1}^{1}\cos2\phi\,ds=(k/\mu)\int_{-\mu/k}^{\mu/k}\left(2\,\mathrm{cn}^2u-1\right)du,$$ whence <a href="http://www.amazon.com/dp/0122947576">G&R</a> <tt>5.134.2</tt> produces $$\mathrm{E}(\alpha,k)=(1-\tfrac12k^2)\,\mathrm{F}(\alpha,k).$$ At this point I have enough equations to solve for $\alpha$ and need not bother about $\mu$ — not surprising considering that $\mu$ is really a dimensional quantity which arises because I fixed the length of the curve. The solution is implicitly determined by $$2\sin^2\alpha\,\mathrm{E}(\alpha,1/\sin\alpha)=(2\sin^2\alpha-1)\,\mathrm{F}(\alpha,1/\sin\alpha).$$ Mathematica fails to find $\alpha$ numerically from this equation, however, so I solved the two equations together to find $\alpha\approx65°21'$. 
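As a cross-check (mine, not part of the original derivation), the implicit equation for $\alpha$ can also be solved numerically by evaluating the incomplete elliptic integrals as plain quadratures; the helper names <code>F</code>, <code>E</code>, <code>g</code> and the root bracket below are ad hoc choices, a sketch rather than anything canonical:

```python
# Numeric check of  2 sin^2(a) E(a, 1/sin a) = (2 sin^2(a) - 1) F(a, 1/sin a).
# The incomplete elliptic integrals are computed as direct quadratures, because
# scipy's ellipkinc/ellipeinc take the parameter m = k^2, which here exceeds one.
from math import sin, sqrt, pi, degrees
from scipy.integrate import quad
from scipy.optimize import brentq

def F(a):
    # incomplete elliptic integral of the first kind, modulus k = 1/sin(a);
    # the integrand has an integrable 1/sqrt singularity at t = a
    return quad(lambda t: 1.0 / sqrt(1.0 - (sin(t) / sin(a)) ** 2), 0.0, a, limit=200)[0]

def E(a):
    # incomplete elliptic integral of the second kind, same modulus
    return quad(lambda t: sqrt(1.0 - (sin(t) / sin(a)) ** 2), 0.0, a, limit=200)[0]

def g(a):
    # the implicit equation, rearranged as g(a) = 0
    s2 = sin(a) ** 2
    return 2.0 * s2 * E(a) - (2.0 * s2 - 1.0) * F(a)

alpha = brentq(g, 0.9, 1.3)        # bracket chosen around the expected root
angle = degrees(4.0 * alpha - pi)  # the angle between the rod's ends
print(degrees(alpha), angle)       # alpha ≈ 65.35°, angle ≈ 81.4°
```

The recovered root reproduces the figures quoted in the text: $\alpha\approx65°21'$ and $4\alpha-\pi\approx81°25'$.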
$\alpha$ being one half of the tangent angle to one end of the rod, the angle between the rod's ends is $4\alpha-\pi$, approximately $81°25'$. ■Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-40299357461481004272010-12-15T18:43:00.000+02:002010-12-15T18:43:43.848+02:00MSBuild goodies1. <a href="http://msdn.microsoft.com/en-us/library/dd722601.aspx">Inline tasks</a>: no more creating and managing a separate assembly just to execute a piece of non-trivial code during build process.<br />
2. <a href="http://blogs.msdn.com/b/visualstudio/archive/2010/04/02/msbuild-property-functions.aspx">Property functions</a>: if the piece of code is merely a couple of method calls, it can go anywhere <code>$()</code> can go, with the <code>$([Class]::Method())</code> syntax. There are all kinds of limitations and the list of officially whitelisted methods is rather short — e.g. it does not include <code>AssemblyName.GetAssemblyName</code> — but whitelist checks can be disabled by setting the environment variable <code>MSBUILDENABLEALLPROPERTYFUNCTIONS</code> to 1.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-41319279741705999592010-11-10T11:29:00.000+02:002010-11-10T11:29:11.140+02:00Bootstrapper package for Windows Imaging ComponentDevelopers who target .NET 4 and choose to create a proper bootstrapper for their installer have been bitten repeatedly by installation failures on Windows XP SP2 and Windows Server 2003. .NET 4 fails to install on these admittedly outdated, but still widespread OSes because it depends on the <a href="http://support.microsoft.com/kb/947898">Windows Imaging Component</a>, which appears in XP SP3, Vista, 7 and 2008 Server and is installed with .NET 3.5 SP1. With these newer OSes increasingly in the majority, Microsoft <a href="http://www.hanselman.com/blog/TowardsASmallerNET4DetailsOnTheClientProfileAndDownloadingNET.aspx">decided</a> to leave WIC out of the .NET 4 installer in order to reduce the download size, a laudable intention. Since they actually <a href="http://msdn.microsoft.com/en-us/library/ee942965.aspx" title=".NET Framework Deployment Guide for Developers">documented this</a>, there is no cause to complain; but why not also provide a bootstrapper package for WIC in the Windows SDK, now that it includes packages for .NET 4? Developers usually direct the bootstrapper to download Microsoft files from the home site, so setup size would not increase appreciably.<br />
Anyway. I created a <a href="http://pastebin.com/FfkW7TUN">WIC bootstrapper package</a> and tested it on XP SP2, XP SP3 and 7. I also added a dependency on WIC to the .NET 4 packages. No warranty; use at your own risk, but comments are welcome!Unknownnoreply@blogger.com4tag:blogger.com,1999:blog-6966085462464595987.post-48317493825824362702010-06-24T06:50:00.014+03:002010-11-10T12:09:15.503+02:00How to use SSL3 instead of TLS in a particular HttpWebRequestMy application has to talk to different hosts over https, and the default setting of <code>ServicePointManager.SecurityProtocol = TLS</code> served me well. <a href="http://stackoverflow.com/questions/3028486">The other day</a>, though, I had some NetWare hosts which (as the <code>System.Net</code> trace log shows) don't answer the initial TLS handshake message but keep the underlying connection open until it times out, throwing a timeout exception. It seems that NetWare's policy regarding unrecognized/invalid requests is not to respond or give any error messages, presumably to reduce attack surface. Very understandable, but this behaviour does not give .NET's built-in TLS-to-SSL3 fallback mechanism a chance to kick in.<br />
I really didn't want to have to degrade the security protocol setting to SSL3 in the whole application for the sake of a few musty Netware hosts, but this <code>ServicePointManager</code> setting is global and there is no way to force a downgrade through <code>HttpWebRequest</code>. Luckily, 'global' has more than one meaning in the .NET world; <code>ServicePointManager</code> settings are actually per-appdomain. This enabled me to work around the problem by creating a separate appdomain set up to use only SSL3, making my data collection object <code>MarshalByRefObject</code> (<code>WebClient</code> and <code>WebRequest</code> are marshal-by-ref too, but better to reduce the number of cross-appdomain calls and avoid marshaling anything more complicated than a string) and creating it there. Worked perfectly combined with a timeout-based detection scheme.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-89352509971144510282009-08-28T12:43:00.016+03:002010-11-10T10:01:39.834+02:00Having your InternalPreserveStackTrace and eating itThis post stems from <a href="http://blogs.msdn.com/clrteam/archive/2009/08/25/the-good-and-the-bad-of-exception-filters.aspx">a discussion</a> of stack trace problems at the <a href="http://blogs.msdn.com/clrteam/default.aspx">CLR team blog</a>.<br /><h2>The problem</h2>When an existing exception is thrown in the normal way with <code>throw e</code>, any stack trace that was recorded in it is overwritten and destroyed. This complicates debugging and logging — the stack trace seen by a top-level handler (which, in a long-running application, must log it and somehow restore the application to operation) is practically useless. Throwing existing exceptions — ones which were previously caught and stored or serialized — is a necessity when doing custom cross-thread invoke, e.g. a custom thread pool. 
Custom remote call solutions also suffer from this problem.<br /><h2>The known <strike>hack</strike>solution</h2>Microsoft's Remoting team encountered the same problem, but they had the advantage of being able to modify the CLR. They introduced the internal <code>Exception._remoteStackTraceString</code> field, which is not overwritten by CLR when an exception is thrown. <code>Exception.StackTrace</code> prepends the contents of this field to the normal stack trace. They also introduced two internal methods on <code>Exception</code>, <code>PrepForRemoting</code> and <code>InternalPreserveStackTrace</code>, which squirrel away the existing stack trace into this field. However, all these members are internal, so they cannot be reliably called by third-party code with similar needs.<br />It seems that <a href="http://web.archive.org/web/20071215093134/dotnetjunkies.com/WebLog/chris.taylor/default.aspx">Chris Taylor</a> was the first to discover these internal members. He <a href="http://web.archive.org/web/20080106084602/http://dotnetjunkies.com/WebLog/chris.taylor/archive/2004/03/03/8353.aspx">published</a> a hack which preserves stack trace in an exception by accessing <code>_remoteStackTraceString</code> with Reflection. A <a href="http://weblogs.asp.net/fmarguerie/archive/2008/01/02/rethrowing-exceptions-and-preserving-the-full-call-stack-trace.aspx">more mature version</a> of this hack by <a href="http://weblogs.asp.net/fmarguerie">Fabrice Marguerie</a> calls <code>InternalPreserveStackTrace</code> (again using Reflection). Later, <a href="http://bradwilson.typepad.com/blog/">Brad Wilson</a> <a href="http://bradwilson.typepad.com/blog/2008/04/small-decisions.html">ranted</a> on this subject. 
Brad also mentions that the Reflection team did not use stack trace preservation, but instead introduced the pesky <code>TargetInvocationException</code> (which most everyone has to unwrap and throw the inner exception ASAP to propagate the original exception).<br /><h2>Back to the present</h2>When I mentioned this hack in the discussion at the CLR team blog, CLR team's Mike Magruder pointed out its essential brittleness/hackiness. Mike is, of course, right; I am sure no-one who uses this hack is happy about messing with <code>mscorlib</code>'s internals; but the problem has to be dealt with. Mike's criticism prodded me into looking for a more portable solution.<br /><h2>It</h2>My solution exploits the fact that cross-AppDomain calls need to preserve stack traces of exceptions propagating across the AppDomain boundary. Cross-AppDomain calls seem to use the serialization infrastructure to get non-trivial data across, so when <code>Exception</code>'s <code>SetObjectData</code> constructor sees the <code>CrossAppDomain</code> flag in the supplied <code>SerializationContext</code>, it prepares the exception for subsequent throwing — by setting the crucial <code>_remoteStackTraceString</code> field in essentially the same way as <code>InternalPreserveStackTrace</code>, although <code>SetObjectData</code> forgets to insert a newline after the old stack trace. It remains, then, to call an exception's <code>GetObjectData</code> and <code>SetObjectData</code>, tricking it into believing that it is being serialized across the AppDomain boundary.<br />The primitive version of my solution relied on <code>BinaryFormatter</code> to do the heavy lifting:<br /><pre style="overflow:auto;line-height:normal;font-size:75%;">
<font color="#00f">static</font> <font color="#49b">Exception</font> WithPreservedStackTrace (<font color="#49b">Exception</font> e)
{
<font color="#00f">var</font> context = <font color="#00f">new</font> <font color="#49b">StreamingContext</font> (<font color="#49b">StreamingContextStates</font>.CrossAppDomain) ;
<font color="#00f">var</font> formatter = <font color="#00f">new</font> <font color="#49b">BinaryFormatter</font> (<font color="#00f">null</font>, context) ;
formatter.FilterLevel = <font color="#49b">TypeFilterLevel</font>.Full ;
<font color="#00f">using</font> (<font color="#00f">var</font> stream = <font color="#00f">new</font> <font color="#49b">MemoryStream</font> ())
{
formatter.Serialize (stream, e) ;
stream.Position = 0 ; <font color="#080">// rewind stream</font>
<font color="#00f">return</font> (<font color="#49b">Exception</font>) formatter.Deserialize (stream) ;
}
}
</pre>This works like a charm, but all the unnecessary extra work done by <code>BinaryFormatter</code> galled me, so I poked around RedBits code some more and evolved the following version, which uses the arcane <code>ObjectManager</code> class:<br /><pre style="overflow:auto;line-height:normal;font-size:75%;">
<font color="#00f">static void</font> PreserveStackTrace (<font color="#49b">Exception</font> e)
{
<font color="#00f">var</font> context = <font color="#00f">new</font> <font color="#49b">StreamingContext</font> (<font color="#49b">StreamingContextStates</font>.CrossAppDomain) ;
<font color="#00f">var</font> manager = <font color="#00f">new</font> <font color="#49b">ObjectManager</font> (<font color="#00f">null</font>, context) ;
<font color="#00f">var</font> serinfo = <font color="#00f">new</font> <font color="#49b">SerializationInfo</font> (e.GetType (), <font color="#00f">new</font> <font color="#49b">FormatterConverter</font> ()) ;
e.GetObjectData (serinfo, context) ;
manager.RegisterObject (e, 1, serinfo) ; <font color="#080">// prepare for SetObjectData</font>
manager.DoFixups () ; <font color="#080">// ObjectManager calls SetObjectData for us</font>
<font color="#080">// voila, e is unmodified save for _remoteStackTraceString</font>
}
</pre>This still wastes a lot of cycles compared to <code>InternalPreserveStackTrace</code>, but has the advantage of relying only on public functionality. Purists who really want to avoid calling <code>InternalPreserveStackTrace</code> can use this workaround :3
<br/><br/><b>Update:</b> usage samples which <a href="http://stackoverflow.com/questions/57383/in-c-how-can-i-rethrow-innerexception-without-losing-stack-trace">I posted</a> on StackOverflow:
<pre style="overflow:auto;line-height:normal;font-size:75%;">
<font color="#080">// usage (A): cross-thread invoke, messaging, custom task schedulers etc.</font>
<font color="#00f">catch</font> (<font color="#49b">Exception</font> e)
{
PreserveStackTrace (e) ;
<font color="#080">// store exception to be re-thrown later,
// possibly in a different thread</font>
operationResult.Exception = e ;
}
<font color="#080">// usage (B): after calling MethodInfo.Invoke() and the like</font>
<font color="#00f">catch</font> (<font color="#49b">TargetInvocationException</font> tiex)
{
PreserveStackTrace (tiex.InnerException) ;
<font color="#080">// unwrap TargetInvocationException, so that typed catch clauses
// in library/3rd-party code can work correctly;
// new stack trace is appended to existing one</font>
<font color="#00f">throw</font> tiex.InnerException ;
}
</pre>Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-11085681634556837312008-12-15T00:12:00.011+02:002010-11-10T10:10:09.474+02:00Dynamic method dropDynamic methods drove me crazy for the last two days. A completely innocent-looking, verified IL which created delegates from dynamic methods sometimes blew up with all kinds of exceptions: null-reference exceptions in weird places, stack overflows (or hang-ups if not running under Developer Studio) and the enigmatic <code>FatalExecutionEngineError</code>. Other times the code executed without a problem. It was difficult to establish any pattern in these failures. windbg+SoS revealed nothing beyond the fact that the IL was being generated correctly, and excavations of RedBits/Rotor code did not help much either. I went over the whole mess in my mind while listening to a jazz performance, and realized that the flaw was in my delegate-creation IL code, which went like this:<br /><br /><i>push target object</i><br /><code>ldftn </code><i>dynamic method</i><br /><code>newobj Procedure..ctor(object, native int)</code><br /><i>store new delegate</i><br /><br />This code creates a working delegate, but the delegate's internal <code>_methodBase</code> field is initially null. The bald function pointer does not work as a GC reference, and if there are no other references to the dynamic method, it is liable to be garbage-collected, and the delegate containing the stale function pointer naturally but silently becomes a nest of nasal demons.<br /><br /><code>Delegate.CreateDelegate</code> overloads don't fill <code>_methodBase</code> either. <code>DynamicMethod.CreateDelegate</code> does fill it explicitly, but it is impossible to get the <code>DynamicMethod</code> object from its token without calling mscorlib's internal methods. 
I understand the security reasons behind this decision, but it's damned uncomfortable.<br /><br />The only solution to the problem I have found so far is to call the new delegate's <code>get_Method()</code> function to fill <code>_methodBase</code> and establish a GC-visible reference to the dynamic method. Whew!
<br/><br/><b>Update:</b> <a href="https://connect.microsoft.com/VisualStudio/feedback/details/389514/delegate-to-a-dynamic-method-created-from-another-dynamic-method-does-not-gc-reference-the-target-method">reported this issue to Microsoft</a>; they decided to close it as a 'known limitation'. Well, it <i>is</i> known — now :3Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-15353369201544880132008-02-24T16:22:00.002+02:002008-02-24T16:23:58.382+02:00Detroit Public School Repository<a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRsm_1270eiMZyDX7wH8Jx_7inrPLbP-PU_7RSDGx3Bobx0xizQ7m3N9f-jSAn4nhotZ7j58ICaOi1lCRhbFfgUBoh9Tm9eCM_zgeLtarFQ6WnnJHIirXIcBLgTAwOu0uxUj-ro-FF7USU/s1600-h/2053074686_28cb88f919_m.jpg"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiRsm_1270eiMZyDX7wH8Jx_7inrPLbP-PU_7RSDGx3Bobx0xizQ7m3N9f-jSAn4nhotZ7j58ICaOi1lCRhbFfgUBoh9Tm9eCM_zgeLtarFQ6WnnJHIirXIcBLgTAwOu0uxUj-ro-FF7USU/s400/2053074686_28cb88f919_m.jpg" alt="" id="BLOGGER_PHOTO_ID_5170552128059851170" border="0" /></a><a href="http://www.flickr.com/photos/sweetjuniper/sets/72157603302647339/">Superb photos</a>, somewhat reminiscent of Prypiat schools and kindergartens.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-26219065180589930822008-01-24T01:47:00.000+02:002008-01-29T10:13:14.578+02:00WM_WINDOWPOSCHANGED.NETA list view control is handy, but sometimes it needs a little black magic to work properly.<br /><br /><code>SetStyle (ControlStyles.OptimizedDoubleBuffer, true) ;</code><br /><br />in the constructor takes care of the general flicker problem (in Win32, you need to request 6.0 common controls with a manifest and set the <code>LVS_EX_DOUBLEBUFFER</code> style). It seems that suppressing background painting in addition to double-buffer is overkill. 
Message sniffing and disassembly of <code>comctl32.dll</code> indicates that the background processing is different in 6.0 common controls with «double-buffer» vs. earlier versions. I quote «double-buffer», because I am not sure how much double-buffering is actually involved.<br /><br />However, if you use the list view in report mode and want to have your columns adjust to the list view's width, you need more magic. The simple solution — setting column width(s) in <code>OnResize</code> — leads to nasty horizontal scrollbar glitching when you shrink the list view. Shuffling the column-resizing code around the various resizing handlers did no good whatsoever. A couple of hours of tedious debugging uncovered the proximate source of this glitch: after the list view gets resized and receives the <code>WM_WINDOWPOSCHANGED</code> message, it passes the message to the list view's original window procedure in <code>comctl32.dll</code>, which faithfully paints the offending horizontal scrollbar, because the columns are now too wide and the <code>OnResize</code> handler wasn't called yet. After this, another, nested <code>WM_WINDOWPOSCHANGED</code> occurs, and this one finally calls <code>UpdateBounds</code>, which calls the <code>OnResize</code> handler. And then the handler of the original <code>WM_WINDOWPOSCHANGED</code> calls <code>UpdateBounds</code>, and thus your handler, again! Why does it work this way? It is beyond me. Conclusion: it seems to be impossible to resize columns properly in response to any resize event.<br /><br />The remedy is simple enough: run the column-resizing code before the list view is resized if the columns need to be narrower, and after it is resized if the columns need to be wider. 
The place to do this is the <code>SetBoundsCore</code> function, which is, luckily, virtual:<br /><pre>protected override void SetBoundsCore (int x, int y, int width, int height, BoundsSpecified specified)<br />{<br /> int newClientWidth = EstimateNewClientWidth (width, height) ;<br /> if (newClientWidth < ClientSize.Width)<br /> {<br /> // Reduce column width first to prevent horizontal scrollbar flicker.<br /> FixColumnWidths (newClientWidth) ;<br /> base.SetBoundsCore (x, y, width, height, specified) ;<br /> }<br /> else<br /> {<br /> base.SetBoundsCore (x, y, width, height, specified) ;<br /> FixColumnWidths (ClientSize.Width) ; // the new client width!<br /> }<br />}<br /></pre><br />The remaining tricks are in the <code>EstimateNewClientWidth</code> function, which has to guess whether there was a scrollbar before the resize and whether there will be one after it:<br /><pre>private int EstimateNewClientWidth (int width, int height)<br />{<br /> if (Items.Count == 0) return ClientSize.Width ;<br /><br /> // Estimate the new client size from the old one and the window size change.<br /> int oldcx = ClientSize.Width ;<br /> int oldcy = ClientSize.Height ;<br /> int newcx = oldcx + width - Bounds.Width ;<br /> int newcy = oldcy + height - Bounds.Height ;<br /> int hdrcy = GetHeaderHeight () ;<br /><br /> Rectangle rect0 = GetItemRect (0, ItemBoundsPortion.Entire) ;<br /> Rectangle rectN = GetItemRect (Items.Count - 1, ItemBoundsPortion.Entire) ; <br /><br /> bool needScrollBar = rect0.Top < hdrcy || rectN.Bottom > newcy ;<br /> bool haveScrollBar = rect0.Top < hdrcy || rectN.Bottom > oldcy ;<br /><br /> // Correct the new client size for vertical scrollbar changes.<br /> if (haveScrollBar)<br /> {<br /> if (!needScrollBar) newcx += SystemInformation.VerticalScrollBarWidth ;<br /> }<br /> else<br /> {<br /> if ( needScrollBar) newcx -= SystemInformation.VerticalScrollBarWidth ;<br /> }<br /><br /> return newcx ;<br />}<br /></pre><br />I found no pure <code>.NET</code>
method to get the current height of the list view's header control, so I have to resort to <code>P/Invoke</code>:<br /><pre>[StructLayout (LayoutKind.Sequential)]<br />struct RECT<br />{<br /> public int left ;<br /> public int top ;<br /> public int right ;<br /> public int bottom ;<br />}<br /><br />[DllImport ("user32.dll", EntryPoint = "GetWindowRect")]<br />static extern int GetWindowRect (IntPtr hWnd, ref RECT rect) ;<br /><br />[DllImport ("user32.dll")]<br />static extern IntPtr SendMessageW (IntPtr hWnd, int message, IntPtr wParam, IntPtr lParam) ;<br /><br />private int GetHeaderHeight ()<br />{<br /> RECT rect = new RECT () ;<br /> GetWindowRect (SendMessageW (Handle, 0x101F /*LVM_GETHEADER*/, (IntPtr)0, (IntPtr)0), ref rect) ;<br /><br /> return PointToClient (new System.Drawing.Point (rect.left, rect.bottom)).Y ;<br />}<br /></pre><br />And that's it for the .NET case ^.^<br /><br />In <code>Win32</code>, there is a more elegant solution for <code>EstimateNewClientWidth</code>, which does not work under <code>.NET</code>: use <code>SetWindowPos</code> with <code>SWP_NOREDRAW</code> to do a fake resize on the list view, measure its new client width, and fake-resize it back. And instead of the <code>SetBoundsCore</code>, the column-resizing code goes into the parent window's <code>WM_WINDOWPOSCHANGED</code> handler, which is where you resize the list view anyway.Unknownnoreply@blogger.com0tag:blogger.com,1999:blog-6966085462464595987.post-19899120579566338632008-01-09T22:25:00.000+02:002008-01-10T01:06:40.888+02:00"ee"In 1947, Orwell wrote in «The English People» that the English language gravitates towards simpler grammar and syntax. If this trend continues, he wrote, English would have more in common with isolating languages of East Asia than with Indo-European languages. I will write about one of the steps in this direction, and have some fun in the process.<br /><br />The other day I was reading something somewhere, and bam! 
the word <i>attackee <small>(800)</small></i> leapt off the page (NB: where available, I give Google hit counts, excluding obvious misspellings, in parenthesized small numerals). I recalled that in a contract I had recently translated there was a <i>payee <small>(1,3M)</small></i>, but it did not strike me then as anything unusual. After all, programmers commonly use <i>callee <small>(61K)</small></i> and sometimes <i>pointee <small>(36K)</small></i>, personifying functions and objects. But <i>attackee</i> somehow seems a peculiarly ugly word, its sole raison d'être being that «X being attacked» and «X under attack» are too long while «defender» has a somewhat different meaning. <i>attackee</i> tells us that the <i>-ee</i> suffix, having originally appeared in the French loanword <i>employee <small>(138M)</small></i>, has firmly established itself as an independent lexical unit. If <i>attackee</i>, I thought, why not more? Let's have more fun with <i>-ee</i>! My sister and I almost laughed our backsides off today thinking up all sorts of <i>-ee</i> words and Googling them.<br /><br /><b>Fun I</b><br /><span style="margin: 0pt 0pt 20px 20px; float: right;"><br /><img src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhmSFAxoXdKcmhbBmDMBkM1qzPVFKTP19V7mqi6Cpu_X8TBXG3hpAaiNgXXjqVd4wmJw2uKsx8ZGTbLjmQqebaVmdnvXVwQ1vIxNoMQ65it4bdVujmPPIKGMkrfYsqLqF6GC8Ax7VltJetJ/s400/Untitled-1.jpg" alt="" id="BLOGGER_PHOTO_ID_5153611628499461410" border="0" /><br /><br /><center><b>Picrelated</b>: the <i>huntee</i> is not visible at all.</center><br /></span><br />Violent transitive verbs seem to eeify particularly well. 
Some examples: programmers without respect for language sometimes use <i><a href="http://www.toontowncentral.com/forums/showthread.php?t=65043">trapee</a></i>; <i>sandbaggee <small>(3)</small></i> is at least funny; <i>kickee</i>, <i>rapee</i> (the Urban Dictionary <a href="http://www.urbandictionary.com/define.php?term=rapee">has it</a>) and <i>killee</i> are too contaminated with accidental foreign matches to give accurate hit counts; <i>murderee <small>(6K)</small></i> featured in the Independent (<a href="http://findarticles.com/p/articles/mi_qn4159/is_20070819/ai_n19473778">I'm not kidding you</a>!); then there's <i>fuckee <small>(21K)</small></i> with its gross and <a href="http://wordlust.blogspot.com/2005_09_01_archive.html">sometimes misspelt retinue</a>; and the king of them all, <i>pwnee <small>(3K)</small></i> — I'll be using this one! «pwnee detected» might even have a chance on the various <a href="http://en.wikipedia.org/wiki/2ch">chans</a>.<br /><br /><b>Fun II</b><br /><br />Besides the venerable <i>employee</i>, economists have <i>payee <small>(1,3M)</small></i> and <i>mortgagee <small>(1M)</small></i> and such gems as <i>buyee <small>(10K)</small></i> and <i>bankruptee <small>(1K)</small></i>, while lawyers hold their own with the likes of <i>contractee <small>(73K)</small></i>, <i>insuree <small>(8K)</small></i>, <i>harassee <small>(4K)</small></i> and <i>slanderee <small>(70)</small></i>. <i>creditee</i> is too confused with a declension of the French crediter to ascertain its usage; other <i>-ee</i> words suffer from French grammar as well. In the political sphere there is <i>electee <small>(10K)</small></i>, and jokers have invented <i><a href="http://www.cartoonstock.com/newscartoons/directory/v/votee.asp">votee</a></i>. 
Generally speaking, just about any economic or legal term with the <i>-er</i> suffix can sprout an <i>-ee</i> sibling and, sooner or later, usually does, for the greater benefit of terminological uniformity.<br /><br /><b>Fun III</b><br /><br />On a more (or rather less) forgiving note, some terminally deaf bonehead thought up the abominable words <i>forgivee <small>(2K)</small></i> and even <i><a href="http://everything2.com/index.pl?node=Lovee">lovee</a></i> — they say 1913 Webster had it! I can't imagine using these words for anyone close or important to me. Their natural habitat is probably restricted to badly written self-help books and similar trash.<br /><br /><b>Fun IV</b><br /><br />Finally, for some random fun try <i>postee</i>, <i>bloggee</i>, <i>googlee</i>, <i>trollee</i>, <i>quittee</i> and <i>decidee</i> (<a href="http://www.huffingtonpost.com/arianna-huffington/president-bush-decider-o_b_19406.html">hello George</a>), but don't try <i><a href="http://www.realitytvworld.com/news/apprentice-3--firee-danny-kastner-desperately-seeks-extend-fame-1002123.php">firee</a></i> or <i>dumpee <small>(5K)</small></i>! 
Remember, I warned you!<br /><br /><b>Conclusion</b><br /><br />Eeified transitive verbs surely add to the vibrancy of the English language, and may sometimes help with terminology; but please do remember that not all of them sound and feel equally good.<br /><br /><b>16:10 amenities</b> <small>(2007-11-05)</small><br /><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgv9AQFWdsTNay-f1HoN7g-mw9t4vrn8LbvSeV_4hYfe9RULOLlFB-KzfF64Rpq-Vq34tXQIs57n4Oc1CJaBvwKOECQN7Ollu0biNMLOIcEV3tFfVNMwe4W5_DndCsTzw9FZwoej952Des8/s1600-h/far_start.JPG"><img style="margin: 0pt 10px 10px 0pt; float: left; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgv9AQFWdsTNay-f1HoN7g-mw9t4vrn8LbvSeV_4hYfe9RULOLlFB-KzfF64Rpq-Vq34tXQIs57n4Oc1CJaBvwKOECQN7Ollu0biNMLOIcEV3tFfVNMwe4W5_DndCsTzw9FZwoej952Des8/s400/far_start.JPG" alt="" id="BLOGGER_PHOTO_ID_5129374985005897810" border="0" /></a><br /><a onblur="try {parent.deselectBloggerImageGracefully();} catch(e) {}" href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6EQf97PlWJvJ4FYye6WplfALyLYMyhRBs-YULs56S1wpi0-mbv0-TO5Dc7U5JHU1Dwskfy7iRcWu9OyQ8NK5frtUz0UPdBAgLKniS7tPKXvNzpf5QYSqtJTRPtohmlxnxCIt8NVRjNk9q/s1600-h/far_both.JPG"><img style="margin: 0pt 0pt 10px 10px; float: right; cursor: pointer;" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEh6EQf97PlWJvJ4FYye6WplfALyLYMyhRBs-YULs56S1wpi0-mbv0-TO5Dc7U5JHU1Dwskfy7iRcWu9OyQ8NK5frtUz0UPdBAgLKniS7tPKXvNzpf5QYSqtJTRPtohmlxnxCIt8NVRjNk9q/s400/far_both.JPG" alt="" id="BLOGGER_PHOTO_ID_5129374628523612226" border="0" /></a>Two big Far's fit nicely side-by-side on my 2007WFP. Now I have two Far's on my Start menu, a right Far and a left Far, and two keyboard shortcuts.
The only drawback of this arrangement is that panel operations between the two Far's don't work as visually expected. I did have to resource-hack two copies of Far.exe for full effect, though. Gaheh ^..^