So, one of my employers ended up with a fried hard disk, for the second time in a row. The main reason is that the PC this drive lives in sits in a corner with little to no airflow.
In order to recover the drive, I am actually taking a different approach from my last recovery effort, mainly out of necessity. This disk is seriously damaged: lots of bad sectors, and its partitions aren’t readable by any NTFS driver, Microsoft’s or the open source one. That rules out simply using the wonderful R-Studio tool I used last time, since it won’t even see the drive properly within Windows and hangs all over the place.
Indeed, what I needed to do was drop down a layer of abstraction: away from filesystems, and into blocks and sectors. Unfortunately, in the Windows world that drop is difficult, so I had to use my Linux laptop to make the jump.
I found a wonderful tool to help me out called dd_rescue, which is basically dd with the ability to continue past read errors, specify a starting position in the input and output files, and run the copy in reverse. These features let you work around bad sectors, and even failing disk hardware, to get as much data as possible off the drive.
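For the curious, here is a rough sketch of the kind of invocations involved; the device node (/dev/sdb5) is an assumption based on the setup described below, and these are not my exact commands:

```
# Basic forward copy: unlike plain dd, dd_rescue keeps going past read errors.
dd_rescue -v /dev/sdb5 /mnt/smb/image/sdb5

# Resume later from a given byte offset in the input (-s) and output (-S) file,
# e.g. after the first 6GB have already been copied.
dd_rescue -v -s 6000000000 -S 6000000000 /dev/sdb5 /mnt/smb/image/sdb5

# Attack a stubborn bad patch from the other direction: -r runs the copy in
# reverse, from the end toward the start (it may need explicit -s/-S offsets
# depending on the version).
dd_rescue -v -r /dev/sdb5 /mnt/smb/image/sdb5
```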
Unfortunately, using this tool was hampered by my laptop’s relatively simple bus design. Apparently, if I stuck two devices on my USB bus (like the two hard drives I was using for this process), the bus would slow to a crawl and the copy would creep along at an unbearable 100kB/sec. I tried using Firewire and USB together, but got only marginal improvements. What befuddles me is that, in the end, the fastest combination I could come up with was reading from the Firewire enclosure on my laptop and writing, via Samba across the LAN, to the Firewire enclosure on my desktop. Very strange indeed. Now my performance is more like 6MB/sec, factoring in all the breaks dd_rescue takes when it encounters errors. I have 6GB of the more critical partition written, but it’ll probably take a couple of hours to have a big enough chunk that I can test R-Studio’s recovery of it.
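In case it helps anyone, the Linux side of that arrangement boils down to something like the following; the hostname, share name, and username are hypothetical, and a modern system would use the cifs filesystem rather than the old smbfs client:

```
# Mount the desktop's share so it appears as a local directory.
mount -t smbfs -o username=jsmith //desktop/recovery /mnt/smb

# The copy then writes to what looks like a local file, but every block
# actually crosses the LAN and lands on the desktop's Firewire drive.
dd_rescue -v /dev/sdb5 /mnt/smb/image/sdb5
```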
The only reason I’m even writing about this is that I find it hilarious how many layers of abstraction I am breaking through to do a relatively low-level operation. Think about it:
- My broken IDE drive is converted to Firewire by a Firewire-IDE bridge.
- My Firewire PCMCIA adapter lets my notebook accept that connection.
- The Linux kernel provides access to the Firewire bus through its ieee1394/OHCI drivers.
- The Linux kernel abstracts the Firewire disk as a SCSI disk via SBP-2 emulation (you can actually see this from the shell; see the sketch after this list).
- The SCSI disk is being read by dd_rescue and written to a file at the path /mnt/smb/image/sdb5.
- That path looks local, but it is actually a mount point; and that mount point looks like a physical disk, but it is actually backed by the kernel’s SMB filesystem client.
- dd_rescue’s writes to that image file are pushed through the kernel’s TCP/IP stack, fly through my switch, and are accepted by Windows XP’s network stack.
- Windows XP is writing that data to an NTFS drive, which is itself connected by a Firewire-IDE bridge (and therefore all the above steps’ equivalents for Windows apply).
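Just for fun, here is roughly how a few of those layers are visible from the Linux end; this is a sketch, not output captured from the actual machine:

```
lsmod | grep -E 'ieee1394|ohci1394|sbp2'   # Firewire stack plus the SBP-2 (SCSI-over-Firewire) driver
cat /proc/scsi/scsi                        # the bridged IDE drive shows up here as a SCSI disk
mount | grep /mnt/smb                      # the "local" image path is really an SMB mount
```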
With that many layers, I am surprised this copy is even working. I really should have just taken a machine apart and connected these drives directly over IDE, to save myself a few layers.