Forensic RAID Recovery
RAIDs (Redundant Arrays of Independent Disks) are a good way to prevent data loss in case of hardware defects such as a broken hard disk, while at the same time improving I/O performance. However, the additional abstraction layer (i.e. the RAID layer) between the hard disks and the operating system makes it harder to reconstruct the file system data from the set of disks if the RAID controller fails, as the data is distributed among the disks. A similar situation occurs in the field of forensic computing (or IT forensics), where accessing data on previously seized and imaged hard disks is the basis of many investigations. The challenge here is to recover the RAID array from the individual disk images by verifying redundancy information and reconstructing failed or missing disks.
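To illustrate the reconstruction part, the following minimal sketch rebuilds a single missing member of a RAID 5 set from the surviving disk images via XOR parity. It is only a conceptual example, not part of rfrb; the image names and the 64 KiB stripe size are assumptions.

```python
from functools import reduce

STRIPE_SIZE = 64 * 1024  # assumed stripe size in bytes (hypothetical)

def xor_blocks(blocks):
    """XOR a list of equally sized byte blocks."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def rebuild_missing_disk(surviving_images, output_path, stripe_size=STRIPE_SIZE):
    """Recreate the single missing member of a RAID 5 set from the surviving images.

    In RAID 5 the parity block of every stripe is the XOR of the data blocks
    in that stripe, so any single missing block equals the XOR of all remaining
    blocks at the same offset.
    """
    handles = [open(p, "rb") for p in surviving_images]
    try:
        with open(output_path, "wb") as out:
            while True:
                blocks = [h.read(stripe_size) for h in handles]
                if not blocks[0]:  # end of the (equally sized) images
                    break
                out.write(xor_blocks(blocks))
    finally:
        for h in handles:
            h.close()

# Hypothetical usage: three surviving images of a four-disk RAID 5
# rebuild_missing_disk(["disk0.img", "disk1.img", "disk3.img"], "disk2_rebuilt.img")
```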
In the course of the lecture ‘Forensic Hacks’ held by Dr.-Ing. Andreas Dewald at Friedrich-Alexander University, Sabine Seufert and Christian Zoubek implemented a recovery tool for different RAID levels (RAID 0, RAID 1, RAID 5). The goal was to automatically estimate the parameters used by the RAID controller, such as the RAID level, the stripe size, and the corresponding stripe map.
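As an illustration of what such a stripe map describes, the sketch below interleaves the stripes of the member images of a simple RAID 0 set back into one logical image, given an assumed stripe size and disk order. It is a conceptual example only; all file names and parameters are hypothetical.

```python
STRIPE_SIZE = 64 * 1024  # assumed stripe size in bytes (hypothetical)

def reassemble_raid0(image_paths, output_path, stripe_size=STRIPE_SIZE):
    """Interleave the stripes of the member images back into one logical image.

    For plain RAID 0 striping, logical stripe i lives on disk (i mod n) at
    offset (i div n) * stripe_size, so reading the images round-robin in the
    estimated disk order recreates the logical volume.
    """
    handles = [open(p, "rb") for p in image_paths]  # images in estimated disk order
    try:
        with open(output_path, "wb") as out:
            done = False
            while not done:
                for h in handles:
                    stripe = h.read(stripe_size)
                    if not stripe:
                        done = True
                        break
                    out.write(stripe)
    finally:
        for h in handles:
            h.close()

# Hypothetical usage: two-disk RAID 0
# reassemble_raid0(["disk0.img", "disk1.img"], "raid0_logical.img")
```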
The tool is called “Raid faster – recover better” (rfrb v1.0.0) and uses several entropy-based heuristics to determine these parameters. Furthermore, we focused on performance, increasing read/write throughput to ensure that even large RAID images can be recovered in reasonable time.
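The following sketch shows the general idea behind a block-level entropy heuristic: computing the Shannon entropy of fixed-size blocks of a disk image, which can help to separate, for example, high-entropy blocks (parity, compressed, or encrypted data) from plain data blocks. It illustrates the concept only and is not the actual rfrb implementation; the block size and the threshold in the usage example are assumptions.

```python
import math
from collections import Counter

def shannon_entropy(block: bytes) -> float:
    """Shannon entropy of a byte block in bits per byte (0.0 .. 8.0)."""
    if not block:
        return 0.0
    counts = Counter(block)
    total = len(block)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def block_entropies(image_path: str, block_size: int = 64 * 1024):
    """Yield (offset, entropy) pairs for every fixed-size block of a disk image."""
    with open(image_path, "rb") as img:
        offset = 0
        while True:
            block = img.read(block_size)
            if not block:
                break
            yield offset, shannon_entropy(block)
            offset += block_size

# Hypothetical usage: flag blocks whose entropy suggests parity/compressed/encrypted data
# for off, ent in block_entropies("disk0.img"):
#     if ent > 7.5:
#         print(f"offset {off:#x}: entropy {ent:.2f}")
```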
Alongside the corresponding paper ‘Generic RAID Reassembly using Block-Level Entropy’, rfrb was published at the 2016 DFRWS EU conference (http://www.dfrws.org/2016eu/).
We put our presentation slides from the conference here: https://www.ernw.de/download/DFRWS-EU-2016-Forensic-RAID-Recovery-Slides…
The full paper is also publicly accessible here: http://dx.doi.org/10.1016/j.diin.2016.01.007