Yes, one can't read it anymore, but one can still reconstruct what was written.
Definitely NOT "anyone": maybe "one", with an MFM microscope, and I know no one who has one. Do you?
Now, I don't think what I am asking is much: just a single report, by a reputable source, that this has been done ONCE on a modern hard disk.
I tried asking in a forum of professionals dealing with data recovery and police work, and besides some more theory, no one could report ever having recovered, or ever having seen, a single file recovered from a zeroed-out, single-pass disk: http://www.forensicf...m...065&start=0
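To be concrete about what "zeroed out, single pass" means: one overwrite of every sector with zeros. A minimal Python sketch of the idea, applied to an ordinary file rather than a raw block device (the path is a placeholder; a real wipe writes to the device itself, e.g. with dd):

```python
import os

def zero_fill(path, chunk=1 << 20):
    """Overwrite a file in place with zeros, in a single pass."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        remaining = size
        while remaining > 0:
            n = min(chunk, remaining)
            f.write(b"\x00" * n)  # the single pass of zeros under discussion
            remaining -= n
        f.flush()
        os.fsync(f.fileno())  # make sure the zeros actually reach the device
```

Note that on a real disk the wipe must target the raw device, not a file; filesystem-level overwrites leave copies in journals, slack space, and remapped sectors.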
You may want to re-read the previously linked sources, where the author of the theory himself says:
1) that this was just a theory that he never directly tested;
2) that the theory would not apply, or would not apply in any useful manner, to modern hard disks.
Compare the original article (1996): http://www.usenix.or...mann/index.html
with the latest version: http://www.cs.auckla...secure_del.html
This is the relevant part:
Data overwritten once or twice may be recovered by subtracting what is expected to be read from a storage location from what is actually read. Data which is overwritten an arbitrarily large number of times can still be recovered provided that the new data isn't written to the same location as the original data (for magnetic media), or that the recovery attempt is carried out fairly soon after the new data was written (for RAM). For this reason it is effectively impossible to sanitise storage locations by simply overwriting them, no matter how many overwrite passes are made or what data patterns are written. However by using the relatively simple methods presented in this paper the task of an attacker can be made significantly more difficult, if not prohibitively expensive.
In the time since this paper was published, some people have treated the 35-pass overwrite technique described in it more as a kind of voodoo incantation to banish evil spirits than the result of a technical analysis of drive encoding techniques. As a result, they advocate applying the voodoo to PRML and EPRML drives even though it will have no more effect than a simple scrubbing with random data. In fact performing the full 35-pass overwrite is pointless for any drive since it targets a blend of scenarios involving all types of (normally-used) encoding technology, which covers everything back to 30+-year-old MFM methods (if you don't understand that statement, re-read the paper). If you're using a drive which uses encoding technology X, you only need to perform the passes specific to X, and you never need to perform all 35 passes. For any modern PRML/EPRML drive, a few passes of random scrubbing is the best you can do. As the paper says, "A good scrubbing with random data will do about as well as can be expected". This was true in 1996, and is still true now.
Looking at this from the other point of view, with the ever-increasing data density on disk platters and a corresponding reduction in feature size and use of exotic techniques to record data on the medium, it's unlikely that anything can be recovered from any recent drive except perhaps a single level via basic error-cancelling techniques. In particular the drives in use at the time that this paper was originally written have mostly fallen out of use, so the methods that applied specifically to the older, lower-density technology don't apply any more. Conversely, with modern high-density drives, even if you've got 10KB of sensitive data on a drive and can't erase it with 100% certainty, the chances of an adversary being able to find the erased traces of that 10KB in 80GB of other erased traces are close to zero.
(bolding/underlining by me)
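The "few passes of random scrubbing" the author recommends for modern PRML/EPRML drives amounts to something like the following sketch in Python (the path and pass count are placeholders, and again a real tool operates on the raw block device, not a file):

```python
import os

def scrub(path, passes=2, chunk=1 << 20):
    """Overwrite a file in place with random data, `passes` times."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                n = min(chunk, remaining)
                f.write(os.urandom(n))  # random data, per the quoted recommendation
                remaining -= n
            f.flush()
            os.fsync(f.fileno())  # flush each pass to the device before the next
```

The point of the quote is that for any modern drive this simple random scrub does as well as the full 35-pass sequence, which only made sense for old MFM/RLL encodings.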