How secure is merely discarding (TRIMing) all of a SSD's blocks? (utcc.utoronto.ca)
114 points by goranmoomin on March 8, 2023 | 114 comments


Some things that aren't mentioned in the article, but are pretty relevant.

Modern SSD controllers encrypt all data by default. Not to increase security, but to improve wear levelling. Encrypted data should be effectively random (that's what good encryption aims to produce), so encrypting the data ensures that patterns in the data written to the SSD don't end up creating uneven wear patterns in the flash chips.

As a consequence, reading the raw data off an SSD would just produce garbage. You would also need to extract the encryption key from the SSD controller. Given this encryption isn't meant to be secure, it's quite possible that extracting that key is quite easy. But I believe that secure erase on SSDs is achieved by simply rotating the encryption key used for wear levelling, so maybe it's reasonably well protected.
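
A toy sketch of why key rotation acts as an erase (Python, using the third-party "cryptography" package; the key handling here is purely illustrative, real controllers do all of this in hardware):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def xcrypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
        # AES-CTR is symmetric: the same operation encrypts and decrypts.
        return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(data)

    nonce = os.urandom(16)
    old_key, new_key = os.urandom(32), os.urandom(32)

    stored = xcrypt(old_key, nonce, b"secret user data" * 4)  # what the NAND holds
    print(xcrypt(old_key, nonce, stored))  # old key: the plaintext comes back
    print(xcrypt(new_key, nonce, stored))  # after key rotation: pure garbage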

I would expect an SSD to quite aggressively erase TRIMed blocks where possible. Erasing a block of flash memory is about 10x slower (2 ms) than writing to flash memory (200 microseconds), so there's quite a large performance hit once a drive runs out of empty blocks and needs to start erasing blocks in the write path.
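
Back-of-the-envelope with those numbers (illustrative only; real drives pipeline erases across many blocks in the background):

    write_us, erase_us = 200, 2000   # figures quoted above
    print(f"write to a pre-erased block: {write_us} us")
    print(f"erase in the write path:     {erase_us + write_us} us "
          f"({(erase_us + write_us) / write_us:.0f}x slower)")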

Overall I suspect that simply TRIMing most SSDs is secure enough for most applications. At least secure enough for anyone who even asks the question "is TRIMing secure enough". For everyone else, you either don't care at all, or you really care, in which case you're already using multiple layers of security such as full disk encryption, secure SSD erase, and shredding drives.


The problem with this is that erase-to-program delay is a major factor in bad write quality: you really don't want to erase a block and then wait a long time before programming it. "Long" is subjective and there are a lot of details here, but the general rule is that the longer you wait after an erase to write, the less accurate your write is (colloquially, your zero decays as it sits). So in practice, while you do erase ahead, you don't erase too far ahead of your writing, and you certainly don't erase at TRIM time in most cases.


Didn’t know that. Don’t suppose you’ve got any links that describe the physical process that results in the correlation between write quality and erase-to-program delay?


Do you want research or do you want an analogy? I can give you both. On the research front:

    "An Experimental Analysis of Erase-to-Program Interference in Multi-Level Cell NAND Flash Memories" by F. X. Zhang, et al., in IEEE Transactions on Electron Devices, vol. 63, no. 4, pp. 1621-1628, April 2016.

    "Erase-to-Program Disturbance in NAND Flash Memory: Characterization, Modeling, and Mitigation Techniques" by M. H. Kim, et al., in IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 25, no. 9, pp. 2381-2392, Sept. 2017.

    "A Study of Erase to Program Disturbance in 3D NAND Flash Memory" by T. Wang, et al., in IEEE Transactions on Electron Devices, vol. 64, no. 7, pp. 3153-3159, July 2017.

    "Characterization and Modeling of the Erase-to-Program Disturbance in Multi-Level Cell NAND Flash Memories" by R. Micheloni, et al., in IEEE Transactions on Electron Devices, vol. 56, no. 11, pp. 2384-2392, Nov. 2009.


Don't suppose you could still provide the analogy? It'll probably make it easier for me to understand the research, as I only have a basic understanding of the physics at play here.


How hard would it be to write custom firmware for an M.2 SSD?

I'm not even sure where to start


A good one? Extremely hard. Pretty sure 90% of an SSD's firmware is just secret-sauce magic to deal with the mess and noise the physical world creates when you try to cram so much entropy into such a tiny physical space.


What do you mean less accurate? Data is binary. It either is or it isn't.


Data is binary when you're writing software. There's absolutely nothing binary about data when you're in the messy world of electronics, where fundamentally everything is analog.

Reading data from multi-level flash chips, or from spinning rust, has more in common with an FM radio, or a mobile phone radio, than it does with anything you'll see in an IDE. It's all very complex signal processing, trying to extract small signals from lots of noise; in that world you only know what your data probably looks like. You then need checksumming to figure out whether it's correct, and if it isn't you need to try again, or declare the data lost.
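
A toy model of that read path (Python; the voltage levels, noise figure and CRC-as-error-check are stand-ins for what a real controller does with proper signal processing and ECC):

    import random, zlib

    def read_cell(stored_bit: int) -> int:
        voltage = stored_bit + random.gauss(0, 0.2)  # stored level plus analog noise
        return 1 if voltage > 0.5 else 0             # threshold back into a bit

    def read_page(bits, crc):
        for attempt in range(8):                     # retry while the checksum fails
            guess = [read_cell(b) for b in bits]
            if zlib.crc32(bytes(guess)) == crc:
                return guess, attempt
        raise IOError("uncorrectable read: data lost")

    data = [random.randint(0, 1) for _ in range(64)]
    _, retries = read_page(data, zlib.crc32(bytes(data)))
    print(f"read succeeded after {retries} retries")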


But the "encryption" need not be cryptographic strength for this, so it might be reversible even without the key. Consider an old cypher with a small key for example.


Certainly true, but again, anyone that really cares about data destruction is never going to trust TRIM to destroy their data. At a minimum they’re going to use a mechanism that’s designed to destroy data (Secure Erase), and more likely they’re going to use multiple methods to destroy that data.

If you're in a situation where you're thinking that TRIM might be good enough, then you clearly don't care that much about the data potentially being recovered by a sophisticated attacker. You just don't want it to be trivial to recover that data. In that case, even old cyphers and small keys are probably good enough.

But once again, if you really care about destroying your data, use something that's explicitly designed to destroy data!



Everyone is using AES now.


But where's the key?


> Modern SSD controllers encrypt all data by default.

Contrary story from 2018/9:

> The security researchers explain that they were able to modify the firmware of the drives in a required way, because they could use a debugging interface to bypass the password validation routine in SSD drives. It does require physical access to a (internal or external) SSD. But the researchers were able to decrypt hardware-encrypted data without a password. The researchers write that they will not release any details in the form of a proof of concept (PoC) for exploit.

* https://borncity.com/win/2018/11/06/ssd-vulnerability-breaks...

> Microsoft is aware of reports of vulnerabilities in the hardware encryption of certain self-encrypting drives (SEDs). Customers concerned about this issue should consider using the software only encryption provided by BitLocker Drive Encryption™. On Windows computers with self-encrypting drives, BitLocker Drive Encryption™ manages encryption and will use hardware encryption by default. Administrators who want to force software encryption on computers with self-encrypting drives can accomplish this by deploying a Group Policy to override the default behavior. Windows will consult Group Policy to enforce software encryption only at the time of enabling BitLocker.

* https://msrc.microsoft.com/update-guide/en-us/vulnerability/...

> Although Windows ships with its own full volume encryption tool called BitLocker, SSDs that claimed to offer their own hardware-based encryption were trusted by the tool and left alone.

> Now though, after a recent update to Windows 10, Microsoft will assume that connected SSDs don't actually encrypt anything.

* https://www.techradar.com/news/microsoft-will-now-encrypt-ne...


None of that actually seems to contradict the claim that SSDs encrypt everything by default. You're just pointing out various ways that the encryption keys are not kept secure. If you haven't configured your system to make use of the drive's encryption capabilities in a secure way, the drive can (and all evidence points to does) still pass the data through the encryption for whitening without attempting to use the encryption process to provide security.


Do you have any pointers on where I can read about this? It's kind of the first I'm hearing that there's encryption happening to aid wear levelling somehow, and it's not clear from a high-level description why that would help, since wear levelling is, I thought, a matter of write locations, whereas encryption would only give you protection against bad bit patterns in what you write (but bit patterns AFAIK don't impact wear on an SSD).


Not got links to details, but you can think about wear levelling at two levels.

The first, and most obvious, is wear levelling between blocks of flash memory. Here the SSD controller tracks how often it erases blocks of memory, which are typically hundreds of kilobytes to megabytes large, and tries to keep those counts nice and even. You're probably already aware of that type of wear levelling.

But wear levelling across blocks means that there's nothing tracking how often each cell within a block is being written and erased, because that depends on exactly what data is written into the block. So if you're writing data that has a repeating pattern whose period is a multiple of your block length, then specific cells within that block will be written more often than others. This impacts the wear of the individual cells, but probably also write quality, because having two cells next to each other with vastly different stored values almost certainly adds noise when reading again (nature hates a gradient).

You can't track wear at a per-cell level, because that would require an extreme amount of memory just to store counters, memory which could be better used to store data. But what you can do is transform the data you're writing into "random" data to remove any patterns and produce even wear. Additionally, the randomness will reduce the variance in stored values between adjacent cells, improving write quality.

So encrypting the data provides a bunch of very useful adjacent benefits for a very small cost. Encryption algorithms like AES are deliberately designed for very high performance on extremely low-power, memory-constrained devices, and typically have a pretty low transistor count for dedicated crypto silicon. So using strong AES encryption in an SSD is pretty much a no-brainer. You get improved reliability (because of better wear levelling), improved performance (because writes are more successful), and a feature you can charge extra money for (on-device full disk encryption).
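
A toy demonstration of that per-cell argument (Python; SHA-256 stands in for the controller's AES keystream, and "programmed" here simply means a cell driven to 0):

    import hashlib

    PAGE = 16  # bytes per (toy) page

    def keystream(key: bytes, page_no: int) -> bytes:
        # Keyed pseudorandom whitening stream, derived from the page address.
        return hashlib.sha256(key + page_no.to_bytes(8, "big")).digest()[:PAGE]

    def program_counts(pages):
        # Per bit position, count how often that cell gets programmed (bit == 0).
        counts = [0] * (PAGE * 8)
        for data in pages:
            for i in range(PAGE * 8):
                if not (data[i // 8] >> (7 - i % 8)) & 1:
                    counts[i] += 1
        return counts

    pattern = b"\x00\xff" * (PAGE // 2)  # pathological repeating host data
    raw = program_counts([pattern] * 1000)
    whitened = program_counts(
        [bytes(a ^ b for a, b in zip(pattern, keystream(b"drive-key", n)))
         for n in range(1000)])

    print("raw:      per-cell programs min/max =", min(raw), max(raw))            # 0 / 1000
    print("whitened: per-cell programs min/max =", min(whitened), max(whitened))  # both near 500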


I'm not sure I follow. Am I correct in understanding that you're saying that if a block has some pattern, then overwriting that block with the same pattern will result in no extra wear on that block, but neighboring blocks might be overwritten and therefore would see worse wear? Thus encryption smears it out, ensures that you have consistently worst-case wear, and that somehow improves reliability?

I'm assuming the reason this works is because the underlying block that gets written isn't the same one that has the old data and thus the probability of overwriting with the same data is low because the encryption key for a given write is somehow derived from block offset + length to make sure that a duplicate pattern for a given block will still encrypt differently?

If I understood that properly, this feels like a lot of complexity for unclear gain (I'm not seeing how it improves wear levelling vs at best downgrading the best/average case to the worst case), and I haven't found any papers online. Is this mechanism something you have first-hand experience building, or something you remember reading in some tech newspaper? Would love a primary source here.


> Am I correct in understanding that you're saying that if a block has some pattern, then overwriting that block with the same pattern will result in no extra wear on that block

> I'm assuming the reason this works is because the underlying block that gets written isn't the same one that has the old data and thus the probability of overwriting with the same data is low because the encryption key for a given write is somehow derived from block offset

No, this has nothing to do with writing the same data to the same blocks; it's about how you handle a continuous stream of data being written (and eventually overwriting older data) to the drive over very long periods of time. You should assume that the physical location of your data bears zero correlation to the address you write it to. The SSD controller will always redirect it to the least worn available block.

In an SSD it's reasonable to assume that the flash memory can be erased and written at a more granular level than you would want to track for wear-levelling purposes (because extra granularity costs more money). So writing repeated patterns of data that align with the wear-levelling block size will result in some parts of your flash memory being written and erased more often than others. But this unevenness is too granular for the wear-levelling algorithm to track and compensate for. Encryption removes the need to track more granular wear patterns, because it ensures that every single bit of your flash memory sees very similar write patterns.

Additionally, there's an entirely separate aspect, which is that NAND flash is a physical storage medium, and economics mean manufacturers want to squeeze every last ounce of storage out of their silicon. This means packing flash cells physically close to each other, and storing multiple values in each cell. The consequence is that activity in one cell can cause changes in adjacent cells; if those changes are large enough, the value of a cell will change, corrupting the stored data.

By encrypting the data you can eliminate the possibility of extreme edge cases occurring which have a high risk of data corruption. A better place to observe these types of issues is RAM, and Row Hammer attacks. The basic principles are the same, even if the precise physical interactions are a little different.



Did you link the right paper? There's nothing in there that talks about how encryption would help wear leveling that I could see.


This is why you always want to use full disk encryption (FDE). Then it doesn't really matter whether some blocks are recoverable after TRIM. It also doesn't really matter whether the Secure Delete function works. It also doesn't matter whether the very expensive security-buzzword-compliant secure deletion snakeoil software really works.

Also, TRIM is only possible with drives that are still in working order. Very often you can't TRIM or secure-erase a drive that is defective and returned under RMA. In that case you either need a very expensive no-disk-return support contract, or you just use FDE.


TRIM with FDE isn't perfect though, as it leaks information about which parts of the drive have been written, etc.

And if you turn off TRIM you get a massive performance decrease.


Why would someone turn off TRIM?


To mask the extent to which a drive has been filled, or how many blocks have been deleted since the last TRIM. I have yet to discover any use for this minuscule "leak", even when considering nation-state adversaries. It's much easier to find other channels to leak data through.

The disadvantages are massive, such as significant slowdown when the drive gets filled. Yet the creators of dm-crypt are paranoid about enabling it by default:

https://wiki.archlinux.org/title/Dm-crypt/Specialties#Discar...
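
For reference, the opt-in looks something like this (flag and option names as in the cryptsetup/crypttab documentation; device names are placeholders):

    # when opening the mapping manually:
    cryptsetup open --allow-discards /dev/sdXn cryptroot

    # or persistently, via /etc/crypttab:
    cryptroot  UUID=...  none  luks,discard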

I couldn't understand ref 3: https://web.archive.org/web/20160709174950/http://www.saout....

Ref 4: "you have an information leak where filesystem-discarded blocks (by TRIM) can be identified by an attacker with low effort"

https://web.archive.org/web/20160709183108/http://www.saout....


> Flash storage, unlike traditional hard drives, can really be completely and irrecoverably erased, with no lingering magnetic ghosts that a sufficiently determined person could in theory reconstruct.

I think people vastly overestimate the possibility to recover any data from a wiped HDD.

See for example this article by SANS [1]. To recover a single bit, you have a better chance of flipping a coin to see if the bit was a zero or a one than you have when actually trying to reconstruct it with lab equipment.

[1] https://web.archive.org/web/20140621130410/https://digital-f...


I think you are right in that it is quite impossible to read data that has been overwritten from a sufficiently modern HDD.

But all modern HDDs retire sectors that exhibit a higher correctable error rate and replace them with reserve sectors. Those retired sectors then still contain readable data (maybe with some bit errors) if you know the appropriate factory-mode commands. Overwriting the entire disk also won't overwrite retired sectors, except if the disk has an explicit function for it and you trust that function to work properly.

In short, you always need full disk encryption.


Or a low level format.


> See for example this article by SANS [1].

According to NIST 800-88 Rev. 1 a single overwrite is fine:

* https://en.wikipedia.org/wiki/Data_remanence#Feasibility_of_...


This can't be true. Any method must be at least as good as a coin toss. If it's worse you can just bit flip the result and end up better than the coin.


Bits are correlated: we care about the order of the bits, not just their population count.

Furthermore, <50% is only an antipredictor for a single coin flip. If you have two bits - two coin flips where you care about the order - then there's four values, and you have to be <25% to be an antipredictor. Even then, an antipredictor only removes one possibility[0], so flipping the bits will only slightly improve your terrible chances of recovering the data.

[0] https://en.wikipedia.org/wiki/Monty_Hall_problem


Unbelievably, the paper actually makes that claim:

> That is, if you toss a coin, you have a 50% chance of correctly choosing the value. In many instances, using a MFM to determine the prior value written to the hard drive was less successful than a simple coin toss.

More believably, the paper was written by Dr Craig Wright.


The paper does not make that claim. According to the paper the probability of recovering the correct value of a single bit is 96% for a pristine drive and 56% for a used drive, both better than a coin toss. Of course the probability gets worse if one tries to recover more bits (but still better than n coin tosses for n bits).
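
Compounding that per-bit rate over any real amount of data makes the point starkly (quick arithmetic using the paper's 56% figure, and assuming independent bits):

    import math

    p, bits = 0.56, 512 * 8  # per-bit success rate; one 512-byte sector
    print(f"P(whole sector recovered) = 10^{bits * math.log10(p):.0f}")
    # ~10^-1031: effectively impossible, even at "better than a coin toss" per bit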


Right, if you flip a hundred coins, about half of them land heads and the other half tails. The correct coins we'll call oracles, and the incorrect ones liars.

Now you perform an HDD recovery that is successful 70% of the time. That is still worse than the performance of the oracle coins, which were correct, and if you invert the performance of the liars, they also beat the performance of the recovery.

If you have two coins, one which is always correct and one which is always incorrect, you can flip one of them to decide whether the other one is correct (and pick the opposite).

Checkmate, statisticians!


The article isn't super clear, but I think what it's saying is that the data recovery method they were testing (when looking at recovering a single bit) was as good as 92%, and as bad as 32%, depending on how the data was overwritten and other factors like the specific drive. Thus you likely can't just flip the result to get better odds because in a real scenario you wouldn't know which category you fall into (whether it's really good, or really bad), and overall if you don't know that information then the odds are close to 50%.

Edit: And of course even a high of 92% is abysmal for a single bit, since the combined odds of multiple bits quickly goes to zero for any sizable amount of data.


I don't think that follows? Just because a better method exists (the inverse) doesn't make the original perform better.

Imagine the following method: take whatever the bit was, and set it to the opposite of that. 0% < 50%. Of course you can invert and get 100%, but that's a different method. It doesn't make the original any better.


Also it's not obvious that it works the other way.

The only requirement of the flash reset operation is that it consistently causes bits to read as 1 and that the normal write operation allows them to read as 0 again; it doesn't actually follow from that that an erased-but-not-reprogrammed block will have cell charge levels that are indistinguishable from each other based on the pre-erasure charge level.


All those articles are very theoretical. In reality, one pass of dd from /dev/zero makes things unrecoverable. For recent SSDs, since TRIM is enabled by default, you can't recover anything. I deleted a document by mistake on an SSD and it was impossible to get back; the moment I pressed delete on Windows, the TRIM command was sent and everything was erased on the SSD.

Pro recovery companies that are not shady know that SSD recovery after TRIM is impossible.


So ... you didn't read the article then?


The article is light on details and assumes a lot of things. He doesn't prove anything; show us that after a successful TRIM you're able to recover anything.

From my experience it's impossible.


Or you encrypt the entire drive. Then the secure erasure is a matter of throwing away the keys, and you don't have to bother with thinking through your threat model.


Throwing away the keys is surprisingly hard to do.

Say you have one root key and you derive per-file or per-block keys using various metadata (inode numbers, generation numbers, block addresses, whatever). You can't throw away derived keys though!

Or say you generate random keys per-file and you store those somewhere (it's annoying to have to store them). Now to throw those keys away you need to re-write the blocks containing them. If there's any behind-the-scenes copy-on-write going on then re-writing a block doesn't necessarily lose its old contents.

The only thing that's easy to throw away -maybe- is the top-level / root keys. But that means deleting the entire filesystem.


> The only thing that's easy to throw away -maybe- is the top-level / root keys. But that means deleting the entire filesystem.

Well, yeah, that's why I said you encrypt the entire drive. If you do things that make it hard to throw your keys away, then it's gonna be hard to throw your keys away.


> You can't throw away derived keys though!

You can, AES-XTS does just this. Per-block keys can be derived on the fly from the root key and the block address. When the root key is gone, everything is gone.
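
A generic sketch of that on-the-fly derivation (Python; HMAC-SHA256 as the derivation function is an illustrative choice, and note that real XTS actually derives a per-sector tweak rather than a fresh key, though the crypto-erase property is the same):

    import hashlib, hmac

    def block_key(root_key: bytes, block_no: int) -> bytes:
        # Derive the per-block key from the root key and the block address.
        return hmac.new(root_key, block_no.to_bytes(8, "big"), hashlib.sha256).digest()

    root = bytes(32)           # in reality: random, held by the controller or a TPM
    k42 = block_key(root, 42)  # recomputed on every access, never stored on disk
    # Destroy `root` and every per-block key becomes underivable at once;
    # there is nothing per-block that needs wiping.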


Right, but clearly I meant that you can't arrange to delete individual files and lose their derived keys. The reason is that the bits you need to derive the keys may still be somewhere on disk, and one could, having access to the root keys, find them and "undelete" and re-derive the keys and recover the file contents.

Having to wipe an entire filesystem to securely delete a file is not useful.


Two issues with that - you have to make sure you dispose of the key(s), and this doesn't let you securely erase just part of the drive.


3. Technically someone could uncover a flaw in whatever encryption you used.


Once you have an attacker with that kind of technical skill, you are in a totally different world.


Not necessarily -- if the drive was found and decrypted after the exploit was leaked and made public (perhaps YEARS after it being a 0-day), it wouldn't necessarily be a skilled attack.


I'm surprised people actually think TRIMming is in any way a "secure delete" operation.

It's basically the "quick format" of SSDs: "hey, all those blocks here, I don't care about them anymore, use them as if they were free..."

Without OS-side encryption, Secure Erase is the only reliable option (SSD encrypts everything with an internal key which then gets deleted by the secure erase command, leaving only random data behind)


> It's basically the "quick format" of SSDs: "hey, all those blocks here, I don't care about them anymore, use them as if they were free..."

Due to implementation details, SSDs can only write to a block once it has been erased. Thus trimming a block and waiting long enough would be sufficient to erase the data, if the drive actually enforced that the block will be reused.

So, it's not good enough, but this is not obvious at all.


But there is a logical-to-physical mapping table, so that data might be spread across multiple physical blocks.


As others have mentioned, use encryption. But that has gaps: someone can hold onto the drive and wait for "impossible to break" to become "somewhat expensive to break", or even "broken by mistake through a bad encryption implementation".

Put the encrypted drive into a hydraulic press and flatten it into a thin sheet, then run that sheet through a chop saw, or just use tin-snips if your hydraulic press was powerful enough. A hydraulic press and a chop saw are a few hundred bucks whereas drive shredders can range from $30k to $200k.


Delete all the files then "dd" from the random device until the drive is full? Must we always jump straight to the hydraulic press?


As mentioned in the article, that's not reliable. Drives have lots of overprovisioned blocks and there's no way of consistently forcing them to be used; and drives might mark blocks as too questionable/worn to reuse and never reset them.

Also a standard failure mode of SSDs is refusing to accept writes, so you'll have to handle that case.


Encryption plus a single rewrite is likely far more than is ever needed.

Even if there is a massive 10% block reserve, you're looking at a random 10% of an entire encrypted drive.

Figuring out how to decrypt those blocks is a very hard challenge; then figuring out the SSD block mapping is going to be very challenging; and then you have only a chance of maybe getting something valuable from that 10%.

Multiple very difficult challenges for only a chance of maybe recovering something useful.

Related: I just had a hard drive head crash. I was able to use dd_rescue to read out 40% of the drive before it fully gave out. It's amazing, but NTFS, at least, stores data everywhere. I recovered a few small plain-text files, but not a single image was recoverable. It was pretty shocking to me.


Must we always jump straight to the hydraulic press?

Not always. I suppose it depends on your information security case. If your goal is to secure your kid's laptop from the average thief that is heading to the pawn shop, maybe a "secure erase" of an encrypted drive may be sufficient.

If on the other hand one is protecting intellectual property, customer data and so on, then perhaps that average thief was hired by someone with significant resources. I would wager that a percentage of the people reading this comment are using a work laptop and have some IP on their drive that in the wrong hands could cost billions. Even some of the emails I've seen could do significant damage.


> the random device

/dev/zero would be sufficient.

Also easier to confirm things worked (if desired) since you know what to expect on read-back.
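
For the record, a typical invocation plus that read-back check might look like this (destructive; /dev/sdX is a placeholder for the target device):

    # overwrite the whole device with zeros
    dd if=/dev/zero of=/dev/sdX bs=1M status=progress

    # verify: compare the device against an endless stream of zeros;
    # "cmp: EOF on /dev/sdX" with no earlier difference means it read back all-zero
    cmp /dev/zero /dev/sdX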


You're assuming the drive doesn't do something smart when it detects it's being filled with zeroes. It could at least in theory decide to do the same thing it does when you TRIM a sector.


I suspect it won't be long (if it isn't already the case) before de facto run-length encoding is built into SSD firmware.

Write 5 gig of zeros? OK! You just got more drive reserve capacity!


Hydraulic press is too obscure, .45 ACP works just as well.

But it depends on your risk profile, for most people a wipe-and-reuse is going to be fine, especially if the "reuser" is already a trusted entity.


A firearm is a silly tool for this. A hammer (optionally combined with good nails) can do substantially the same thing to all relevant materials (aluminum, glass, ceramic, silicon, plastic), but cheaper and more safely.


I think a springloaded center punch to each of the chips would do a pretty good job of it.

https://www.harborfreight.com/spring-loaded-center-punch-621...


Silly? Maybe. Fun? You bet!


I suppose if one could obliterate most of the chips with 3 or 4 well-placed shots, then your solution is certainly more cost-effective than mine, should one not have a future use for a press. This sounds like a video idea for DemolitionRanch.


I guess a used microwave would be the easiest and cheapest option for SSDs.


A microwave may not even damage the chips' internals; you'd have to actually test and verify it.


That's a nice solution in the US I guess, not so much for us in Europe :)


I think an angle grinder would be more accessible than a press.


Open them, heat-treat the disks in a bonfire, use the magnets as fridge magnets (those are quite strong; they can fix entire books to the fridge instead of just notes).


Last I checked, SSDs don't have those (see thread title).


This is a much more expensive and less effective solution than a coffee tin and some gasoline.


> ...hydraulic press was powerful enough.

Can we repurpose a die press to this task and if so, would a 100 ton AvE die press [1] be powerful enough?

[1] https://www.youtube.com/watch?v=d_LSPX8cNaE


I suppose that provided we are not punching holes but instead crushing the SSD chips to dust, that would work. Punching holes would leave some chips intact.


Also see:

https://wiki.archlinux.org/title/Solid_state_drive/Memory_ce...

https://www.usenix.org/events/fast11/tech/full_papers/Wei.pd...

Edit: see section 3.2.2 in the PDF (and the corresponding table 2):

> In most cases, overwriting the entire disk twice was sufficient to sanitize the disk, regardless of the previous state of the drive. There were three exceptions: about 1% (1 GB) of the data remained on Drive A after twenty passes

This is about normal overwriting, however. The fastest random-overwriting program on Linux I know of is `shred`.


hdparm secure-erase uses the ATA Security feature set, which ensures that unused bad blocks the drive normally ignores are erased too. TRIMming only clears blocks normally accessible for reading/writing; it's not as low-level. But as someone has mentioned, you can just enable encryption and forget the keys. Almost all SSDs are encrypting from new; user and master keys just have to be set.


The absurdity here is the claim that you can't do it because your BIOS won't let you. hdparm can do this! You can literally just boot a live USB!

https://wiki.archlinux.org/title/Solid_state_drive/Memory_ce...

And anyone who thought TRIM would be a reliable way to get rid of data to begin with didn't have an understanding of TRIM!
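
Per that wiki page, the whole dance is roughly the following (placeholders throughout; make sure the drive reports "not frozen" and is not the disk you booted from):

    hdparm -I /dev/sdX                                     # check the security state / "not frozen"
    hdparm --user-master u --security-set-pass p /dev/sdX  # set a temporary user password
    hdparm --user-master u --security-erase p /dev/sdX     # issue the ATA SECURITY ERASE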


https://en.wikipedia.org/wiki/Thermite

Securely erases all known forms of electronic data storage. Cheap, reliable, fun for the whole family!


There was (is?) a type of hard disk where the platters are made out of glass. These are easy to dispose of securely... drop onto a concrete floor from about six feet up. Shake, feel finely pulverized glass rattling around inside. Done.


How do you magnetize glass?


With a thin coating of "stuff" that peels off like metallic foil. Iron oxide spiced with rare earths I think it usually is. The pics in this article illustrate it: https://www.extremetech.com/computing/326292-why-lying-about...


That's it. You can't beat maximum entropy :D


Super interested in this and would love to hear about some techniques. Used to work at a HealthTech co. We had an “appliance” that we’d send to doctor offices to integrate with other diagnostic machines on the network.

Sometimes we would send out new ones to replace the old. When we got the old ones back, it was always unclear how to purge and recommission SSD/NVME drives.

My best attempt was using GNU shred, but it wasn’t recommended for flash-based storage back then.


The only proper way of disposal is physical destruction, preferably on site and under observation of in-house staff. There are contractors you can hire for this: https://shredsupply.com/hard-drive-shredding-trucks/ (no relation, just an arbitrary Google hit).

I'm also unsure as to why you are getting drives back at all; none of your customers should let any kind of storage device leave their site intact. At least that is the standard over here in Europe for healthcare and other industries dealing with sensitive data.


Shredding/sledgehammer is likely the best option for secure disposal of mechanical drives, but SSDs are a different animal and are likely reusable given some of the suggestions in this thread.


Not really. HDD recoverability is a myth; anything overwritten isn't coming back with modern HDDs. Bits can't "bleed out" anymore, density is far too high for that.

The thing you need to guard against is relocated/reserve sectors, in HDDs same as in SSDs. The proper way to do this is full disk encryption, and if that's not possible, physical destruction.


Most hard disks support hardware level encryption and full disk erase at this point.

There’s very little need to sledgehammer anything given realistic attacker models.


Both HDD/SSD hardware encryption and secure erase have been proven untrustworthy numerous times.


Oh, we did stuff like this for disposal. The tricky part is drives we desired to recommission.



Buy SSDs with secure erase (there are many of them!); they encrypt data internally. Secure erase then throws away the key, and all the data is now irretrievable.

Job done…


If you trust that the secure erase part of the firmware functions properly. There was work in the last 5 years to let the OS (at least Linux) do most of the SSD controller's functionality, which I think would be helpful if it caught on.


I take your standard, solder it to a motherboard, and disable it with a TPM-protected BIOS.


In addition to the sibling comment, I'm surprised that there was no mention of FDE.


We did do some encryption with LUKS, and I'd try to write over boot records, keys, and headers, but I was pessimistic that that was enough. Not an encryption expert myself. I always felt that any given encryption tech (be it hardware or software) has the possibility of a vulnerability being found later, or backdoors.

So it made sense to me that a physical erasure prior to recommission would be good. There’s also regulatory/compliance checkboxes (be them effective or not).


If you care about security then you'd have full disk encrypted the thing anyway. So really, it's not a problem.


If your BIOS is not allowing secure ATA functionality (the drive shows as "frozen" in hdparm), try doing a pm-suspend and then waking back up. For me this unfroze the drive, although I have no idea why. I was then able to mess with ATA security options, run secure erase, etc.


IIRC it is often frozen by the BIOS on boot.

I think the pm-suspend/wake 'reinits' (for lack of a better word) the drive and doesn't set the magic frozen bit.

I'd love to know from someone who has the actual tech details vs. my half-remembering an explanation from a wiki.


Full disk encryption that is unlocked using a U2F security key is now here (the latest LUKS/systemd allows you to do that).
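
For anyone curious, the enrollment goes something like this on a recent systemd (assuming an existing LUKS2 volume; device names are placeholders):

    # enroll a FIDO2 token (e.g. a YubiKey) as an unlock method
    systemd-cryptenroll --fido2-device=auto /dev/sdXn

    # then let /etc/crypttab try the token at unlock time
    cryptvol  /dev/sdXn  -  fido2-device=auto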

And YubiKeys don't leak secrets.

If your system is compromised to the point that someone gets access to the full disk decryption key protected by your Yubikey, you've got bigger problems than securely erasing the disk.

Need to recommission a disk? dd over the disk, because why not. Some sectors didn't get dd'ed? Well, the bad guys need your Yubikey too.

So, really, what do we think about Yubikeys?

Are people now going to steal your Yubikey in addition to finding the sectors that weren't overwritten by dd?


Is the Yubikey protected by some kind of PIN or password or something?

And is there something that prevents someone from brute-forcing that PIN/password if they have access to the Yubikey?

I've not used a Yubikey in this way before, sounds really neat.


Unrelated to trim, but for some reason when I used secure erase on an SSD with a ZFS pool, the pool was still able to be found by ZFS.

That shouldn't be the case, right?


Secure erase depends upon the drive firmware correctly implementing the function. The only study I've seen that actually tested this found that about half of the drives did not do it properly! But that was 10 years ago now.

I once used secure erase on a nvme SSD and found some of the partition table was still present when going to reformat, even though the operation indicated success. I eventually tracked it down to the drive being mistakenly mounted at the time of the secure erase. So there is your other failure mode, making sure you and your software utilities also perform the secure erase correctly.


I think ZFS identifies drives by their serial number. So you can erase a drive in a pool, and ZFS on that machine will still recognise the drive based on its serial number, and know its pool membership.

ZFS should then throw all kinds of errors once it notices that all the superblocks and other FS metadata are missing. But a scrub should cause ZFS to simply fix that via normal resilvering.


No, ZFS identifies by finding superblocks. This drive was not securely erased if any superblocks could still be found.


Not sure how secure erase works, but I've run into this a few times after "erasing". I think it has something to do with boot records or partition tables. So there's a piece of some drives (usually at the front) that contains this data. You can overwrite it properly with the appropriate tools. I always just used `dd` on the raw drive device in Linux to blow it up.


The keyword / feature you're looking for is 'sanitization'. https://ieeexplore.ieee.org/document/10008943


Edit: Below doesn't apply to T2 enabled Macs.

If you erase the volume, regardless of whether you had FileVault enabled or not, the key is deleted [0].

    If FileVault isn’t enabled on a Mac with the T2 chip during the initial Setup Assistant process, the volume is still encrypted, but the volume key is protected only by the hardware UID in the Secure Enclave.

and

    When deleting a volume, its volume key is securely deleted by Secure Enclave

[0] https://www.apple.com/mideast/mac/docs/Apple_T2_Security_Chi...

----------------------------------------------- Incorrect ramblings below: -----------------------------------------------

Apple recommends setting up FileVault on their SSDs. It is literally the only way to ensure that the data on the SSD is not recoverable.

And you should do this before you put anything ON the Mac.

If you add a file and only later encrypt the drive, you have no guarantee that that file still isn't somewhere readable one way or another.

This is important when you think about buying a Mac and then choose to return it 3 days later.

Apple states [0]:

    Note: With a solid-state drive (SSD), secure erase options are not available in Disk Utility. For more security, consider turning on FileVault encryption when you start using your SSD drive.

On their dedicated encryption page [1] they do state that devices with T2 are encrypted by default:

    Note: If you have an iMac Pro or another Mac with an Apple T2 Security Chip, the data on your drive is already encrypted automatically. However, turning on FileVault provides further protection by requiring your login password to decrypt your data.
However, I can't find any reference on how to reset that built-in key.

[0] https://support.apple.com/guide/disk-utility/erase-and-refor...

[1] https://support.apple.com/guide/mac-help/encrypt-mac-data-wi...


Most relevant SSDs are always encrypted.

I don't think it's necessary to delete anything.

But if you want, write over the whole disk.


Take the drive out and smash it to bits.


If security is your biggest concern, and not reuse... then using industrial shredding machinery is your best bet.

If you want reuse... then a handful of fill/erase cycles with random data is probably your best bet, even then, there will still likely be some overprovisioned blocks with prior data.

Encrypting and throwing away the key doesn't do any more than filling said drive with random data.


Why is this so hard? Why can't we just specify that a particular file needs to be actually deletable, and that we don't want the system (at any level) to leak copies of any of the data in that file? There is no technical reason this is not possible; it is just a convention that has caused problems since the dawn of computing. There is no point in making a file system that supports this, because the filesystem would not be able to specify deletability to the hardware. There is no point in making hardware that supports this, because no existing file system would have anything to tell the hardware. This situation exists simply because of inertia.

Sure, there would be performance implications, but who would care? You might just want to store a key for some full-performance encrypted data that you would like to be able to actually remove access to. Sometimes performance is not the most important thing to optimize for.


> There is no technical reason this is not possible, it is just a convention that has caused problems since the dawn of computing.

Technical reasons are all over the place, because the only "convention" you could possibly be referring to is the practice of having layers of abstraction for compatibility purposes. If you want to re-design your entire storage stack to accommodate every new technology, then it's trivial to ensure deletability in each from-scratch re-design. But as soon as you want to use flash memory to build a drop-in replacement for hard drives, it gets quite difficult to retain this property without making huge sacrifices to more desirable properties.


It would be possible to add this particular bit of information without changing anything that affects compatibility. The only difference would be that some files would actually be deleted when a delete operation was applied. It is a separate information channel that can be added on top.

This has nothing in particular to do with flash vs spinning rust. A copy-on-write file system, for example, generates tons of information-leaking copies all the time, but could trivially be changed so that some files did not leak in that way.

This is truly one of those issues that exists simply because no one can be bothered to fix it.



