I have just learned that one should "Never defragment your SSD". But I have no idea if that is true.

I believe Windows 10 had automatically scheduled a defragmentation run on my SSD, but I cancelled it. Could the defragmentation runs that already completed have caused any problems?

The SSD is not yet partitioned; I cannot see the drive in the My Computer folder, only in Device Manager. What are the correct steps I should take to install Windows on an SSD (it's my first SSD)?

  • 11
    The advice to "never defragment your SSD" is obsolete and comes from a time when SSDs were slower and had much more limited write endurance than modern SSDs. Modern SSDs tend to be IOPS limited, and defragmented file systems need fewer I/Os. – David Schwartz Nov 28 '16 at 18:34
  • 5
    To @DavidSchwartz's point, the amount of writes/deletes needed to spontaneously kill a modern SSD is ridiculously high. Unless you are processing an extraordinary amount of information, your SSD will most likely last longer than many of your other components, even if you are performing conventional defrags. – DanK Nov 28 '16 at 20:23
  • 29
    Why would you want to defragment an SSD? The point of defragmentation is to make files be contiguous on the disk, so the read heads don't have to seek all over the place (which takes time, as it involves physical movement) to read the file. I'm no expert, but AFAIK SSDs are solid state and random access. All accesses take the same time, so it shouldn't matter how file blocks are distributed. – jamesqf Nov 30 '16 at 4:46
  • 3
    Possible duplicate of Why can't you defragment Solid State Drives? – Raystafarian Nov 30 '16 at 12:06
  • 6
    As Ajedi32 notes, the recommendations on the two questions are completely opposite. That should affect the direction of the duplicate. If the recommendations on the other question are now considered wrong, the only way readers landing there will find the answers on this question is if that one is made a duplicate of this one. – fixer1234 Nov 30 '16 at 19:57

Let Windows do its job. Once a month it performs a real, full defrag, even on an SSD, in order to optimize the file system's internal metadata.

The short answer is: yes, Windows does sometimes defragment SSDs; yes, it's important to defragment SSDs intelligently and appropriately; and yes, Windows is smart about how it treats your SSD.

Here is a reply from Microsoft:

Storage Optimizer will defrag an SSD once a month if volume snapshots are enabled. This is by design and necessary due to slow volsnap copy on write performance on fragmented SSD volumes. It’s also somewhat of a misconception that fragmentation is not a problem on SSDs. If an SSD gets too fragmented you can hit maximum file fragmentation (when the metadata can’t represent any more file fragments) which will result in errors when you try to write/extend a file. Furthermore, more file fragments means more metadata to process while reading/writing a file, which can lead to slower performance.

As far as Retrim is concerned, this command should run on the schedule specified in the dfrgui UI. Retrim is necessary because of the way TRIM is processed in the file systems. Due to the varying performance of hardware responding to TRIM, TRIM is processed asynchronously by the file system. When a file is deleted or space is otherwise freed, the file system queues the trim request to be processed. To limit the peek resource usage this queue may only grow to a maximum number of trim requests. If the queue is of max size, incoming TRIM requests may be dropped. This is okay because we will periodically come through and do a Retrim with Storage Optimizer. The Retrim is done at a granularity that should avoid hitting the maximum TRIM request queue size where TRIMs are dropped.
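To make the queueing behaviour concrete, here is a toy model in Python. It is purely illustrative (the queue cap and block numbers are invented, and the real file-system code is far more involved): deletions queue TRIM requests asynchronously, the bounded queue drops the overflow, and a periodic retrim pass re-covers all free space.

    from collections import deque

    QUEUE_MAX = 4          # invented cap on pending TRIM requests
    free_blocks = set()    # blocks the file system knows are free
    trimmed = set()        # blocks the device has been told are free
    queue = deque()

    def delete(block):
        """Free a block and try to queue an asynchronous TRIM for it."""
        free_blocks.add(block)
        if len(queue) < QUEUE_MAX:
            queue.append(block)
        # else: the request is silently dropped, as described above

    def process_queue():
        """The device consumes queued TRIMs when it gets around to it."""
        while queue:
            trimmed.add(queue.popleft())

    def retrim():
        """Storage Optimizer's periodic pass: re-TRIM all free space."""
        trimmed.update(free_blocks)

    for b in range(10):        # delete 10 blocks in a burst
        delete(b)
    process_queue()
    print(sorted(trimmed))     # [0, 1, 2, 3] -- six TRIMs were dropped
    retrim()
    print(trimmed == free_blocks)   # True -- the retrim pass catches up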

So install Windows on the SSD and forget it. Windows will do everything on its own.

  • 14
    Sometimes we rant about Microsoft being so stupid and all. But sometimes I'm just amazed at how well thought through some Windows elements are. – BlueWizard Dec 2 '16 at 8:54
  • 1
    Yet AFAIK Linux/EXT doesn't need to do this at all. – spraff Dec 2 '16 at 10:38
  • 5
    Fragmentation is kept to a minimum in EXT but can still occur in specific use cases, at least in Ext3: en.wikipedia.org/wiki/Ext3#Disadvantages – Bret Dec 2 '16 at 15:39
  • 5
    EXT variants do fragment and lose performance. Anyone saying otherwise is peddling Linux-superiority lies. Source: I implemented a driver for it and it does. – imallett Dec 3 '16 at 21:37
  • @Jonas Dralle: Though I would like to know just what a "peek resource usage" is supposed to be. Sounds vaguely pornographic :-) – jamesqf Dec 4 '16 at 5:49

I have just learned that one should "Never defragment your SSD". But I have no idea if that is true.

A little knowledge is dangerous. Never defragmenting your SSD would be a good idea if your operating system were utterly clueless about what an SSD is (say, Windows XP), or if SSDs were fragile snowflakes likely to wear out and melt in the harsh heat of normal usage. I have a detailed answer elsewhere on why that isn't true: it is quite hard to 'wear out' a drive in normal usage. It might be handy to unlearn this advice.

Consider that if software were killing SSDs, or even just doing heavy unnecessary writes (as Spotify once did), people would flip out. And quite often, the people who write OSes are smart.

I'm referencing this blog post from Scott Hanselman heavily for the rest of this answer. Magicandre's answer references it too, but I took away somewhat different lessons from it. It's worth a read for the details, and I'm taking a few liberties with how I represent the information. I'd start with this:

I think the major misconception is that most people have a very outdated mental model of disk/file layout and of how SSDs work.

SSDs do fragment, and those fragments need to be kept track of. At a fundamental level, defragmenting helps your file system run efficiently, even if the reasons differ from those for a spinning-rust drive. The post I referenced points out that volume snapshots would be slow without defragmentation.

SSDs also have the concept of TRIM. While TRIM (and the periodic retrim) is a separate concept from fragmentation, it is still handled by the Windows Storage Optimizer subsystem, and from the user's perspective its schedule is managed through the same UI.

TRIM is good. TRIM saves on writes: it's a mechanism for marking blocks as no longer in use without erasing them immediately, leaving the drive to erase them as needed.
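If you want to check that TRIM is actually enabled on your own machine, Windows exposes the setting through the fsutil tool. A minimal sketch in Python (Windows-only; the "= 0" string check is my own heuristic, and newer builds print separate NTFS and ReFS lines):

    import subprocess

    # DisableDeleteNotify = 0 means delete notifications (TRIM) are enabled.
    out = subprocess.run(
        ["fsutil", "behavior", "query", "DisableDeleteNotify"],
        capture_output=True, text=True, check=True,
    ).stdout
    print(out)
    print("TRIM appears enabled" if "= 0" in out else "TRIM disabled or unknown")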

Whoever told you never to defragment a drive doesn't realise that modern OSes are designed for SSDs and that the necessary housekeeping processes are rolled in.

While it's tempting to assume you know better, in this case the people who wrote the OS have optimised things for you. Keep calm, and let Windows defragment your drive.

  • 8
    I think this answer would be greatly improved by mention of logical block addresses vs physical block addresses. It's not possible to truly defragment an SSD: the filesystem-level procedure that produces sequential logical addresses will still leave data scattered all around the physical disk, due to the flash mapping, and that's OK because SSDs are random-access. – Ben Voigt Nov 29 '16 at 20:07
  • To be honest, that's a concept that I still haven't wrapped my head around. I do believe a full treatment of that would make an awesome answer for one of my questions and I have some rep floating around in my test account I'd be happy to award as a bounty for it. – Journeyman Geek Nov 30 '16 at 0:28

For completeness' sake:

Fragmentation depends on the file system (FS), not on the disk or the OS.

This means that the answer to your question doesn't really depend on Windows*; it's the SSD that is the special case here, because it works differently from an ordinary disk.

An FS is a way of organizing your files on the disk. The most common Windows formats are NTFS and FAT32. The most commonly used FSs on Linux are ext3/ext4, but there are many others (ZFS, XFS, JFS, ReiserFS, Btrfs, and more).

A disk is divided into blocks. You can imagine it as a long tape on which you can write data. When you write something to the disk, you use these blocks. Obviously you want related files to be written next to each other, and a single file to be written in contiguous blocks, so you don't have to jump around the tape. When things are scattered all over the place, that's what we call fragmentation. Defragmentation reorganizes them.
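To make the tape analogy concrete, here is a toy allocator in Python (the file names and sizes are invented, and real filesystems are far more sophisticated). A naive first-fit allocator splits file C across the hole left by a deleted file; a defrag pass lays every file out contiguously again:

    def allocate(disk, name, size):
        """Naive first-fit: grab the first free blocks, wherever they are."""
        blocks = [i for i, owner in enumerate(disk) if owner is None][:size]
        for i in blocks:
            disk[i] = name
        return blocks

    disk = [None] * 12
    allocate(disk, "A", 4)           # A A A A . . . . . . . .
    allocate(disk, "B", 4)           # A A A A B B B B . . . .
    for i in range(4):               # delete A, leaving a hole up front
        disk[i] = None
    print(allocate(disk, "C", 6))    # [0, 1, 2, 3, 8, 9] -- two fragments

    def defragment(disk):
        """Rewrite every file into contiguous blocks, in order."""
        files = dict.fromkeys(b for b in disk if b is not None)
        layout = [name for name in files for _ in range(disk.count(name))]
        return layout + [None] * (len(disk) - len(layout))

    print(defragment(disk))          # C and B are each contiguous again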

Obviously, how you organize things (the FS) determines how well they stay organized (whether there is fragmentation). If you organize your files sensibly from the start, you won't get fragmentation. That's what happens in some filesystems (e.g. the ext family): they organize your files on the fly, deciding placement before writing, so you don't have to defragment them except in special circumstances where there was no choice but to introduce a little disorder.

For more information about ext4 and how it prevents fragmentation, you can refer to this page.

Now, an SSD works differently: it isn't a tape. You get near-instant access everywhere. The whole point of defragmentation is to organize your files neatly so that you don't have to jump around; with an SSD, there is no physical jumping around. You don't care whether you'd have to run back and forth to the other end of the tape, because there is no tape.
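You can test the "no tape" claim on your own hardware. The rough sketch below (a toy benchmark; the OS page cache and the drive itself skew the numbers, so treat them as indicative only) times sequential versus random 4 KiB reads of the same file. On a spinning disk the gap is enormous; on an SSD it is far smaller, though usually not quite zero:

    import os, random, time

    PATH, SIZE, CHUNK = "testfile.bin", 128 * 1024 * 1024, 4096

    # Create a 128 MiB test file on whatever drive you run this from.
    with open(PATH, "wb") as f:
        f.write(os.urandom(SIZE))

    offsets = list(range(0, SIZE, CHUNK))

    def timed_read(order):
        with open(PATH, "rb") as f:
            start = time.perf_counter()
            for off in order:
                f.seek(off)
                f.read(CHUNK)
            return time.perf_counter() - start

    print("sequential:", timed_read(offsets))
    random.shuffle(offsets)
    # Caveat: the page cache likely serves this second pass from RAM,
    # flattering the random numbers; bypassing it is platform specific.
    print("random:    ", timed_read(offsets))

    os.remove(PATH)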

However, there are other ways of optimizing an SSD. See this topic for clarification.

*Almost: filesystem choice is correlated with the OS. Most Linux users use a different FS than Windows or OS X users do.

  • 3
    Exactly this. Fragmentation happens on FS level, regardless of storage medium. Some media are more affected by it than others, but there's always some impact, and it won't go away simply because you have an SSD. – Dmitry Grigoryev Nov 29 '16 at 10:37
  • 1
    The main difference is really Ext3 and FAT vs Ext4 and NTFS; but even then, applications, the OS and even the hardware do contribute, often significantly. Windows organizes files used at startup in the order that they are used, for example - allowing most of startup to use block reads instead of seeks. You could call that defragmentation - it's just that apart from defragmenting on the FS level (reducing file fragmentation), it also "defragments" groups of files to optimize access in a way the FS can't really help. You could imagine many similar optimizations, e.g. moving DLLs close to EXEs. – Luaan Nov 29 '16 at 11:47
  • There are many different layers that all matter in their own way. For example, striping is a form of intentional fragmentation that can improve performance. The physical organization of an HDD can also use this, if the HDD has multiple heads that can read from multiple platters concurrently. SSDs don't need to spin a platter and move heads to seek, but they still have an IOPS limit - and with the speed of SSDs today, this is often more important than raw bandwidth. SSD seeks are no longer fast enough to saturate the bandwidth. Fragmentation is a small part of the fundamental problem - caching. – Luaan Nov 29 '16 at 11:53
  • @Luaan: There's no such thing as an "SSD seek". I think you're actually talking about the per-command processing overhead. – Ben Voigt Nov 29 '16 at 20:10
  • 1
    @BenVoigt I think Luaan means that non-consolidated files still take longer to read even on SSDs. There's no seek delay, but there's a significant difference between sequential and random reads on SSDs. – user1306322 Nov 30 '16 at 8:24

The existing answers are great, but I have a few things to add to them...
I defragment my SSD and disable automatic TRIM, but for totally different reasons than those mentioned above:

  1. I want to be able to recover files or partitions if and when I accidentally delete something.
    No, it doesn't happen often, but the few times it has happened, it's been quite frustrating not to be able to recover things I could have recovered on a hard drive, even when I tried immediately after deletion.

  2. I expand, shrink, and even move partitions around every few months, and defragmenting and consolidating files first makes these operations far faster and less risky. You'd think you could trust partition managers by now, but as recently as December 2015 I've run into errors (corruption) on plain move/resize operations. And the smarter partition managers try to avoid running on heavily fragmented volumes before any damage is done (and usually, though not always, succeed).

  3. I use Linux sometimes, and I've been burned by its corruption of NTFS volumes as recently as a year or so ago. This isn't due to fragmentation specifically, but seeing that it can't even handle unfragmented files properly, I'm on the defensive and try to present as clean a volume to it as possible (and even then, I avoid writes most of the time).

The sad part about #2 and #3 is that people who haven't seen these problems with their own eyes always think I'm nuts and making this all up, or that my system must be broken somehow. But I've reproduced them quite a few times on multiple systems, and as someone who has written his own NTFS readers, I know a thing or two about file systems and kernel programming, with NTFS at the center of it. I know bugs when I see them. Nobody believes me, but I warn people anyway, given that I've seen these happen with my own eyes: if you mess with partitions or use Linux at all, I recommend you keep your drives defragmented. YMMV.

Oh, and don't forget to run TRIM manually every once in a while, at a time when you don't need to recover anything. Although, if I'm being honest, I have yet to see any benefit from it...
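For reference, both major platforms ship a command for exactly such a manual TRIM pass; the sketch below is just a thin Python wrapper around them (it assumes administrator/root rights, and the drive letter is only an example):

    import platform, subprocess

    def manual_trim(volume="C:"):
        """Ask the OS to re-TRIM all free space on a volume."""
        if platform.system() == "Windows":
            # defrag's documented /L flag performs a retrim, not a defrag
            subprocess.run(["defrag", volume, "/L"], check=True)
        else:
            # fstrim discards unused blocks on mounted Linux filesystems
            subprocess.run(["fstrim", "--all", "--verbose"], check=True)

    manual_trim()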

  • 1
    Excellent. It is good procedure to defrag (and even "zero") flash drives for the same reasons, particularly for photography. Contiguous files are easier to recover, even partially. Seek time aside, you will also benefit from prefetch (read-ahead) if the OS is clever enough. SSDs tend to be slower than memory (not always true, though). – mckenzm Dec 2 '16 at 16:44
  • Isn't your Linux advice actually "if you use Linux and mount NTFS file system read/write with it", not actually "if you use Linux at all"? Using Linux with ext4 or XFS on SSD is perfectly safe. (And FAT, for that matter, so you could make a FAT partition for data exchange if need be.) – mattdm Dec 4 '16 at 7:02
  • @mattdm: Yeah, I guess. (Weird, I thought I replied to this already...) – Mehrdad Dec 6 '16 at 1:15

Defragging an SSD promotes early failure of the lowest-addressed memory blocks.

See: http://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead

"Even with wear-leveling algorithms spreading writes evenly across the flash, all cells will eventually fail or become unfit for duty. When that happens, they're retired and replaced with flash allocated from the SSD's overprovisioned area. This spare NAND ensures that the drive's user-accessible capacity is unaffected by the war of attrition ravaging its cells."

"The casualties will eventually exceed the drive's ability to compensate, leaving unanswered questions. How many writes does it take? What happens to your data at the end? Do SSDs lose any performance or reliability as the writes pile up?"

  • 5
    This is an extreme case where the drive's having large amounts of data written and overwritten to it. What does this have to do with regular, routine defragging? – Journeyman Geek Nov 30 '16 at 0:55
    No, it is not an extreme case... It's a test that shows the inherent, eventual weakness of the product. Tests are designed to show limits. – jwzumwalt Dec 4 '16 at 4:43

Each cell on an SSD degrades slightly every time it is rewritten. The disk hides that wear by keeping track of which cells have been written to and writing to little-used cells first. Defragging means massively rewriting many cells, and therefore it wears down the SSD. HDDs benefit from defragging because performance improves when the servo arm doesn't have to move around as much, but SSDs are slowed far less by random placement of data.
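How much extra wear rewriting causes is governed by write amplification, which Schwern's comment below explains: drives write in small pages but erase in large blocks, so garbage collection must copy still-valid pages elsewhere before it can erase anything, and those copies are the "amplified" writes. The toy model below uses invented geometry (8 blocks of 16 pages, roughly 25% overprovisioning) and is nothing like real firmware, but it shows physical writes exceeding logical writes:

    import random

    PAGES_PER_BLOCK, NUM_BLOCKS = 16, 8
    LOGICAL_PAGES = (NUM_BLOCKS - 2) * PAGES_PER_BLOCK  # ~25% spare space

    blocks = [[] for _ in range(NUM_BLOCKS)]  # physical pages: logical ids
    valid = [[] for _ in range(NUM_BLOCKS)]   # parallel flags: still current?
    where = {}                                # logical page -> (block, slot)
    physical_writes = 0

    def fresh_block():
        """Return a block with room, garbage-collecting a victim if needed."""
        global physical_writes
        empties = [b for b in range(NUM_BLOCKS) if not blocks[b]]
        if len(empties) > 1:                  # keep one empty block in reserve
            return empties[0]
        spare = empties[0]
        victim = min((b for b in range(NUM_BLOCKS) if b != spare),
                     key=lambda b: sum(valid[b]))   # fewest valid pages
        for lpn, ok in zip(blocks[victim], valid[victim]):
            if ok:                        # relocating survivors = amplification
                blocks[spare].append(lpn)
                valid[spare].append(True)
                where[lpn] = (spare, len(blocks[spare]) - 1)
                physical_writes += 1
        blocks[victim].clear()                # erase the whole block at once
        valid[victim].clear()
        return spare

    active = fresh_block()

    def write_page(lpn):
        """Overwrite a logical page: old copy goes stale, new copy appended."""
        global active, physical_writes
        if lpn in where:
            b, i = where[lpn]
            valid[b][i] = False
        if len(blocks[active]) == PAGES_PER_BLOCK:
            active = fresh_block()
        blocks[active].append(lpn)
        valid[active].append(True)
        where[lpn] = (active, len(blocks[active]) - 1)
        physical_writes += 1

    random.seed(1)
    N = 10_000
    for _ in range(N):
        write_page(random.randrange(LOGICAL_PAGES))
    print(f"write amplification: {physical_writes / N:.2f}")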

  • 10
    Didn't downvote, but would be interested in seeing sources for this. – brichins Nov 28 '16 at 19:55
  • 3
    You're confusing write amplification with wear leveling. SSDs have to erase before they can write, but they erase larger blocks than they can write. This means a single write might do multiple writes to shuffle data around and free up a block to erase. As more writes happen, this process gets more complicated. That's write amplification. There are various ways to deal with this like TRIM and over-provisioning. Wear leveling is spreading out which cells you write to so they don't fail entirely. – Schwern Nov 28 '16 at 21:03
