13.
 
Re: Out of the Blue
Sep 12, 2011, 18:57
Tom
 
People are always claiming that defragging files stored on an SSD is pointless because it will make such a tiny difference in performance. These arguments appear to be based on insufficient understanding of how SSDs and operating systems work. It's ESPECIALLY bad when people confuse wear leveling with filesystem fragmentation. SSDs perform wear leveling internally - the OS knows nothing about it, even with TRIM. The OS performs fragmentation - the SSD knows nothing about it, even with TRIM.

If you dig a little deeper, maybe even do some measurements, you'll find that there IS a real performance difference and the reason for its existence is completely logical.

Perhaps you've noticed that people tend to report several measurements when evaluating SSD sequential read performance. This is because there's a BIG - sometimes HUGE - performance difference depending on I/O size. For example, 4K vs. 64K vs. 512K. I/O size is how many bytes are read or written when the OS says to the drive "hey, read/write X bytes from/to location Y".
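If you want to see this on your own machine, here's a quick sketch (Python; the file name is just a placeholder) that times sequential reads of the same file at several I/O sizes. Caveat: a fair measurement needs a cold cache - on a warm cache you're mostly timing memory copies, not the drive.

```python
import os
import time

def read_throughput(path, io_size):
    """Sequentially read the whole file in io_size-byte requests;
    return throughput in MB/sec."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(io_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / (1024 * 1024)

# Scratch file just so the sketch runs anywhere; substitute a real
# (uncached) file for meaningful numbers.
with open("io_size_test.bin", "wb") as f:
    f.write(os.urandom(8 * 1024 * 1024))  # 8 MB of random data

for io_size in (4 * 1024, 64 * 1024, 512 * 1024):
    mbps = read_throughput("io_size_test.bin", io_size)
    print(f"{io_size // 1024:4d}K requests: {mbps:8.1f} MB/sec")

os.remove("io_size_test.bin")
```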

Well guess what. When your file is split into hundreds or thousands of fragments (which DOES happen in the real world), and you want to read it all in, sequentially, as fast as possible, what sizes do you think the I/O requests going to the drive are going to be? Will all the fragments be aligned on nice BIG boundaries? No. Because of fragmentation, you can't just zoom over the whole file in big chunks. You have to read a little from here, a little from there - gather all those fragments together. Performance won't be as bad as all 4K I/Os, but it won't be as good as all large I/Os either.
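To make that concrete, here's a toy model (made-up fragment lists, not real NTFS metadata) of the request sizes a sequential read ends up issuing, assuming each request is capped at some maximum and can't cross a fragment boundary:

```python
def request_sizes(fragment_lengths, max_io=512 * 1024):
    """Request sizes a sequential read issues, assuming one request
    can't cross a fragment boundary and is capped at max_io bytes."""
    sizes = []
    for frag in fragment_lengths:
        remaining = frag
        while remaining > 0:
            req = min(remaining, max_io)
            sizes.append(req)
            remaining -= req
    return sizes

# Same 12MB of data, contiguous vs. split into 192 x 64K fragments:
contiguous = request_sizes([12 * 1024 * 1024])
fragmented = request_sizes([64 * 1024] * 192)
print(len(contiguous), "requests of", contiguous[0] // 1024, "K")  # 24 big requests
print(len(fragmented), "requests of", fragmented[0] // 1024, "K")  # 192 small requests
```

Same bytes transferred either way, but the fragmented file forces eight times as many requests at one eighth the size - exactly the regime where SSDs measure slower.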

For example, I just did a real-world test with two files in my downloads directory, stored on an Intel 120GB SSD with 49GB free, running Windows 7:
File 1, 12MB, 189 fragments: read at 135MB/sec
File 2, 10MB, 1 fragment: read at 175MB/sec

That's a 30% difference. Reading other non-fragmented files consistently gets me very near 175MB/sec.

Bottom line: the performance penalty is not nearly as severe as with traditional disks, but it's hardly as minuscule as people claim. Fragmentation is something that gets worse as the contents of the volume are modified, and it can get really bad if the volume is low on space for a long time. By never defragmenting, your disk activity WILL get slower and slower. People attribute this to SSD internal implementation details such as wear leveling, poor TRIM, whatever, but the truth is that in many systems fragmentation will be a factor in that degradation over time.

I've raised this point several times with SSD manufacturers as well as the team at MS responsible for optimizing Windows 7 for SSDs. They all went through the following stages: 1) deny, 2) downplay, 3) grudgingly accept.

Why did I bother writing all this? I just think it'd be nice if people thought a little more about this topic and did some measurements themselves before making inaccurate claims. Use contig from Sysinternals to measure fragmentation for individual files. I wrote my own program to measure sequential read throughput for individual files, but there's probably something else suitable out there.