On my early PCs I used to defrag regularly, just like the next guy. Today I practically never do, and I'll explain why I no longer need to.
A bit of historical background: When I started working with computers, disk access was still very much in the hands of the programmer - you addressed the exact cylinder/head/record location where you wanted to write your data, and you chose the record size yourself. There was no such thing as a standard sector or cluster size that applied to all software on that file system. For all I know, this is still the way disks are typically accessed under z/OS, one of the mainframe operating systems that originated in those days.
But back to PCs: Remembering that early disk drives would have capacities of, say, 5 Megabytes - no, not Gigabytes or Terabytes (!) - it was fairly normal then to format disks to physical sectors of 512 bytes and organize those in clusters of 1, 2 or 4 kilobytes. As disk capacity grew, it became necessary to combine more sectors into quite a bit larger clusters, so that cluster numbers would still fit in the fixed-width fields (16-bit, later 32-bit) allocated for them in the standard data structures. Eventually all that broke apart, and today we have disks with 4096-byte physical sectors and much larger clusters, with cluster numbers now stored in 64-bit fields in extended data structures - let me spare you the details of all the intermediate steps. When you have to choose nowadays between an MBR and a GUID partition table (GPT), this is what we are talking about: the choice became necessary because we now have to be prepared to run with disks of many Terabytes capacity, which means lots more and lots bigger clusters than in the old days.
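The addressing arithmetic is easy to check for yourself. Here is a small Python sketch (using the commonly cited 512-byte sector size; the exact limits vary with sector size) of why the classic MBR scheme tops out around 2 Terabytes while GPT effectively does not:

```python
# MBR stores sector addresses (LBAs) in 32-bit fields, and the classic
# sector size is 512 bytes - so the largest addressable offset is fixed.
SECTOR = 512                       # classic sector size in bytes

mbr_limit = 2**32 * SECTOR         # 32-bit LBA -> 2 TiB ceiling
print(mbr_limit // 2**40, "TiB")   # -> 2 TiB

# GPT stores LBAs in 64-bit fields; the same arithmetic gives a limit
# of roughly 8 zettabytes with 512-byte sectors - far beyond any disk.
gpt_limit = 2**64 * SECTOR
print(gpt_limit // 2**70, "ZiB")   # -> 8 ZiB
```

The same reasoning explains the cluster-size escalation in the old FAT days: with a fixed-width cluster-number field, the only way to address a bigger disk is to make each cluster bigger.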
Originally, disks or disk partitions would be formatted with one of the FAT file systems, the "biggest" one being FAT32, which can handle partitions up to 8 Terabytes (at a stretch) and files up to 4 Gigabytes (minus 1 byte). Windows will not format anything bigger than 32 Gigabytes in FAT32, although it will understand larger FAT32 partitions. Windows nowadays clearly prefers the more advanced NTFS file system, which can work with partitions of 256 Terabytes and files of 16 Terabytes (approximately).
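Those odd-looking FAT32 limits fall straight out of the field widths. A quick sketch (assuming the usual 28 usable bits for cluster numbers and a 32 KiB maximum cluster, which are the commonly documented figures):

```python
# FAT32 records a file's length in a 32-bit field, so the biggest
# possible file is 2**32 - 1 bytes: 4 Gigabytes minus one byte.
max_file = 2**32 - 1
print(max_file)                    # -> 4294967295

# Cluster numbers in FAT32 effectively use 28 bits. With the largest
# common cluster size of 32 KiB, the volume tops out around 8 TiB -
# the "at a stretch" figure mentioned above.
max_volume = 2**28 * 32 * 1024
print(max_volume // 2**40, "TiB")  # -> 8 TiB
```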
This is what we are working with today. Clusters can be up to 64 Kilobytes large. Very small files - those that fit inside their record in the Master File Table (MFT records are typically 1024 bytes, or up to 4096 bytes on newer disks) - are stored right there, resident in the MFT. Such files, as well as all files up to the cluster size, can by definition not be fragmented - you need at least two clusters to form two fragments.
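That "at least two clusters" rule can be written down directly. A minimal sketch (the function names are mine, and the 4 KiB default cluster size is just the common NTFS default):

```python
import math

def min_clusters(file_size: int, cluster_size: int = 4096) -> int:
    """Clusters needed to hold a file of file_size bytes."""
    return math.ceil(file_size / cluster_size)

def can_fragment(file_size: int, cluster_size: int = 4096) -> bool:
    # A file needs at least two clusters before it can have two fragments.
    return min_clusters(file_size, cluster_size) >= 2

print(can_fragment(900))            # -> False: fits in one 4 KiB cluster
print(can_fragment(5000))           # -> True: needs two clusters
print(can_fragment(70000, 65536))   # -> True even with 64 KiB clusters
```

With 64 KiB clusters, everything up to 65536 bytes is fragmentation-proof by construction - which already covers a very large share of the files on a typical disk.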
Now, on the final lap to defragmentation considerations, we need to understand which files this leaves us with that could be fragmented, how many fragments per file are likely, and how fragmentation of such files will affect your processing. The main thought here is that - compared to the early PC days, 37 years ago - a LOT fewer files will be fragmented. And most of those that are big enough to be "seriously" fragmented fall mainly into two categories: multimedia files and big databases, including things like email repositories and the like. The latter have the property that they may grow over time, and each added cluster is then likely to be discontiguous from the main body of the file, so such files tend to get rather fragmented. Multimedia files, on the contrary, tend to be moved around as a complete entity and are placed by the operating system into an area with a sufficient number of contiguous clusters, so we wouldn't expect much fragmentation there. (The ability to allocate enough clusters for the whole file up front did not exist in the early versions of DOS and Windows.)
One other pertinent thought: Whenever you copy a fragmented file to a disk with sufficient space, the copy will be (mostly) unfragmented. So, for instance, if you copy all the files on a disk (can't do that with your system partition so easily, unfortunately) to a new, maybe larger disk, the migration automatically defragments the lot.
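The mechanism is simply that a copy is a brand-new file, so the file system allocates its clusters from scratch. A sketch of the copy step (the file name is made up; whether the result is truly contiguous depends on the file system and free space, which plain Python cannot observe):

```python
import os
import shutil
import tempfile

# Stand-in for a database-style file that has grown - and fragmented -
# over time. One megabyte of throwaway data is enough for the demo.
workdir = tempfile.mkdtemp()
src = os.path.join(workdir, "mail.db")           # hypothetical file name
with open(src, "wb") as f:
    f.write(os.urandom(1024 * 1024))

# Copying creates a new file; the file system allocates its clusters
# fresh, which on a disk with enough free space is usually contiguous.
dst = src + ".copy"
shutil.copyfile(src, dst)

with open(src, "rb") as a, open(dst, "rb") as b:
    print(a.read() == b.read())                  # -> True: same data, new clusters
```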
Plus: Faster disks with bigger built-in cache also reduce the effect of fragmentation simply by giving quicker responses and in some cases by reading ahead for data that is expected to be needed next. Also, as cylinders (yes, they still do exist on spinning disks) have gotten so much bigger now, the somewhat slower cylinder-to-cylinder seeks are getting rarer as well.
In summary: On today's computers there is much less of a need for defragmentation. And most of us probably could happily live with a fragmented email repository. And maybe that email repository would look really good on your SSD, where fragmentation doesn't matter at all. All in all I have come to the conclusion that defragmentation is overrated and not worth it.
(And - for those of us who worry about wearing out the drive mechanics, which I suppose could happen - have a look at a graphical representation of the defragging process. Some of the older, really smart defragging tools show you how they move fragmented data out to free contiguous clusters and then write the data back. This is a real treat to watch, especially on a rather full drive. But it also gives you a very clear idea of how much wear and tear that will add to your drive mechanics ...)