Does Mac OS X need a disk defragmenter/optimizer?

Defragmentation and disk optimization in Mac OS X collectively represent an issue nearly as contentious as the debate over repairing disk permissions -- one camp argues that utilities purportedly performing these functions amount to little more than nostrums, while the other claims real-world performance gains from using the tools.

Apple's input on the subject is, as usual, less than definite. The company states explicitly in Knowledge Base article #25668 (published in 2003) that "you probably won't need to optimize at all if you use Mac OS X," then provides instructions for what you should do "if you think you might need to defragment."

According to Apple's advice, there are two scenarios under which you might need to defragment your drive:

  • You have many large files (such as digital videos)
  • Your disk is low on space (i.e. more than 90% full)

If these are in fact your only concerns, there are some basic remedies.

In the case of myriad large files, you can easily (as described by Apple) create a backup of all your important data -- essentially everything but operating system files -- then re-install Mac OS X and restore the files from backup. It's somewhat tedious, but should result in faster access to said large files. Or, you can use a utility like SuperDuper! to make a clone (or near-clone) of your startup drive, then simply format your drive using Apple's Disk Utility (located in Applications/Utilities) and copy the files back.

The reason the aforementioned methods work requires a quick explanation of what fragmentation is, and the difference between disk defragmentation and disk optimization:

Fragmentation, to put it simply, occurs when files are split up into multiple parts and stored in different locations on the hard drive. Hard drives need time to seek (move their point of access to a different location) before they can begin reading data. As such, a file in one contiguous lump will be accessed more quickly than a file in two segments, three segments, etc. Think of it like this: if your disk can transfer data at 20 MB per second, a 200 MB file will theoretically take 10 seconds to access/transfer if it is stored in one chunk. If it is split into 10 chunks, and your drive has a seek time of 8 ms, each chunk requires its own seek -- 80 ms of seek time in total, versus the single 8 ms seek a contiguous file needs, adding roughly 72 ms to the access time. That doesn't seem significant for a simple transfer or one-off access, but fragmentation can cause noticeable slowdown when you are repeatedly manipulating a file not stored in RAM, or working with a collection of smaller, fragmented files.
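That back-of-the-envelope arithmetic can be written out directly. The sketch below assumes the simplest possible model -- exactly one 8 ms seek per chunk and a constant 20 MB/s transfer rate -- whereas real drives have variable seek and rotational latency:

```python
# Rough access-time model for a fragmented vs. contiguous file.
# Numbers (20 MB/s transfer, 8 ms seek) match the example in the text.

def access_time_s(size_mb, chunks, transfer_mb_s=20.0, seek_ms=8.0):
    """Transfer time plus one seek per chunk (a simplification:
    real drives also have rotational latency and variable seeks)."""
    transfer = size_mb / transfer_mb_s
    seeks = chunks * seek_ms / 1000.0
    return transfer + seeks

contiguous = access_time_s(200, 1)    # ~10.008 s
fragmented = access_time_s(200, 10)   # ~10.080 s
extra_ms = (fragmented - contiguous) * 1000
print(f"{extra_ms:.0f} ms slower")    # 72 ms slower
```

The penalty per access is tiny; it only becomes noticeable when multiplied across thousands of accesses or hundreds of fragments.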

The filesystem consists of various differently sized portions of free space, separate from one another. As a hard drive fills, the filesystem must begin using smaller and smaller portions of free space to store data. As such, it will start to split files into smaller chunks and spread them to free portions at different locations on the disk -- i.e. fragment them.
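The mechanism can be illustrated with a toy first-fit allocator. The free-extent sizes below are invented for illustration and the policy is far simpler than what HFS+ actually does, but it shows why a nearly full disk forces new files into fragments:

```python
# Toy first-fit allocator: free space is a list of (offset, length)
# extents in MB. When no single extent is big enough, the file is
# split across several -- i.e. fragmented.

def allocate(free_extents, size):
    """Place `size` MB into free extents, splitting across extents
    as needed. Returns the list of fragments used."""
    fragments = []
    remaining = size
    for i, (offset, length) in enumerate(free_extents):
        if remaining <= 0:
            break
        take = min(length, remaining)
        if take > 0:
            fragments.append((offset, take))
            free_extents[i] = (offset + take, length - take)
            remaining -= take
    if remaining > 0:
        raise OSError("disk full")
    return fragments

# Plenty of contiguous space: the 200 MB file lands in one extent.
print(allocate([(0, 500)], 200))       # [(0, 200)]

# A nearly full disk with scattered 60 MB holes: four fragments.
scattered = [(100, 60), (300, 60), (700, 60), (900, 60)]
print(len(allocate(scattered, 200)))   # 4
```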

Fortunately, Mac OS X's HFS+ filesystem has some safeguards against avoidable fragmentation. First, the filesystem avoids reusing space freed by recently deleted files whenever possible, looking instead to larger, already free portions of the disk first. Second, Mac OS X 10.2 has a routine that clumps smaller portions of free space into larger portions on the fly. Finally, Mac OS X 10.3.x can automatically defragment some files through a process called "Hot-File-Adaptive-Clustering." Though these routines have undoubtedly made consequential fragmentation a less common occurrence, their efficacy is not beyond question. First of all, though they can reduce fragmentation of extant files, they can also cause the remaining free portions of the disk to become smaller, potentially leading to more fragmentation down the road as new files are written. Second, the automatic defragmentation routines will not work on certain files -- specifically those above 20 MB or those split into 8 or fewer segments.
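The size and extent-count thresholds just described can be expressed as a simple eligibility check. This is only a sketch of the cited thresholds: the real kernel routine applies further conditions (for instance, the volume must be journaled and the file must not have been written to very recently), so don't read it as a complete description of the behavior:

```python
# Simplified eligibility check for Mac OS X 10.3's on-the-fly
# defragmentation, based on the thresholds cited in the text:
# only files under 20 MB that are split into more than 8 extents
# are considered. (The actual kernel checks more conditions.)

MAX_SIZE_BYTES = 20 * 1024 * 1024
MIN_EXTENTS = 8

def auto_defrag_candidate(size_bytes, extent_count):
    return size_bytes < MAX_SIZE_BYTES and extent_count > MIN_EXTENTS

print(auto_defrag_candidate(5 * 1024 * 1024, 12))   # True: small, badly split
print(auto_defrag_candidate(50 * 1024 * 1024, 12))  # False: too large
print(auto_defrag_candidate(5 * 1024 * 1024, 6))    # False: few extents
```

This is why the large video files discussed throughout this article are precisely the ones the automatic routines leave alone.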

Back to the effectiveness of disk cloning and the susceptibility of large files to fragmentation: larger files are obviously more likely to be fragmented because they require more disk space, and as such may occupy many separate portions of the filesystem. Generally, cloning a disk with a tool like SuperDuper! will eliminate most fragmentation because data is written fresh to a new, blank filesystem, and can occupy appropriately sized portions of space. No data is being added to any particular file, meaning files will not spill into other unused portions. As such, the disk clone will have files stored in more contiguous segments than the original. When you copy the entire filesystem back to the formatted original drive, the same structure remains intact.

Disk optimization is a different process from defragmentation, with a different end. Rather than concerning single files split into multiple chunks and strewn across the disk (as with fragmentation), optimization deals with the organization of related files -- or those commonly accessed together -- into logical groups for enhanced performance (quicker access). Some of the most crucial files in this regard are those depended upon to launch applications. Literally dozens of frameworks are accessed each time a Cocoa application is launched, and if they are spread out in far-reaching locations on the disk, launch time can slow significantly.

How can these files become disorganized, as it were? One of the major culprits is system updates. When Mac OS X is first installed, it logically groups files close together on the disk to enhance performance. When a major system update -- such as an incremental Mac OS X release -- is applied, however, the old frameworks need to be deleted, and new frameworks need to be written. Unfortunately, the new framework version usually is not (and cannot be) written to the same portion of disk space occupied by the old version. Instead, it may be written at a location far away from the original group of critical, application-launch related frameworks. This is where disk optimization utilities like iDefrag come into play.

iDefrag is a disk optimization utility (also a defragmenter, as the name implies) that changes the arrangement of files on the disk to patterns that can theoretically result in increased performance. The tool comes with pre-defined layouts for the files in Mac OS X that the developers have identified as the most universally applicable for increased performance (the best speed for the greatest number of users), but you can also create your own "class sets" that allow customized grouping of files.

iDefrag also offers an overview of the most fragmented files on your system, which can be very revealing. [see screenshot below].

On an in-house Intel-based Mac with ample free disk space running Mac OS X 10.4.8, for instance, we found several large video files were highly fragmented, as were some large applications like the Photoshop CS3 beta. Some critical, oft-used system files were also significantly fragmented, including the Spotlight database. The Spotlight database and some other files (like caches, the sleepimage file used to store RAM contents for safe sleep, the Mail.app envelope index, etc.) are prone to repeated fragmentation because they are constantly being written to, hence seeking new portions of disk space to occupy. As such, they are likely to re-fragment quickly after a defragmentation is performed. Still, in our informal testing, we noticed significantly snappier operation of Spotlight and quicker response from Mail.app after performing only the least invasive of iDefrag's optimization routines, which can be performed on the current startup drive and allows other operations to take place simultaneously. These speed increases will likely diminish with time as the files re-fragment.

For another informal test, we checked the time it took to duplicate a 778 MB video file that iDefrag listed as being split into 1027 fragments. Prior to running a quick defrag routine, the file took 1 minute, 5 seconds to duplicate, or ~12 MB per second. After running the quick defrag routine, the file took 48 seconds to duplicate, or ~16 MB per second -- a significant improvement. Though many other factors can affect boot time, we were also able to shave several seconds off our MacBook Pro's startup (from 31 seconds to 26 seconds) with the quick iDefrag routine.
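The throughput figures above fall straight out of the measured times, as a quick sanity check shows:

```python
# Sanity-checking the duplication figures quoted in the text:
# a 778 MB file copied in 65 s before defragmenting, 48 s after.

def throughput_mb_s(size_mb, seconds):
    return size_mb / seconds

before = throughput_mb_s(778, 65)   # 1 min 5 s, pre-defrag
after = throughput_mb_s(778, 48)    # post-defrag
print(f"before: {before:.0f} MB/s, after: {after:.0f} MB/s")
print(f"speedup: {(after / before - 1) * 100:.0f}%")  # ~35%
```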

The bottom line is that users with large files will benefit most from defragmentation routines in Mac OS X. However, the disk optimization routines offered by tools like iDefrag can also be a boon to casual Mac OS X users looking for a speed boost. If you choose to use such a utility, you'll likely see the greatest improvement after applying a significant system update.

Feedback? Late-breakers@macfixit.com.

Resources
  • More from Late-Breakers