The reasoning goes something like this: HDDs have moving parts, and like everything else with moving parts, they will eventually fail. The more you use a drive, the closer it gets to that failure point, much as the more miles you put on a car, the more you expect it to break down.
Of course, with the rather naive file placement algorithms used by FAT and NTFS, you get high degrees of fragmentation, which, if left unchecked, causes far more wear and tear on a drive. It's something like the difference in wear between a car that does mostly highway driving and one that spends its life in stop-and-go city traffic.
FAT and NTFS take the easy way out when it comes to placing files. They divide the drive into small segments called clusters, typically around 4K each, sometimes larger, rarely smaller. Every time you save a file, the filesystem calculates how many clusters are needed to store it. A 5K file occupies 8K of space because it requires two clusters; the unused 3K is lost, and is called slack space.

Where FAT and NTFS fail miserably compared to more intelligent filesystems, like those you see on Linux, is in the placement itself: when you save a file, they just look for the first available cluster and start storing the file there. As you can imagine, this only exacerbates the problem, making a fragmented drive even more fragmented, until you have to come along with some tool and essentially defragment the drive by hand.

Linux users enjoy drives that stay largely defragmented on their own because of an intelligent filesystem. Linux filesystems still use the same basic cluster technique as FAT and NTFS, but instead of grabbing the first open cluster, they look for the first available group of clusters large enough to hold the whole file, falling back on the FAT/NTFS first-available method only when no such group exists. This keeps fragmentation to a minimum just through regular use of the drive. It doesn't provide the sort of optimal file placement some defragmenters do, so there is room for improvement, but one would hope that Microsoft will some day take a lesson from this.
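To make the difference concrete, here's a rough sketch in Python. It assumes a 4K cluster size and models free space as a simple list of free/used flags; `first_fit` mimics the FAT/NTFS grab-the-first-free-cluster approach, while `group_fit` mimics the smarter look-for-a-contiguous-run approach. The function names and the toy disk layout are made up for illustration, not taken from any real filesystem code.

```python
import math

CLUSTER = 4096  # assuming the common 4 KiB cluster size

def clusters_needed(size_bytes):
    """A file always occupies whole clusters."""
    return math.ceil(size_bytes / CLUSTER)

def slack_bytes(size_bytes):
    """Wasted space in the file's last cluster."""
    return clusters_needed(size_bytes) * CLUSTER - size_bytes

def first_fit(free, n):
    """FAT/NTFS-style: take the first n free clusters, contiguous or not."""
    picked = [i for i, is_free in enumerate(free) if is_free][:n]
    return picked if len(picked) == n else None

def group_fit(free, n):
    """Smarter: first contiguous run of n free clusters, else fall back."""
    run = 0
    for i, is_free in enumerate(free):
        run = run + 1 if is_free else 0
        if run == n:
            return list(range(i - n + 1, i + 1))
    return first_fit(free, n)  # no run big enough; behave like FAT/NTFS

def fragments(picked):
    """Count how many contiguous pieces the file ended up in."""
    return 1 + sum(1 for a, b in zip(picked, picked[1:]) if b != a + 1)

# A 5 KiB file: 2 clusters, 3 KiB of slack.
print(clusters_needed(5 * 1024), slack_bytes(5 * 1024))  # 2 3072

# A toy disk with scattered free clusters (True = free):
free = [True, False, True, True, False, True, True, True, True]
print(fragments(first_fit(free, 3)))  # 2 (clusters 0, 2, 3)
print(fragments(group_fit(free, 3)))  # 1 (clusters 5, 6, 7)
```

The same file ends up in one piece under `group_fit` but split in two under `first_fit`, and on a drive that's been in use for a while that gap only widens.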
In summary, there's a happy medium you want to find. Too little defragmenting causes unnecessary wear and tear on a drive, but so does too much. You probably don't want to defragment every day, or even every week, but once a month probably falls in that happy-medium range.