This is my final (long) post on the subject, since I really have no interest in wasting my time on this silly discussion anymore. I don't think you have any clear understanding of what you are talking about; you are merely throwing around ideas/concepts without understanding them or their relevance to the discussion at hand.
Also, forget about the consoles/linux whatever. Let's stick to HDDs + fragmentation for now.
It doesn't quite work that way. You're working on the assumption that the drive's speed is variable... Which they're not. The platters are always spinning at a fixed velocity. For desktop drives, it's usually 5400 or 7200rpm. The drive heads generate insignificant amounts of heat relative to the motor that keeps the drive platters moving, even if they were in constant motion. I doubt they could raise the drive's operating temp by even a single degree Celsius.
Beyond that, platter sizes are fixed. HDD storage capacity is increased by increasing the density at which data can be packed onto the platter. A 1GB drive might have only allowed for say 10KB/in^2 (I have no idea what it actually is, I just made that number up for example's sake) while a 100GB drive might allow for 1MB/in^2. Same surface area, you're just packing the data in tighter. All of this means that the drive heads have a lower probability of needing to move great distances to get any given bit of data.
It is largely the spindle motor that generates the most heat from the work it has to do to keep the platter(s) spinning. The heat output from the actuator arm moving around may not be much. BUT, remember that moving the arm around excessively also requires power, and increased power consumption = increased heat output (see Intel link further down). So there will be a contribution, but it may not be as much as from the spinning platters.
I know that the platter angular velocity is not variable. But when the actuator arm assembly has to do extra, unnecessary work moving the arm across the platter many times a minute, it will increase wear and reduce performance. WHY subject it to extra work unnecessarily?
Areal density is currently ~150 Gbits/sq inch for ~300 GB drives (higher with perpendicular recording) and expected to increase as perpendicular recording finds its way into more consumer-level drives. BUT, file sizes have also grown in parallel. The average home user deals with much larger file sizes than even 3-4 years ago, with games, photos and video files ever increasing in size. Do you have numbers from 'scientific studies' to prove that increases in areal density have overcome the negative effects of fragmentation, or is it merely your guess? All evidence points to the contrary.
There are also multiple read/write heads on a drive, so if we make a basic assumption about the logic programmed into the drive's controller, it will figure out which head is closer to the data requested, and use it to read it in. Things will then be assembled in the drive's buffer memory before being sent off to the system RAM.
You are totally wrong here! There is only a SINGLE read/write head for one surface of a platter. The second r/w head is on the opposite surface of the platter. Show me a single desktop drive with multiple r/w heads for the same platter surface.
What is more, the actuator axis for those multiple heads is the same so the heads will move as a single unit synchronously, but on opposite surfaces of the platter. Therefore, having multiple heads is not an advantage, since they are not on the same surface. In fact, the industry trend is towards reducing the number of arms/heads and platters since having multiple arms stresses out the actuator assembly. Google, and all this will be clear. Earlier, drives used to have more platters and more arms/heads, but the preference these days is to greatly minimize the number of moving components.
Your basic assumptions about drive logic are also not right. The drive will still collect data for a single fragmented (or otherwise) file in the order it was written, but even if that were not so, the actuator arm still has to move across the platter working needlessly to collect all the fragments rather than reading it off contiguously. The IDEAL scenario is when the file is read/written sequentially as a contiguous unit.
NCQ changes the path profile of the drive head to some extent, but not from the POV of fragmentation. (Anyway, NCQ is better for asynchronous I/Os on servers, it is not geared towards home systems)
Which brings us to the real performance bottleneck when it comes to HDDs, the system bus. The IDE bus is many many times slower than any other bus in the system. Getting data from the drive to the CPU is where the real performance bottleneck comes into play.
Wrong again! The bottleneck is not the ATA bus width, but the mechanical nature of the operation of the arm+ head moving across the platter.
Wasted movement = lower performance!
Greater unnecessary movement between tracks + rotational latency + settle time = More unnecessary decrease in performance.
The mechanical operation by its very nature is orders of magnitude slower than the data transfer across any electronic system bus. This ought to be obvious!
So, by having fragmentation, you are weakening the already weakest link in the I/O chain and increasing the bottleneck from the harddrive!!!
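To put rough numbers on that weakest link, here is a back-of-envelope sketch comparing the mechanical cost of one extra fragment against the time to move that fragment's data across the bus. Every figure below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope: mechanical latency vs. bus transfer time.
# All figures below are illustrative assumptions, not measurements.

AVG_SEEK_MS = 9.0                        # assumed average seek for a 7200rpm desktop drive
RPM = 7200
AVG_ROT_LATENCY_MS = 0.5 * 60_000 / RPM  # half a revolution on average (~4.17 ms)
BUS_MB_PER_S = 133.0                     # ATA-133 theoretical max

FRAGMENT_KB = 4                          # assumed size of one extra fragment (one cluster)
transfer_ms = (FRAGMENT_KB / 1024) / BUS_MB_PER_S * 1000

mech_ms = AVG_SEEK_MS + AVG_ROT_LATENCY_MS
print(f"mechanical cost of one extra fragment: {mech_ms:.2f} ms")
print(f"bus transfer of that 4 KB:             {transfer_ms:.3f} ms")
print(f"ratio: roughly {mech_ms / transfer_ms:.0f}x")
```

Even if you assume a generous 64 KB per fragment, the mechanical cost of the extra seek still dominates the bus transfer by well over an order of magnitude.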
The fastest SATA drive has a burst transfer rate of 1Gbps IIRC. That's about half of what the aging PCI bus can do. The FSB, the connection between the RAM and CPU, is 50-60X greater at least. So most of the time, the CPU is just sitting around waiting for something to do. If you want to get into CPU mechanics like data starvation, you can get an even better picture of what's really going on.
Your numbers are off again. The fastest SATA (2.0) drives have theoretical burst speeds of 3.0 Gbits/sec (~300 MBytes/sec), with sustained transfer rates of about 50-80 MBytes/sec. Newer Hitachi and Seagate drives with the 32 MB cache advertise something closer to 100 MBytes/sec sustained transfer IIRC.
Parallel ATA-133 bus ~ 133 Mbytes/sec theoretical max
Serial ATA 2.0 bus ~ 300 Mbytes/sec theoretical max (see above)
PCI bus (32 bit/33MHz) ~ 133Mbytes/sec
Draw your own conclusions!
Of course, the FSB/HyperTransport is always far faster.
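The same conclusion falls out of a quick calculation: time the buses above against a realistic sustained drive rate (the 60 MBytes/sec figure is my assumption for a mid-range 7200rpm drive):

```python
# Time to move a 700 MB file over each theoretical bus maximum quoted above,
# versus a realistic sustained drive transfer rate (60 MB/s is an assumption).
FILE_MB = 700
rates_mb_s = {
    "ATA-133 bus":     133,
    "SATA 2.0 bus":    300,
    "PCI bus (32/33)": 133,
    "drive sustained":  60,   # assumption: mid-range 7200rpm drive
}
for name, rate in rates_mb_s.items():
    print(f"{name:>16}: {FILE_MB / rate:6.1f} s")
```

The drive's sustained rate is the slowest number in the list, so the mechanics of the drive, not the bus, sets the pace.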
Physics is how I know. Doesn't even require a great understanding of physics, just a moderate understanding of Newton's laws. Particularly the law regarding inertia. Acceleration is calculated as a force acting on an object over a distance for a period of time. Just getting a car weighing say a metric ton moving requires a significant amount of force. Accelerating that car from 0 to 60mph in 10 seconds vs 30 requires a considerably greater amount of force, and an equal and opposite force is acting on the car. So if it's say 10,000N (again, a figure merely for example purposes) of force to go from 0 to 60 in 10 seconds, and only 3,000N to do it in 30 seconds, that's a difference of 7,000N that the frame of the car doesn't have to absorb.
Acceleration = derivative of velocity w.r.t. time, i.e. a = dv/dt.
Force = mass*acceleration, i.e. F = m(dv/dt).
Your force numbers ought to come out as roughly 2.7 kN and 0.9 kN if my rough calculations are right. Since you are merely using example numbers, I will not dwell on this.
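The arithmetic, for anyone who wants to check it, using F = m(dv/dt) with the quoted 1 metric ton car going 0-60 mph:

```python
# F = m * (dv/dt) for the quoted car example: 1000 kg, 0 to 60 mph.
m = 1000.0          # 1 metric ton in kg
dv = 60 * 0.44704   # 60 mph in m/s (26.82 m/s)
for dt in (10, 30):
    print(f"0-60 mph in {dt}s: F = {m * dv / dt / 1000:.2f} kN")
# prints 2.68 kN for 10 s and 0.89 kN for 30 s
```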
Now, let me ask: what are the actual forces experienced, and by what components during
(1) Defragging a drive for a short time
(2) Running a fragmented drive for a long time without defragging.
Unless you have the hard numbers for these, your claims that defragging is worse for the drive over the long run than letting it run fragmented, have NO value.
In fact, the link below debunks your claim as a myth.
Defragmenting the hard drive will stress the needle (head actuator).
This myth has some truth in it, albeit misplaced. Defragmenting the hard drive may involve a lot of seeking as the hard drive rearranges its data in a contiguous fashion. This allows the read/write heads to read large amounts of data without seeking all over the platters.
However, after defragmentation, the hard drive no longer needs to seek all over the platters for your data. This reduces the amount of head actuator movement as well as greatly increasing the hard drive's performance.
Therefore, while it may be technically correct to say that defragmenting your hard drive will stress the head actuators, the truth is defragmenting your hard drive will reduce the amount of seeking from then on and thus reduce the head actuators' workload.
Of course, this all really becomes pointless when you factor in angular velocity, which is what most defragmenting proponents don't seem to quite understand when they base their arguments on it.
Angular velocity of what? Please do explain with hard numbers.
Say I have a drive with absolutely no fragmentation. I need a file that is on the inner most ring of the drive, and then I need a file on the outer most ring. The drive arm is still going back and forth quite a bit, so your whole argument about fragmentation reducing head movement is invalidated.
Moving between two different files requires a seek either way; nobody disputes that. The point is that within each file the arm reads contiguously instead of hunting down scattered fragments, so fragmentation only adds extra seeks on top of that one move. The argument is not invalidated.
Now, if you take and arrange frequently used files on the outer edges of the drive platter to take advantage of the increased angular velocity, you might have a point.
Increased angular velocity of what? The platter's angular velocity is constant; it is the linear velocity under the head (and hence the data rate) that is higher at the outer edge. I suspect I know what you are talking about, but let me hear it from you anyway. Don't mix up angular velocity and linear velocity.
Anyway, yes, rearranging the frequently used files on the outer edge will improve access speed, but what happens when those files are fragmented? The placement method's criterion is important: is it for frequently accessed (without modification) or frequently modified files? For the latter case, it will quickly lose its benefit unless some preallocation is made for the files to grow contiguously in case of fragmentation (how, I don't know). Otherwise, you will have to defrag and place those files there again.
Of course even then, the benefits really only affect larger files, which you don't tend to find on Joe Average's computer now do you? You tend to find a bunch of smaller files, under 1MB in size.
Huh? WTH?! Under 1 MB in size?!!! This is not 1995! Even a 128kbps mp3 file is a few MB! A jpeg from a 5 MP digital camera is about 2 MB. A DivX file for a 2 hr movie can easily run to over 700 MB. It is fallacious to claim that the 'average' user has mostly <1 MB files! Word or PowerPoint documents can easily hit a few MBs. I am not even touching games, home-made movies, downloaded crap etc.
You ought to give a little more credit to Joe Average. He may not know much about the workings of the PC, but he does use the PC for more than mere e-mail these days!
Even the MFT for a small partition with few files can run into MBs or much more. Fragmentation of the MFT itself will reduce performance.
So without being able to predict with perfect, or near perfect, accuracy the order of the files that are going to be requested so that they can be arranged sequentially on the drive platter... Well, even the angular velocity argument doesn't really amount to much.
Er... what is your point? Defragging contiguously rearranges the bits and pieces of a single file. It does not rearrange all files contiguously unless you do free-space consolidation (and even then maybe only to a limited extent). It reduces the time/resources required to access that single file compared to when it was fragmented.
I skimmed over those links you posted, and can't seem to find any obvious credentials as to why I should take them seriously. I don't just take everything I may read on Wikipedia at face value, and anyone can register a fancy sounding domain and put up any kind of drivel they want. I don't require a lot of fancy degrees from highly respected schools, but I do require SOMETHING to tell me why I shouldn't just count this person as one more nutter. Like maybe a degree in computer science, and working as a kernel developer for Microsoft for some time period.
There is nothing wrong with the information posted in those links. It's all correct, and easily corroborated with Google searches. LOL, those links have suddenly become disreputable because YOU - some anonymous guy on the net with no credentials whatsoever, without any proof whatsoever - say so. Sure!
Yeah, sure you want kernel developers from MS to prove things to you. I have an idea-why don't you ask Bill Gates himself to come down and prove to you one-on-one why fragmentation is irrelevant for the desktop user.
And by the same token, just because you say defragmenting does have an effect on performance doesn't make it true.
Oh, okay. How about if Intel says it does? How about StorageReview?
Now if you doubt the credibility of these sources, you are seriously out of your mind.
It's an article on power consumption of laptop HDDs.
Some excerpts (italics mine):
File fragmentation causes serious problems from a performance point of view, as well as from how it affects the user's experience. Let us first look at performance impact.
Sequentially reading a fragmented file will take much longer than reading a defragmented file. This is due to the seek time and rotational latency penalty incurred while gathering data from non-contiguous clusters. This latency is greatly minimized in a defragmented file since the data is in contiguous clusters.
Below is an example of reading a 256MB file that was initially fragmented and later defragmented. It took more than twice as much time to read a fragmented file and caused a significant increase in total energy for the same task. (italics mine)
They also write:
The cost of fixing a fragmented file one time far outweighs the energy penalty associated with multiple access times.
So don't defrag while on batteries. Defrag only when the laptop is plugged into the AC mains. Most defraggers have this option to automatically suspend defrag operations when on battery power. Not an issue for desktop HDDs.
What StorageReview (a very respected site for HDD info/news etc.) says:
A fragmented file system leads to performance degradation. Instead of a file being in one continuous "chunk" on the disk, it is split into many pieces, which can be located anywhere on the disk. Doing this introduces additional positioning tasks into what should be a sequential read operation, often greatly reducing speed. For example, consider a 100,000 byte file on a volume using 8,192 byte clusters; this file would require 13 clusters. If these clusters are contiguous then to read this file requires one positioning task and one sequential read of 100,000 bytes. If the 13 clusters are broken into four fragments, then three additional accesses are required to read the file, which could easily double the amount of time taken to get to all the data.
Defragmenting a very fragmented hard disk will often result in tangible improvements in the "feel" of the disk. To avoid excessive fragmentation, defragment on a regular basis; usually once every week or two is sufficient. See the system care guide for more.
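The cluster arithmetic in that StorageReview passage checks out; here is a quick sketch of it (the timing figures are my own assumptions, for illustration only):

```python
import math

# The StorageReview example: a 100,000-byte file on 8,192-byte clusters.
FILE_BYTES = 100_000
CLUSTER_BYTES = 8_192
clusters = math.ceil(FILE_BYTES / CLUSTER_BYTES)  # 13 clusters, as the article states

# Assumed timing figures, for illustration only:
POSITION_MS = 13.0          # one positioning task (seek + rotational latency)
XFER_MS_PER_CLUSTER = 0.15  # reading one 8 KB cluster at roughly 55 MB/s

def read_time_ms(fragments: int) -> float:
    # Each fragment costs one positioning task; the data transferred is the same.
    return fragments * POSITION_MS + clusters * XFER_MS_PER_CLUSTER

print(f"contiguous (1 fragment): {read_time_ms(1):.1f} ms")
print(f"split into 4 fragments:  {read_time_ms(4):.1f} ms")
```

With these assumed figures, the 4-fragment read takes more than three times as long as the contiguous read, which is entirely consistent with the article's "could easily double" claim.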
I happened to have Anandtech's (yeah, Anand has a CS degree) 'Guide to PC Gaming Hardware' lying around, and it clearly states on pp. 410-411 the need for defragging. I am not going to type out the whole 2 pages, but one sentence stood out:
Defragmenting your hard disk should be done as frequently as possible
Now, don't tell me that AT is not a reputable source!
No, actually, logic DOESN'T show any such thing. It's one of those things that is deceptively seductive, but falls apart quickly when you start digging past the surface.
The quote from that Microsoft article talks about server files, and I specifically stated I wanted evidence regarding Joe Average who does things like browse the web, read/write email, and little else.
It clearly mentions workstations; maybe you are wilfully ignoring that.
And who the heck says that the average user only browses the web and does email? Maybe that description applies to a few. But millions of 'average' users play games, browse the web, do email, use applications like Photoshop, P2P clients, MS Office, store/view photos from digital cameras, create/play music and video files on their PCs. All these activities lead to fragmentation. Heck, mere installation/uninstallation of programs creates fragmentation. See a few lines above.
I also automatically disqualify anything from ZDNet/Cnet. Even before Cnet bought ZDNet from Ziff-Davis, it had absolutely no credibility on anything more technical than choosing a desktop wallpaper. Cnet is the same. I also had to create an account to get the whitepaper, which I'm unwilling to do. Any kind of access restrictions on the information immediately causes me to be suspicious. If the person(s) who collected the data stands by their work, they shouldn't have any problem putting it up for the world to see. Putting conditions on being able to see that data makes it suspect. Which is of course assuming Cnet/ZDNet had any concept whatsoever of conducting a scientific study.
The one thing I will credit you on, is that the summary talked about the effects on web browsing. Though I somehow suspect it has more to do with web server performance rather than client side browsing. I will also give you credit for not posting a link to some study done by the makers of Disk Keeper like someone did the last time I asked this question. So, you've at least done a better job than that person.
LOL, what rubbish... so now you are the final arbiter of the credibility of well-respected sites? Just because your claims have been proved totally wrong by Microsoft and other sources, you are hiding behind stupid excuses about ZDNet/Cnet's lack of reliability. If you don't want to look up information, suit yourself, but don't claim that information does not exist just because you are unwilling to look it up.
Anyway, WTH are you doing on Cnet forums if you disdain it so much?
And the same could be said for defragmenting. I may be some "random guy", but I include a defense of my position along with, so it's not just me saying, "Don't bother defragmenting, because I, some random guy on the internet, said so!"
The 'defense' of your position consists of pseudoscience, leaps of logic, misinformation and some plain good old-fashioned disingenuousness (yeah, Cnet is a disreputable site and you want MS kernel developers to prove things to you... lol)!
Why don't you provide a few links to scientific studies done by reputed MS kernel developers (lol!) showing that defragging provides no benefit at all.
Those who want to defrag can defrag, those who don't want to, need not. Not my problem anymore.
I tried to clearly explain the benefits of defragging, but if someone wants to believe that defragging is a big, bad thing that's going to kill your PC ..please feel free.
No more in this thread from me.