HolidayBuyer's Guide

Windows Legacy OS forum

General discussion

Regarding ?Defrag? programs

by eddie11013 / August 14, 2007 11:48 PM PDT

My general understanding is that it isn?t necessary to ?defrag? very often. An occasional defrag should do the trick. Unless, maybe, you?ve done a lot of stuff recently. I do have several questions though that I hope someone can clear up for me and anyone else that might be interested.

1. Does a ?pre? windows defrag program help any at all? If so, how often should it be run?
2. Is this different than the pre installed Microsoft chkdsk program? Isn?t it also a ?pre? windows defrag program?
3. Is one or the other better? Or are they basically the same?
4. What happens if, on occasion, you use a ?pre? windows defrag program and the chkdsk program? Is there a conflict?
5. Does one try to ?write? a certain way and the other ?rewrites? a different way and therefore your wasting your time and effort?
6. Does a ?pre? windows defrag program make a ?windows? defrag program unnecessary?
7. Since it has been stated that the windows defrag program works fine, but isn?t the greatest, what if you use a different combination, like, JkDefrag and a ?pre? windows program? I mention JkDefrag because, if you wanted, it can be set to defrag daily or weekly. Set and forget.
8. Assuming one is doing a ?lot? of stuff, when does a registry defrag program enter the picture? Or is one ?ever? needed? Is this just a marketing thing?
9. And, of course, there are also registry cleaners. Again, assuming your doing ?lots? of stuff.
10. And also there?s also programs like ?CCleaner? program, which cleans out cookies, temp files, etc.
11. Is there an ?order? that one might want to run these programs? Even if it is only ?monthly? or even less often?

So, my inquiry, isn?t which program is better, as I would assume there would be a lot of opinions about that. But rather, what combo might one want to have on hand, or run, on those occasions that you ?feel? the need to defrag. Again, if we look at the ?CCleaner? program, in basically one step, it does a pretty good job of cleaning out all sorts of stuff. If you wanted, it can even be set to ?clean? automatically. Set and forget. If one were to do the same steps, manually, it would take much longer, and maybe, still not be as effective as the CCleaner program. I have Zone Alarm firewall. Basically its set and, for the most part, forgotten about. I have the AVG anti virus program, set to do auto scans on a daily basis. Set and forget.

I?m not looking for the best program, as for ease or convenience, or best way and in what proper order. As an example, would you do the following, and in this order:
1. CCleaner
2. Registry Cleaner
3. Registry Defrag
4. Windows defrag program
5. Pre windows defrag program/chkdsk

Sorry for the long post. I have windows xp pro.

Thanks in advance,
Eddie

Discussion is locked
You are posting a reply to: Regarding ?Defrag? programs
The posting of advertisements, profanity, or personal attacks is prohibited. Please refer to our CNET Forums policies for details. All submitted content is subject to our Terms of Use.
Track this discussion and email me when there are updates

If you're asking for technical help, please be sure to include all your system info, including operating system, model number, and any other specifics related to the problem. Also please exercise your best judgment when posting in the forums--revealing personal information such as your e-mail address, telephone number, and address is not recommended.

You are reporting the following post: Regarding ?Defrag? programs
This post has been flagged and will be reviewed by our staff. Thank you for helping us maintain CNET's great community.
Sorry, there was a problem flagging this post. Please try again now or at a later time.
If you believe this post is offensive or violates the CNET Forums' Usage policies, you can report it below (this will not automatically remove the post). Once reported, our moderators will be notified and the post will be reviewed.
Collapse -
If you ask me
by Jackson Douglas / August 15, 2007 1:15 AM PDT

Defragmenting at any point is incredibly overrated for people who aren't doing things that live and die by disk access speeds, like running a large database server or doing lots of high end AV work. All you have is subjective "It seems faster" anecdotal evidence... And after you spend an hour or two on some process you think is supposed to make your system faster, whether or not it really is faster, you're going to convince yourself it is... The placebo effect strikes again. For the average person, the amount of time spent defragmenting, plus the stress put on the drive by the entire process, negates any meager performance gains you may receive, and then some.

And I take a similarly dim view of registry cleaning programs. It's another very clear cut case of ignorance leading to misinformation and even exploitation. The registry is nothing particularly special. It's a very simple, very poorly done, flat file database. Something a first year computer science student could recreate in an afternoon if they were so drunk they could barely sit up straight. All it does is store program settings and other "metadata". It's not this mysterious black box component with all sorts of mystical properties that can have a profound effect on your computer's well being. If you remove a program, and it leaves bits in the registry, the only thing it really costs you is a tiny bit of disk space, and maybe a few nanoseconds (imperceptible to the human brain) when some program needs to read/write data in the registry that is physically located past that point.

The only kind of registry cleaning program you need are the likes of Adaware or Spybot, that get rid of bits of malware hiding in the registry. Any others are too expensive even if they're free, IMO. And I'd never heard of a registry defrag program, but that's a whole new level of stupid.

If you want a smooth running system, it's not rocket science, it's actually pretty simple. Just follow the guidelines I list below, and the number of problems you have will drop significantly. Your system will also continue to perform at a near constant level if you implement these suggestions immediately after a format and fresh install of Windows.

These are only suggestions, but the more you follow the better your odds of avoiding problems will be.

1) Avoid using Internet Explorer as much as possible
1a) Use Mozilla Firefox, Seamonkey, Safari, or Opera
2) Be sure to always install security updates for Windows
2a) One and only time it's safe to ignore #1
3) Firewalls are important
3a) XP's firewall is good enough, but feel free to use a third party one
4) Anti-Virus programs are important
4a) If you need a free AV program, try AVG Free and Avast
5) Avoid using any pirated programs
6) Avoid using any file sharing (P2P) programs
7) Avoid using Outlook or Outlook Express
7a) I would recommend Mozilla Thunderbird or web based email such as Gmail

I personally follow every one of those suggestions, and it's amazing how few problems I have with my system. Last big problem I had, was a few files had incorrect date stamps on them, and so nightly Firefox alpha builds wouldn't run. That was a couple of months ago. I'm constantly playing with files in the hundreds of MB... Moving them around, deleting them, making new ones... Never really noticed any performance decrease, and I've never once defragmented.

Collapse -
for set-it-and-forget-it defrag
by oleicacid / August 15, 2007 3:21 AM PDT

The best automatic defragmenter IMO is Diskeeper. It's truly set-it-and-forget-it. I have always recommended Diskeeper because it works very well for me. It's not free but there are free trial versions that you can download and try out at the diskeeper site.

BTW, I don't think defragging is overrated at all, if you use your PC for anything of consequence. Heavy fragmentation can slow down a lot of programs that frequently access the HDD, especially games..play Oblivion on a heavily fragmented system and tell me how it feels like. Try burning a DVD from heavily fragmented files....there is a good chance that the end result is a coaster even with buffer underrun protection. Badly fragmented movie files will stutterand playback will not be smooth. Et cetera et cetera.

Heavily fragmented drives also have to do more work to gather data from the platter, so they generate more heat needlessly besides wearing out faster.

There is nothing good about having fragmented files as far as I can see.

FAT was much more prone to fragmentation than NTFS is, but that does not mean NTFS escapes the effects of fragmentation. If the only applications you use on the PC are the web browser and notepad, then yeah, you don't need to worry about fragmentation Wink

Collapse -
defrag
by jdog1234 / August 15, 2007 3:41 AM PDT

in total agreement with your choice of diskeeper for defrag utility....but i use perfectdisk and it works just the same..great layout easy to use and set-up and it just takes its course. i have the daily defrag time set for 3 a.m. so it works while im sleeping awesome

Collapse -
Do that quite often
by Jackson Douglas / August 15, 2007 5:46 AM PDT

The large video files I play with are usually rips of my own DVD collection, and guess what, they play just fine. No skipping, no stuttering, no nothing. I even stream these files from my desktop to my Xbox to view with Xbox Media Center... Nothing.

I've burned several CDs and DVDs in just the last few hours as well... Not a single coaster out of the lot. Maybe it's because I'm burning Linux and FreeBSD ISO images, but either way, not a single problem.

I gave up on PC gaming years ago, when I figured out that console gaming is cheaper and a better overall experience. If I'm ever in the mood for another cookie cutter first person shooter title, that offers little beyond slightly better graphics than the last cookie cutter FPS I played... Then who knows, but I'm sure I'd have to replace my GeForce FX 5700 Ultra with something better, and for the price of what it'd cost me to keep my PC up with the latest games for about 6 months, I could buy a PlayStation 3 and be essentially guaranteed 10 years worth of games.

I also don't buy the hardware based argument about fragmentation causing the drive heads to go back and forth more often, etc, etc. What exactly do you think the drive is doing while you're defragmenting, playing scrabble? There's few tasks more intensive than defragmenting for a HDD. You either have a near constant small level of stress on the drive due to fragmentation, or a slightly lower constant level of stress from a non-fragmented drive, but tack onto that a huge amount of stress from the defragmenting process. It's like how accelerating quickly is very hard on the frame of your car, and stopping quickly is a great way to burn out your breaks. Whereas accelerating at a slower rate, and braking less over a longer distance, will greatly increase the lifespan of your car's components.

Also, when it comes to file placement on a drive, FAT and NTFS work exactly the same way. They both use the same simple FIFO method, so are basically equally prone to fragmentation. Of course if Microsoft just did what the Linux developers did years ago, this whole issue could be a non-issue. The Linux developers, many years ago, came up with what is essentially a self-defragmenting file system. Instead of a simple FIFO method of file placement, it looks for the largest chunk of open space that will fit the entire file. That failing, it tries to split the file up into as few pieces as possible. So, fragmentation levels over 2% on Linux are extremely rare, and usually only happen on developer systems with large numbers of small files.

I suspect, that the majority of people who claim fragmentation causes this or that, are attributing the effects of other problems (such as malware) to fragmentation levels. I've yet to be able to find anyone who can even do more than offer up the same basic set of arguments for that matter. Show me even a single study that clearly indicates significant improvements from defragmenting for NON-SPECIAL CASE uses. Show me a single study that shows how Joe Q. Public, who uses their computer to browse the web, check email, and little else -- you know, like about 90% of all computer users -- will benefit significantly from defragmenting. Any scrap of hard evidence that isn't just someone's impression or feeling about it. Evidence gathered in at least something approaching a scientific fashion. Until then, I'm calling BS on it having any appreciable benefit to the average user.

Collapse -
Great!
by oleicacid / August 16, 2007 1:53 AM PDT
In reply to: Do that quite often

The large video files I play with are usually rips of my own DVD collection, and guess what, they play just fine. No skipping, no stuttering, no nothing. I even stream these files from my desktop to my Xbox to view with Xbox Media Center... Nothing.

I've burned several CDs and DVDs in just the last few hours as well... Not a single coaster out of the lot. Maybe it's because I'm burning Linux and FreeBSD ISO images, but either way, not a single problem.
=====================================================================

*******Good for you. Not everyone has the same experience, especially those with single drive setups. Was your drive heavily fragmented?

=====================================================================
I gave up on PC gaming years ago, when I figured out that console gaming is cheaper and a better overall experience. If I'm ever in the mood for another cookie cutter first person shooter title, that offers little beyond slightly better graphics than the last cookie cutter FPS I played... Then who knows, but I'm sure I'd have to replace my GeForce FX 5700 Ultra with something better, and for the price of what it'd cost me to keep my PC up with the latest games for about 6 months, I could buy a PlayStation 3 and be essentially guaranteed 10 years worth of games.
=====================================================================

********I disagree. Better for YOU, but not for the millions of PC gamers out there, including me. Gaming on a console is beyond stupid for FPS, RPG and RTS games. The only genres of games that are half-decent on consoles are third person shooters, racing games and fighting games. And even these are superior on a PC with the right accessories.

Try playing a FPS or an RTS/RPG with the lousy xbox360/PS3 controllers....precision is non-existent...not even remotely comparable to a good keyboard+mouse setup. (auto aim for FPSs lol!) And, console + games + HDTV + controllers is not cheap either. Granted, a high-end PC set up is slightly more expensive, but then again, it can do so much more, especially with all the free mods and add-ons available for the popular games.

Your PS3 will be obsolete in 3-4 years time, so your claim of 10 years worth of games is pretty much bogus. And AFAIK, console games cost much more than PC games, as do accessories. And I am not even touching the horrible xbox360 reliability problems. If you prefer consoles (as millions do), then more power to you. Enjoy your console, but dont plug that as the 'right' choice for everyone.

And those of us who do game on PCs can feel the effects of fragmentation, especially those very familiar with the system and sensitive to it's performance deterioration/boosts.

=====================================================================
I also don't buy the hardware based argument about fragmentation causing the drive heads to go back and forth more often, etc, etc. What exactly do you think the drive is doing while you're defragmenting, playing scrabble? There's few tasks more intensive than defragmenting for a HDD. You either have a near constant small level of stress on the drive due to fragmentation, or a slightly lower constant level of stress from a non-fragmented drive, but tack onto that a huge amount of stress from the defragmenting process.
=====================================================================

**********If you defrag regularly, your HDD head won't have to play scrabble...er move back and forth for too long. If you have defragged thoroughly once and follow it up with regular defrags (depending upon your usage patterns) then the drive has to do less work for each successive defrag. Compare that to a moderate-to-heavily fragmented drive which is needlessly stressed out all the time, and runs hotter.

======================================================================

It's like how accelerating quickly is very hard on the frame of your car, and stopping quickly is a great way to burn out your breaks. Whereas accelerating at a slower rate, and braking less over a longer distance, will greatly increase the lifespan of your car's components.

=====================================================================

*******Poor analogy. Cars and hard drives work very differently. How do you know that 'lower' stress levels for a longer time period is better for the drive than an infrequent dose of slightly higher stress for a short time during a defrag? The materials used in drives are very different from cars; their fatigue characteristics will be different. The susceptibility of the magnetic media to heat and mechanical forces will be different.

Infact, how do you know that a moderate-heavily fragmented drive actually experiences 'lower' stress over a longer time period *than* during a short defrag? I'd like to see some data to that effect.

The way you put it, a defrag is going to quickly kill your drive, which is a bit ridiculous. How is defrag more stressful than, say, an hour of P2P (assuming the space for the file is not preallocated) during which time the drive is constantly working?

=====================================================================

Also, when it comes to file placement on a drive, FAT and NTFS work exactly the same way. They both use the same simple FIFO method, so are basically equally prone to fragmentation. Of course if Microsoft just did what the Linux developers did years ago, this whole issue could be a non-issue. The Linux developers, many years ago, came up with what is essentially a self-defragmenting file system. Instead of a simple FIFO method of file placement, it looks for the largest chunk of open space that will fit the entire file. That failing, it tries to split the file up into as few pieces as possible. So, fragmentation levels over 2% on Linux are extremely rare, and usually only happen on developer systems with large numbers of small files.

=====================================================================
********Maybe Linux handles fragmentation better, but it is not an option for the vast majority of the home users as of now. Besides, XP Pro+SP2 works just fine, if you do a little maintainence once in a while and use the right apps for your needs i.e. browsing, AV etc some of which you nicely listed in your first post.

See this for the differences in how NTFS and FAT handle fragmentation. More googling will probably yield more
http://www.pcguide.com/ref/hdd/file/ntfs/relFrag-c.html
http://www.pcguide.com/ref/hdd/file/ntfs/filesFiles-c.html

=====================================================================

I suspect, that the majority of people who claim fragmentation causes this or that, are attributing the effects of other problems (such as malware) to fragmentation levels.
=====================================================================

********Malware definitely can be a cause of poor performance. Fragmentation can also be a cause of poor performance. Just because you say fragmentation has no effect of system performance does not make it true.

=====================================================================

I've yet to be able to find anyone who can even do more than offer up the same basic set of arguments for that matter. Show me even a single study that clearly indicates significant improvements from defragmenting for NON-SPECIAL CASE uses. Show me a single study that shows how Joe Q. Public, who uses their computer to browse the web, check email, and little else -- you know, like about 90% of all computer users -- will benefit significantly from defragmenting. Any scrap of hard evidence that isn't just someone's impression or feeling about it. Evidence gathered in at least something approaching a scientific fashion. Until then, I'm calling BS on it having any appreciable benefit to the average user.
=====================================================================

********Oh come on now! You are being disingenuous here. It is logical that fragmentation reduces drive performance, yet you want *me* to prove that it *does not*? Nice :/

Even Microsoft says that fragmentation can slow down things, and they ought to know better than you!

http://www.microsoft.com/technet/prodtechnol/windows2000serv/maintain/optimize/w2kexec.mspx

I quote from the above link:

"How fragmented can a system get? In June 1999 the American Business Research Corporation of Irvine, California performed a fragmentation analysis and found that, out of 100 corporate offices that were not using a defragmenter, 50 percent of the respondents had server files with 2,000 to 10,000 fragments—and another 33 percent had files that were fragmented into 10,333 to 95,000 pieces. In all cases the results were the same: Servers and workstations experienced a significant degradation in performance."

Here is another link of a 'scientific study' that you wanted.

http://whitepapers.zdnet.com/whitepaper.aspx?docid=155234

I am sure there are more, if you google.

okay, I am done with this discussion.

Defrag or not, and with what software- that's upto the user. But deciding NOT to defrag because some random guy on the internet thinks defragging is not a good idea is quite strange.

Collapse -
Answers
by Jackson Douglas / August 16, 2007 4:04 AM PDT
In reply to: Great!
Good for you. Not everyone has the same experience, especially those with single drive setups. Was your drive heavily fragmented?

I have no idea. I've never run a defrag program on it since installing it in my system. I suppose I should also mention I regularly have a web browser open, several tabs, sometimes several windows with several tabs... Winamp 2.95 is often playing something... And my music collection is stored on a secondary drive, on the same IDE channel as my OS drive... So given the IDE limitation of only being able to access a single device at a time, you'd really think I should have run into problems by now. I've done more to tempt fate than most people, and still nothing.

I don't even have anything that special for a system. 2GHz Athlon64, 1.5GB of RAM, 2 IDE HDDs on the same channel, and my DVD burner is on the second channel. It had a partner, but I just put that into a system I turned into a Squid server.

I disagree. Better for YOU, but not for the millions of PC gamers out there, including me. Gaming on a console is beyond stupid for FPS, RPG and RTS games. The only genres of games that are half-decent on consoles are third person shooters, racing games and fighting games. And even these are superior on a PC with the right accessories.

I can't agree on the RPG front. Console RPGs are worlds better than anything I've seen on the PC. I have yet to see ANYTHING on the PC that can even begin to approach the likes of Final Fantasy X. That game had pretty much everything going for it. A deep and emotional story full of political intrigue, love, loss, religious intolerance, hope, despair, betrayal... If the ending to the game doesn't stir up some kind of an emotional response in you, you should probably check for a pulse. The cast of characters each had their own distinct personalties and motivations. It also had what I would consider to be the perfection of turn based combat. Even the voice acting was pretty good. The Xenosaga trilogy was also phenomenal. Even the weaker second game was easily better than most anything I've seen on the PC for RPGs.

I'll give you the FPS and RTS, even if Kingdom Under Fire for the Xbox did do a pretty impressive job of adapting a RTS game to a console... But I have grown tired of FPS personally. They're all the same, and I can barely tell one from the other. And if it's not a Blizzard made RTS, I'm probably not interested. The Kohan series was pretty good, but lacked the highly refined balance Warcraft and Starcraft have.

And my 10 year claim for the PS3 is not "bogus". There are still new games coming out for the PS2. What I meant, was that I can go to the store, buy any game for a given console, and be guaranteed that it will work. Sony plans on supporting the PS3 for the next 10 years (well, probably about 9.5 now). I don't personally care a wit about graphics so long as the story and gameplay are good enough to make up for it. Who cares how good something looks if it's impossible to play, and isn't interesting in the least?

Of course this is getting off topic, so I suggest it be tabled unless you want to carry it over to the game forum.

If you defrag regularly, your HDD head won't have to play scrabble...er move back and forth for too long. If you have defragged thoroughly once and follow it up with regular defrags (depending upon your usage patterns) then the drive has to do less work for each successive defrag. Compare that to a moderate-to-heavily fragmented drive which is needlessly stressed out all the time, and runs hotter.

It doesn't quite work that way. You're working on the assumption that the drive's speed is variable... Which they're not. The platters are always spinning at a fixed velocity. For desktop drives, it's usually 5400 or 7200rpm. The drive heads generate insignificant amounts of heat relative to the motor that keeps the drive platters moving, even if they were in constant motion. I doubt they could raise the drive's operating temp by even a single degree Celsius.

Beyond that, platter sizes are fixed. HDD storage capacity is increased by increasing the density at which data can be packed onto the platter. A 1GB drive might have only allowed for say 10KB/in^2 (I have no idea what it actually is, I just made that number up for example's sake) while a 100GB drive might allow for 1MB/in^2. Same surface area, you're just packing the data in tighter.

All of this means that the drive heads have a lower probability of needing to move great distances to get any given bit of data. There are also multiple read/write heads on a drive, so if we make a basic assumption about the logic programmed into the drive's controller, it will figure out which head is closer to the data requested, and use it to read it in. Things will then be assembled in the drive's buffer memory before being sent off to the system RAM.

Which brings us to the real performance bottleneck when it comes to HDDs, the system bus. The IDE bus is many many times slower than any other bus in the system. Getting data from the drive to the CPU is where the real performance bottleneck comes into play.

The fastest SATA drive has a burst transfer rate of 1Gbps IIRC. That's about half of what the aging PCI bus can do. The FSB, the connection between the RAM and CPU, is 50-60X greater at least. So most of the time, the CPU is just sitting around waiting for something to do. If you want to get into CPU mechanics like data starvation, you can get an even better picture of what's really going on.

Poor analogy. Cars and hard drives work very differently. How do you know that 'lower' stress levels for a longer time period is better for the drive than an infrequent dose of slightly higher stress for a short time during a defrag? The materials used in drives are very different from cars; their fatigue characteristics will be different. The susceptibility of the magnetic media to heat and mechanical forces will be different.

Physics is how I know. Doesn't even require a great understanding of physics, just a moderate understanding of Newton's laws. Particularly the law regarding inertia. Acceleration is calculated as a force acting on an object over a distance for a period of time. Just getting a car weighing say a metric ton moving requires a significant amount of force. Accelerating that car from 0 to 60mph in 10 seconds vs 30 requires a considerably greater amount of force, and an equal and opposite force is acting on the car. So if it's say 10,000N (again, a figure merely for example purposes) of force to go from 0 to 60 in 10 seconds, and only 3,000N to do it in 30 seconds, that's a difference of 7,000N that the frame of the car doesn't have to absorb.

Of course, this all really becomes pointless when you factor in angular velocity, which is what most defragmenting proponents don't seem to quite understand when they base their arguments on it.

Say I have a drive with absolutely no fragmentation. I need a file that is on the inner most ring of the drive, and then I need a file on the outer most ring. The drive arm is still going back and forth quite a bit, so your whole argument about fragmentation reducing head movement is invalidated.

Now, if you take and arrange frequently used files on the outer edges of the drive platter to take advantage of the increased angular velocity, you might have a point. Of course even then, the benefits really only affect larger files, which you don't tend to find on Joe Average's computer now do you? You tend to find a bunch of smaller files, under 1MB in size. So without being able to predict with perfect, or near perfect, accuracy the order of the files that are going to be requested so that they can be arranged sequentially on the drive platter... Well, even the angular velocity argument doesn't really amount to much.

Maybe Linux handles fragmentation better, but it is not an option for the vast majority of the home users as of now. Besides, XP Pro+SP2 works just fine, if you do a little maintainence once in a while and use the right apps for your needs i.e. browsing, AV etc some of which you nicely listed in your first post.

While the comment about Linux being unsuitable is untrue, it's also not the point. I was using it as an example of a better system. Maybe not a perfect system, but at least a BETTER system.

I skimmed over those links you posted, and can't seem to find any obvious credentials as to why I should take them seriously. I don't just take everything I may read on Wikipedia at face value, and anyone can register a fancy sounding domain and put up any kind of drivel they want. I don't require a lot of fancy degrees from highly respected schools, but I do require SOMETHING to tell me why I shouldn't just count this person as one more nutter. Like maybe a degree in computer science, and working as a kernel developer for Microsoft for some time period.

Malware definitely can be a cause of poor performance. Fragmentation can also be a cause of poor performance. Just because you say fragmentation has no effect of system performance does not make it true.

And by the same token, just because you say defragmenting does have an effect on performance doesn't make it true.

Oh come on now! You are being disingenuous here. It is logical that fragmentation reduces drive performance, yet you want *me* to prove that it *does not*? Nice :/

No, actually, logic DOESN'T show any such thing. It's one of those things that is deceptively seductive, but falls apart quickly when you start digging past the surface.

The quote from that Microsoft article talks about server files, and I specifically stated I wanted evidence regarding Joe Average who does things like browse the web, read/write email, and little else.

I also automatically disqualify anything from ZDNet/Cnet. Even before Cnet bought ZDNet from Ziff-Davis, it had absolutely no credibility on anything more technical than choosing a desktop wallpaper. Cnet is the same. I also had to create an account to get the whitepaper, which I'm unwilling to do. Any kind of access restrictions on the information immediately causes me to be suspicious. If the person(s) who collected the data stands by their work, they shouldn't have any problem putting it up for the world to see. Putting conditions on being able to see that data makes it suspect. Which is of course assuming Cnet/ZDNet had any concept whatsoever of conducting a scientific study.

The one thing I will credit you on, is that the summary talked about the effects on web browsing. Though I somehow suspect it has more to do with web server performance rather than client side browsing. I will also give you credit for not posting a link to some study done by the makers of Disk Keeper like someone did the last time I asked this question. So, you've at least done a better job than that person.

Defrag or not, and with what software- that's upto the user. But deciding NOT to defrag because some random guy on the internet thinks defragging is not a good idea is quite strange.

And the same could be said for defragmenting. I may be some "random guy", but I include a defense of my position along with, so it's not just me saying, "Don't bother defragmenting, because I, some random guy on the internet, said so!"
Collapse -
Alright
by oleicacid / August 17, 2007 3:31 AM PDT
In reply to: Answers

This is my final (long) post on the subject, since I really have no interest in wasting my time on this silly discussion anymore. I don't think you have any clear understanding of what you are talking about, and are merely throwing around ideas/concepts without understanding them or their relevance to the discussion at hand.

Also, forget about the consoles/linux whatever. Let's stick to HDDs + fragmentation for now.

It doesn't quite work that way. You're working on the assumption that the drive's speed is variable... Which they're not. The platters are always spinning at a fixed velocity. For desktop drives, it's usually 5400 or 7200rpm. The drive heads generate insignificant amounts of heat relative to the motor that keeps the drive platters moving, even if they were in constant motion. I doubt they could raise the drive's operating temp by even a single degree Celsius.

Beyond that, platter sizes are fixed. HDD storage capacity is increased by increasing the density at which data can be packed onto the platter. A 1GB drive might have only allowed for say 10KB/in^2 (I have no idea what it actually is, I just made that number up for example's sake) while a 100GB drive might allow for 1MB/in^2. Same surface area, you're just packing the data in tighter. All of this means that the drive heads have a lower probability of needing to move great distances to get any given bit of data


It is largely the spindle motor that generates the most heat from the work it has to do to keep the platter(s) spinning. The heat output from the actuator arm moving around may not be much. BUT, remember that moving the arm around excessively also requires power, and increased power consumption = increased heat loss. (see intel link further down) So there will be a contribution, but may not be as much as from the spinning platters.

I know that the platter angular velocity is not variable. But when the actuator arm assembly has to do more work unnecessarily moving the arm across the platter many times a minute, it will increase wear more and reduce performance. WHY subject it to extra work unnecessarily ?

Areal density is currently ~ 150 Gbits/sq inch for ~300 GB drives (higher with PRT) and expected to increase as PRT finds its way into more consumer level drives. BUT, file sizes have also grown in parallel. The average home user deals with much larger file sizes than even 3-4 years ago, with games, photos and video files ever increasing in size. Do you have numbers from 'scientific studies' to prove that increases in areal density have overcome the negative effects of fragmentation or is it merely your guess? All evidence points to the contrary.

There are also multiple read/write heads on a drive, so if we make a basic assumption about the logic programmed into the drive's controller, it will figure out which head is closer to the data requested, and use it to read it in. Things will then be assembled in the drive's buffer memory before being sent off to the system RAM.

You are totally wrong here! There is only a SINGLE read/write head for one surface of a platter. The second r/w head is on the opposite surface of the platter. Show me a single desktop drive with multiple r/w heads for the same platter surface .

What is more, the actuator axis for those multiple heads is the same so the heads will move as a single unit synchronously, but on opposite surfaces of the platter. Therefore, having multiple heads is not an advantage, since they are not on the same surface. In fact, the industry trend is towards reducing the number of arms/heads and platters since having multiple arms stresses out the actuator assembly. Google, and all this will be clear. Earlier, drives used to have more platters and more arms/heads, but the preference these days is to greatly minimize the number of moving components.

Your basic assumptions about drive logic are also not right. The drive will still collect data for a single fragmented (or otherwise) file in the order it was written, but even if that were not so , the actuator arm still has to move across the platter working needlessly to collect all the fragments rather than reading it off contiguously. The IDEAL scenario is when the file is read/written sequentially as a contiguous unit.

NCQ changes the path profile of the drive head to some extent, but not from the POV of fragmentation. (Anyway, NCQ is better for asynchronous I/Os on servers, it is not geared towards home systems)

Which brings us to the real performance bottleneck when it comes to HDDs, the system bus. The IDE bus is many many times slower than any other bus in the system. Getting data from the drive to the CPU is where the real performance bottleneck comes into play.


Wrong again! The bottleneck is not the ATA bus width, but the mechanical nature of the operation of the arm+ head moving across the platter.

Wasted movement = lower performance!

Greater unnecessary movement between tracks + rotational latency + settle time = More unnecessary decrease in performance.

The mechanical operation by its very nature is orders of magnitude slower than the data transfer across any electronic system bus. This ought to be obvious!

So, by having fragmentation, you are weakening the already weakest link in the I/O chain and increasing the bottleneck from the harddrive!!!

The fastest SATA drive has a burst transfer rate of 1Gbps IIRC. That's about half of what the aging PCI bus can do. The FSB, the connection between the RAM and CPU, is 50-60X greater at least. So most of the time, the CPU is just sitting around waiting for something to do. If you want to get into CPU mechanics like data starvation, you can get an even better picture of what's really going on.

Your numbers are off again. Fastest SATA (2.0) drives have theoretical burst speeds of 3.0Gbits/sec (~300 MBytes/sec), with sustained transfer rates of about 50-80 MBytes/sec. Newer Hitachi and Seagate drives with the 32 MB cache advertise something closer to 100 Mbytes per second sustained transfer IIRC.

Parallel ATA-133 bus ~ 133 Mbytes/sec theoretical max

Serial ATA 2.0 bus ~ 300 Mbytes/sec theoretical max (see above)

PCI bus (32 bit/33MHz) ~ 133Mbytes/sec

Draw your own conclusions!

Ofcourse, the FSB/HyperTransport is far faster always.

Physics is how I know. Doesn't even require a great understanding of physics, just a moderate understanding of Newton's laws. Particularly the law regarding inertia. Acceleration is calculated as a force acting on an object over a distance for a period of time. Just getting a car weighing say a metric ton moving requires a significant amount of force. Accelerating that car from 0 to 60mph in 10 seconds vs 30 requires a considerably greater amount of force, and an equal and opposite force is acting on the car. So if it's say 10,000N (again, a figure merely for example purposes) of force to go from 0 to 60 in 10 seconds, and only 3,000N to do it in 30 seconds, that's a difference of 7,000N that the frame of the car doesn't have to absorb.

Acceleration = differential of velocity wrt time i.e. a = (dv/dt).

Force = mass*acceleration ie. m(dv/dt).

Your force numbers ought to come out as 2.8kN and 0.46 kN if my rough calculations are right. Since you are merely using example numbers, I will not dwell on this.

Now, let me ask: what are the actual forces experienced, and by what components during

(1) Defragging a drive for a short time

(2) Running a fragmented drive for a long time without defragging.

Unless you have the hard numbers for these, your claims that defragging is worse for the drive over the long run than letting it run fragmented, have NO value.

In fact, the link below debunks your claim as a myth.

http://www.techarp.com/showarticle.aspx?artno=84&pgno=1

I quote:


Myth:
Defragmenting the hard drive will stress the needle (head actuator).

Truth :
This myth has some truth in it, albeit misplaced. Defragmenting the hard drive may involve a lot of seeking as the hard drive rearranges its data in a contiguous fashion. This allows the read/write heads to read large amounts of data without seeking all over the platters.

However, after defragmentation, the hard drive no longer needs to seek all over the platters for your data. This reduces the amount of head actuator movements as well as greatly increase the hard drive's performance.

Therefore, while it may be technically correct to say that defragmenting your hard drive will stress the head actuators, the truth is defragmenting your hard drive will reduce the amount of seeking from then on and thus reduce the head actuators' workload.


Of course, this all really becomes pointless when you factor in angular velocity, which is what most defragmenting proponents don't seem to quite understand when they base their arguments on it.

Angular velocity of what? Please do explain with hard numbers.

Say I have a drive with absolutely no fragmentation. I need a file that is on the inner most ring of the drive, and then I need a file on the outer most ring. The drive arm is still going back and forth quite a bit, so your whole argument about fragmentation reducing head movement is invalidated.

Red herring.
Cognitive dissonance.

The arm will not have to go back and forth for the defragmented file compared to the fragmented file.


Now, if you take and arrange frequently used files on the outer edges of the drive platter to take advantage of the increased angular velocity, you might have a point.

Increased angular velocity of what? I suspect I know what you are talking about, but let me hear it from you anyway. Don't mix up angular velocity and linear velocity.

Anyway, yes, rearranging the frequently used files on the outer edge will improve access speed, but what happens when those files are fragmented? The placement method's criterion is important-is it for frequently accessed (without modification) or frequently modified files? For the latter case, it will quickly lose benefit unless some preallocation is made for the files to grow contiguously in case of fragmentation. (how, I don't know). Otherwise, you will have to defrag and place those files there again.

Of course even then, the benefits really only affect larger files, which you don't tend to find on Joe Average's computer now do you? You tend to find a bunch of smaller files, under 1MB in size.

Huh? WTH?! Under 1 MB is size?!!! This is not 1995! Even a 128kbps mp3 file is a few MB! A jpeg from a 5 mp digital camera is about 2 MB. A divx file for a 2 hr movie can easily run to over 700 MB. It is fallacious to claim that the 'average' user has mostly <1 MB files! Word or Powerpoint documents can easily hit a few MBs. I am not even touching games, home made movies, downloaded crap etc.

You ought to give a little more credit to Joe Average. He may not know much about the workings of the PC, but he does use the PC for more than mere e-mail these days!

Even the MFT for a small partition with few files can run into MBs or much more. Fragmentation of the MFT itself will reduce performance.


So without being able to predict with perfect, or near perfect, accuracy the order of the files that are going to be requested so that they can be arranged sequentially on the drive platter... Well, even the angular velocity argument doesn't really amount to much.

Er..what is your point? Defragging contiguously rearranges the bits and pieces of a single file. It does not rearrange all files contiguously unless you do free-space consolidation( and even then maybe only to a limited extent). It reduces the time/resources required to access that single file compared to if it was fragmented.

I skimmed over those links you posted, and can't seem to find any obvious credentials as to why I should take them seriously. I don't just take everything I may read on Wikipedia at face value, and anyone can register a fancy sounding domain and put up any kind of drivel they want. I don't require a lot of fancy degrees from highly respected schools, but I do require SOMETHING to tell me why I shouldn't just count this person as one more nutter. Like maybe a degree in computer science, and working as a kernel developer for Microsoft for some time period.

There is nothing wrong with the information posted in those links. It's all correct. Easily corroborated with google searches. LOL, those links suddenly have become disreputable because YOU - some anonymous guy on the net with no credentials whatsoever, without any proof whatsoever - says so. Sure!

Yeah, sure you want kernel developers from MS to prove things to you. I have an idea-why don't you ask Bill Gates himself to come down and prove to you one-on-one why fragmentation is irrelevant for the desktop user.

And by the same token, just because you say defragmenting does have an effect on performance doesn't make it true.

Oh, okay. How about if Intel says it does? How about Storage review?

Now if you doubt the credibility of these sources, you are seriously out of your mind.

http://www.intel.com/cd/ids/developer/asmo-na/eng/dc/mobile/333852.htm

It's an article on power consumption of laptop HDDs.

Some excerpts (italics mine):
File fragmentation causes serious problems from a performance point of view, as well as from how it affects the user's experience. Let us first look at performance impact.

Sequentially reading a fragmented file will take much longer than reading a defragmented file. This is due to the seek time and rotational latency penalty incurred while gathering data from non-contiguous clusters. This latency is greatly minimized in a defragmented file since the data is in contiguous clusters.

Below is an example of reading a 256MB file that was initially fragmented and later defragmented. It took more than twice as much time to read a fragmented file and caused a significant increase in total energy for the same task .
(italics, mine)

They also write

The cost of fixing a fragmented file one time far outweighs the energy penalty associated with multiple access times.

So don't defrag while on batteries. Defrag only when the laptop is plugged into the AC mains. Most defraggers have this option to automatically suspend defrag operations when on battery power. Not an issue for desktop HDDs.


What storage review ( a very respected site for HDD info/news etc) says:

http://www.storagereview.com/guide/fileFrag.html

A fragmented file system leads to performance degradation. Instead of a file being in one continuous "chunk" on the disk, it is split into many pieces, which can be located anywhere on the disk. Doing this introduces additional positioning tasks into what should be a sequential read operation, often greatly reducing speed. For example, consider a 100,000 byte file on a volume using 8,192 byte clusters; this file would require 13 clusters. If these clusters are contiguous then to read this file requires one positioning task and one sequential read of 100,000 bytes. If the 13 clusters are broken into four fragments, then three additional accesses are required to read the file, which could easily double the amount of time taken to get to all the data.

Defragmenting a very fragmented hard disk will often result in tangible improvements in the "feel" of the disk. To avoid excessive fragmentation, defragment on a regular basis; usually once every week or two is sufficient. See the system care guide for more.


http://www.storagereview.com/guide/clustFragmentation.html

I happened to have Anandtech's (yeah, Anand has a CS degree) 'Guide to PC Gaming Hardware' lying around, and it clearly states on P.410-411 the need for defragging. I am not going to type out the whole 2 pages, but one sentence stood out:

Defragmenting your hard disk should be done as frequently as possible

Now, don't tell me that AT is not a reputable source!

No, actually, logic DOESN'T show any such thing. It's one of those things that is deceptively seductive, but falls apart quickly when you start digging past the surface.

The quote from that Microsoft article talks about server files, and I specifically stated I wanted evidence regarding Joe Average who does things like browse the web, read/write email, and little else.


It clearly mentions workstations, maybe you are wilfully ignoring it.

And who the heck says that the average user only browses the web and does email? Maybe that description applies to a few. But millions of 'average' users play games, browse the web, do email, use applications like Photoshop, P2P clients, MS Office, store/view photos from digital cameras, create/play music and video files on their PCs. All these activities lead to fragmentation. Heck, mere installation/uninstallation of programs creates fragmentation. See a few lines above.

I also automatically disqualify anything from ZDNet/Cnet. Even before Cnet bought ZDNet from Ziff-Davis, it had absolutely no credibility on anything more technical than choosing a desktop wallpaper. Cnet is the same. I also had to create an account to get the whitepaper, which I'm unwilling to do. Any kind of access restrictions on the information immediately causes me to be suspicious. If the person(s) who collected the data stands by their work, they shouldn't have any problem putting it up for the world to see. Putting conditions on being able to see that data makes it suspect. Which is of course assuming Cnet/ZDNet had any concept whatsoever of conducting a scientific study.

The one thing I will credit you on, is that the summary talked about the effects on web browsing. Though I somehow suspect it has more to do with web server performance rather than client side browsing. I will also give you credit for not posting a link to some study done by the makers of Disk Keeper like someone did the last time I asked this question. So, you've at least done a better job than that person.


LOL what rubbish...so now you are the final arbiter of the credibility of well respected sites? Just because your claims have been proved to be totally wrong by Microsoft and other sources, you are hiding behind stupid excuses of lack of Zdnet/Cnet's reliability. If you don't want to look up information, suit yourself, but don't claim that information does not exist just because you are unwilling to look it up.

Anyway, WTH are you doing on Cnet forums if you disdain it so much?

And the same could be said for defragmenting. I may be some "random guy", but I include a defense of my position along with, so it's not just me saying, "Don't bother defragmenting, because I, some random guy on the internet, said so!"

The 'defense' of your position consists of pseudoscience, leaps of logic, misinformation and some plain good old fashioned disingenuity. (yeah, cnet is a disreputable site and you want MS kernel developers to prove things to you..lol)!

Why don't you provide a few links from scientific studies done by reputed MS kernel developers (lol!) that defragging provides no benefit at all.

Those who want to defrag can defrag, those who don't want to, need not. Not my problem anymore.

I tried to clearly explain the benefits of defragging, but if someone wants to believe that defragging is a big, bad thing that's going to kill your PC ..please feel free.

No more in this thread from me.

Collapse -
Well...
by Jackson Douglas / August 17, 2007 8:29 AM PDT
In reply to: Alright
This is my final (long) post on the subject, since I really have no interest in wasting my time on this silly discussion anymore. I don't think you have any clear understanding of what you are talking about, and are merely throwing around ideas/concepts without understanding them or their relevance to the discussion at hand.

You're entitled to your opinion, but it sounds more like you're trying to give yourself an easy out if the discussion starts going very pear shaped on you.

Also, forget about the consoles/linux whatever. Let's stick to HDDs + fragmentation for now.

Fair enough. Though I only brought Linux up as an example of something better, I didn't intend it to become a topic of discussion.

It is largely the spindle motor that generates the most heat from the work it has to do to keep the platter(s) spinning. The heat output from the actuator arm moving around may not be much. BUT, remember that moving the arm around excessively also requires power, and increased power consumption = increased heat loss. (see intel link further down) So there will be a contribution, but may not be as much as from the spinning platters.

Compared to the motor keeping the platters moving, the actuators are absolutely negligible. You'd need insanely accurate tools to even measure the difference. I also think you mean that increased power consumption makes for increased heat generation. I'll give you the benefit of the doubt on that, and assume it was a typo.

Now, if you want to be incredibly anal about things, and try to maximize the efficiency of the drive arm movement... Well, I'd say it's hard to argue with your base thesis, but at the same time you have no way of knowing where the files are placed on the physical drive, so you can't know if you're really making things better with defragmenting, making them worse, or not making any change worth mentioning.

Even if all individual files are grouped together, what good does that do you if you need to read File A, File B, and File C, in that order, and File A is at the first sector of the drive, File B is in the final few sectors, and File C is somewhere in the middle? Not a whole lot would be my answer.

I know that the platter angular velocity is not variable. But when the actuator arm assembly has to do more work unnecessarily moving the arm across the platter many times a minute, it will increase wear more and reduce performance. WHY subject it to extra work unnecessarily ?

Actually, the angular velocity IS variable, it's the rotational velocity that is constant. Angular velocity (and I'm sure some physics people will be grinding their teeth over this) is the relative difference in velocity between the inner most and outer most edges of a circular object. The inside of a record or CD, for example, has a much shorter distance to travel in a single rotation compared to the outer edges, so the further out you travel from the center, the faster things have to travel to "keep up" with the center. Incidentally, this is why you don't tend to see CD-ROM drives over about 52X, since the forces acting on the disc cause it to shatter. It's also a limiting factor in how fast you can get HDD platters spinning.

Anyway, physics lesson completed... Without knowing where files will be physically located on the drive, you can't really say whether or not defragmenting improves anything. Individual files may all be contiguous on the drive, but if they're still scattered all about the physical surface, the drive arm still has to work more than it should.

And personally, I'm much more concerned about the ball bearings in the motor driving the platters giving out than I am a simple actuator. The motor spinning the platters is far more complex, and involves considerably more moving parts, giving it considerably more potential points of failure. And while I've heard of a few drives having the motor giving out on them, I don't think I've ever heard of one where the heads stop working. Not to say it hasn't or doesn't happen, but odds are that the motor is going to give out long before anything else.

Areal density is currently ~ 150 Gbits/sq inch for ~300 GB drives (higher with PRT) and expected to increase as PRT finds its way into more consumer level drives. BUT, file sizes have also grown in parallel. The average home user deals with much larger file sizes than even 3-4 years ago, with games, photos and video files ever increasing in size. Do you have numbers from 'scientific studies' to prove that increases in areal density have overcome the negative effects of fragmentation or is it merely your guess? All evidence points to the contrary.

I was just using easy numbers for the purpose of an example. If I was mistaken in assuming you could figure that out on your own, I apologize.

But this is simple logic... Platter sizes remain fixed, but you pack the data on them more densely. Therefor, the drive arms have the same total surface area to cover, meaning that the probability of the drive arm having to move a great distance to get a given file is lower with higher capacity drives.

Think of it another way... Compared to large print books, regular books contain a higher number of words per page. So, the probability that a specific phrase is on any given page is higher with the regular book as opposed to the large print book.

You are totally wrong here! There is only a SINGLE read/write head for one surface of a platter. The second r/w head is on the opposite surface of the platter. Show me a single desktop drive with multiple r/w heads for the same platter surface .

I was unclear on this point, and that is my fault.

What is more, the actuator axis for those multiple heads is the same so the heads will move as a single unit synchronously, but on opposite surfaces of the platter. Therefore, having multiple heads is not an advantage, since they are not on the same surface. In fact, the industry trend is towards reducing the number of arms/heads and platters since having multiple arms stresses out the actuator assembly. Google, and all this will be clear. Earlier, drives used to have more platters and more arms/heads, but the preference these days is to greatly minimize the number of moving components.

Yet the technology driving them has improved significantly. The average drive today takes 10ms or less to find any given file. That's faster than you can blink your eyes. This means that the head can cover the entire radius of the platter in 10ms or less, and when you figure that even laptop drives have those platters spinning at a rate of 70 times a second at the slowest, and a more typical desktop 7200rpm desktop drive would be 120 times per second, getting the file is not the performance bottleneck. Even if you were able to manage perfect fragmentation, where each bit of the file were exactly opposite of the last bit, the differences would be barely noticeable to a person.

Think about it... Every second, there are 120 chances for the read/write head to be over the desired spot. Assuming it takes 10ms for the head to go from one edge to the other, you could still do it about 10X per second. The extra .2X will leave out to account for angular velocity and various other imperfections in the system. Personally, I wish they'd adopt a method of storing data vertically instead of horizontally, so we could just take angular velocity out of the mix of things to contend with, but I digress.

Point is, fragmentation or no, the drive is quite capable of saturating the IDE bus. All that ATA/133 crap is just a burst rate. Average sustained transfer rates of cached data is on the order of 40MB/s IIRC, and for uncached data it's like 20MB/s. Any modern drive should have no problem exceeding that. Actually, those figures are for Linux, which quite simply has a far more efficient disk I/O subsystem than Windows. So, figures for Windows would likely be lower, but none of that really does much to change the point, that fragmentation is not holding the drive performance back. At least with IDE drives. With some of the newer SATAII drives with the peak burst rate around 1Gbps... Maybe, but they also tend to have 10K rpm motors in them which should negate a lot of that.

I also think that by the time you have a system bus for HDDs capable of exceeding what a drive is capable of outputting, we'll likely all be using solid state HDDs, and the argument would be purely academic.

Your basic assumptions about drive logic are also not right. The drive will still collect data for a single fragmented (or otherwise) file in the order it was written, but even if that were not so , the actuator arm still has to move across the platter working needlessly to collect all the fragments rather than reading it off contiguously. The IDEAL scenario is when the file is read/written sequentially as a contiguous unit.

Arguments about that being an incredibly anal, or even obsessively compulsive, way of looking at things... You're only considering fragmentation at the individual file level. What about more generalized file fragmentation? Like my File A, File B, and File C scenario from above. That seems like a far more beneficial thing to be looking at to me. Even if the individual files are all fragmented, having all the files necessary for Program X in one little segment of the drive would seem to have far more benefit. It would reduce the amount of head movement far more than your beloved method.

Of course, most people aren't really that concerned about how much the head has to move about the drive... We've gotten away from the very simple fact that most people are only concerned with the end results in terms of performance. On that front, fragmentation doesn't significantly impact performance.

Wrong again! The bottleneck is not the ATA bus width, but the mechanical nature of the operation of the arm+ head moving across the platter.

Wasted movement = lower performance!

Greater unnecessary movement between tracks + rotational latency + settle time = More unnecessary decrease in performance.


On paper you're correct, but we're talking about amounts of time that are imperceptible to the human brain. And for the sake of argument, let's say it's not. I'll be incredibly generous, and say that over the course of an average work day, you manage to do the same amount of work as me one minute faster as a result of my heavily fragmented drive. So, what would be your big plans for this extra minute you have? It's not like it stacks... Where if you save one minute per day, after about 2 months you've done an extra hour's worth of work compared to me.

On the other hand, it's possible that as a result of my reduced performance due to fragmentation, I catch more errors in my work, and am generally happier at work because I'm not trying to work as fast as I can, all day, like a machine. In our rush for greater efficiency, we often forget that it's those inefficiencies that make life enjoyable.

Your numbers are off again. Fastest SATA (2.0) drives have theoretical burst speeds of 3.0Gbits/sec (~300 MBytes/sec), with sustained transfer rates of about 50-80 MBytes/sec. Newer Hitachi and Seagate drives with the 32 MB cache advertise something closer to 100 Mbytes per second sustained transfer IIRC.

Parallel ATA-133 bus ~ 133 Mbytes/sec theoretical max

Serial ATA 2.0 bus ~ 300 Mbytes/sec theoretical max (see above)

PCI bus (32 bit/33MHz) ~ 133Mbytes/sec

Draw your own conclusions!

Ofcourse, the FSB/HyperTransport is far faster always.


<sigh> I would have thought you brighter than to fall for promotional material with data collected under ideal lab conditions, and then cherry picked results. Maybe it's just having a marketing degree, I know how those people think and operate.

When Sony was hyping the PlayStation 2, they were talking about how many triangles per second the system was capable of pumping out... Something like 2 million/second I think, but the actual figure isn't important. Point is, it was quite capable of pumping out that many triangles, if the entire console's resources were devoted to doing nothing but pumping out triangles. Naturally that figure is completely meaningless in the real world, since any game would be devoting some of those resources to other things. You don't think that HDD manufacturers are doing exactly the same sort of thing? Cherry picking the best results from all the test runs, done on a system with more than ample RAM and CPU power to keep things moving along. Hardly representative of what you find in Joe Average's home or office.

Your force numbers ought to come out as 2.8kN and 0.46 kN if my rough calculations are right. Since you are merely using example numbers, I will not dwell on this.

Now, let me ask: what are the actual forces experienced, and by what components during

(1) Defragging a drive for a short time

(2) Running a fragmented drive for a long time without defragging.

Unless you have the hard numbers for these, your claims that defragging is worse for the drive over the long run than letting it run fragmented, have NO value.

In fact, the link below debunks your claim as a myth.


Okay, so the example got away from me... The basic idea is, you don't know where the files are located on the physical surface of the drive. So even if individual files are in contiguous blocks, the files themselves are still likely to be scattered every which way on the drive. So, even with defragmenting, you can't claim that you're reducing head movement significantly. If you want to be incredibly anal about things, and if it might possibly save even a single movement it's worth the effort, then by all means knock yourself out. I'm just saying you should consider what you have to do in order to achieve that.

Defragmenting a drive causes the arm to move back and forth at a near constant rate for an extended period of time. That is a lot of stress for it to have to endure in a short period of time, compared to a small amount of stress over a longer period of time. That's where the acceleration analogy came in.

Without quoting the bit you quoted, this again is confusing file level fragmentation with what I'll hereto call program fragmentation. Where if Program X contains three files, A, B, and C... Unless Files A, B, and C are in close proximity on the platter, the fact that they are contiguous means little.

It's also once again getting away from the fact that to Joe Average, they don't care if the drive heads are making a single movement or 10,000 movements, they only want whichever is faster. And I've yet to see any scrap of evidence that fragmentation has any significant impact on programs that aren't disk intensive.

Red herring.
Cognitive dissonance.


How is any of that related to cognitive dissonance? Cognitive dissonance is a theory regarding how people rationalize things to themselves that they are uncomfortable with. It's a fun theory, but it has about as many detractors as it has supporters, and there are I think two competing theories to explain the same base phenomenon.

---

Since I'm running short on time, I'm going to skip over much of the rest, since it's just rehashing the same things we've covered earlier.

---

Huh? WTH?! Under 1 MB is size?!!! This is not 1995! Even a 128kbps mp3 file is a few MB! A jpeg from a 5 mp digital camera is about 2 MB. A divx file for a 2 hr movie can easily run to over 700 MB. It is fallacious to claim that the 'average' user has mostly <1 MB files! Word or Powerpoint documents can easily hit a few MBs. I am not even touching games, home made movies, downloaded crap etc.

Remember my initial constraints on things... Someone who uses their computer to browse the web, read/write email, and other light tasks. You're describing tasks that, while growing in popularity, are still only performed by a relatively few people.

...

Intel is a largely credible source to me, and I will have to look over that link more when I have some time. Based on the bits you quoted, it does look like it will fall outside the bounds of what I asked you to consider, and look more at the impact of fragmentation on disk intensive tasks, which I do not dispute. I'll reserve final judgment until I've had time to look it over more.

...

Same goes for the other links... I'll have to take a more proper look when time permits, but there seems to be an incorrect assumption being made with a lot of these based on what you're quoting. That being that files somehow up and shift their positions on the drive after being initially written. If I install Program X, and all of the files are written in a single contiguous stream, they aren't then going to spontaneously fragment unless Microsoft has done something incredibly screwy with their filesystems.

Maybe it's just the way the bits you quoted are worded, and the full text would cast it into a different light. When I have time to devote to a proper reading of those, I'll find out one way or the other.

...

LOL what rubbish...so now you are the final arbiter of the credibility of well respected sites? Just because your claims have been proved to be totally wrong by Microsoft and other sources, you are hiding behind stupid excuses of lack of Zdnet/Cnet's reliability. If you don't want to look up information, suit yourself, but don't claim that information does not exist just because you are unwilling to look it up.

<sigh>

Microsoft is not in the business of actually creating anything. They buy small companies nobody has heard of, with an interesting product, and then make it their own. DOS is an excellent example of this. Gates bought DOS from someone else, then turned around and licensed it to IBM. If they can't do that, they simply copy the work done by some other company. Windows would be a good example here. If you look at some of the early windowing systems for DOS, and even Apple's Mac OS from similar time periods, you will see very significant similarities to Windows.

Ziff-Davis is a publishing company first, and everything else a distant second. They are interested in making money, not in the silly ideals taught to people in journalism school about unbiased and objective reporting (completely impossible anyway). If the key to making money one month is to trash Microsoft and praise Linux, you can bet that's what their publications will do. If the winds change and maybe Apple us the flavor of the month, then anything that isn't Apple will be badmouthed.

Cnet is the same way, except they don't have quite the same managerial focus, and are having a difficult time making any money. But the same rules apply. Whatever they think has the highest odds of making them the most money is what they will do.

And why I'm here, is to help educate the ignorant masses. Cnet would just as soon keep them ignorant so they will believe anything they publish as if it were gospel.

...

Why don't you provide a few links from scientific studies done by reputed MS kernel developers (lol!) that defragging provides no benefit at all.

I asked first. I want to see some scrap of credible evidence, gathered in something approaching a scientific way, that shows that non-disk intensive applications suffer significant performance differences based on fragmentation levels, since non-disk intensive tasks make up the bulk of what the average computer user does with their computer.

No matter how many times I ask, all everyone ever seems to give me are links to things involving disk intensive tasks, which I don't dispute. After that, they often start acting rather smug as if they've proven something by posting links in support of things not in contention.

Based on our discussion so far, I'm willing to accept any figures you can collect on your own if you want to do so. If you want to break out the stop watch and test some non-disk intensive tasks in a highly fragmented and defragmented state, I'd love to see it. Like most scientists, being wrong is every bit as exciting to me as being correct. I'm interested in the knowledge, not which side is correct.
Collapse -
(NT) I tend towards Jackson's comments.
by MarkFlax Forum moderator / August 15, 2007 4:42 AM PDT
Collapse -
RE: following, and in this order
by caktus / August 15, 2007 7:10 AM PDT

Pecisely that order. However, assuming by ?pre? windows defrag program you are refering to a premium or pay program, I wouldn't bother with it. But if you already paid for say Diskeeper, I would just use that in place of the Windows defrag tool. In fact, [if] I recall correctly, the Windows defrag tool cannot be used, or even opened while Diskeeper is installed. However, if by "pre" Windows degrag you mean a Boot-defrag (run before Windows Boots/Starts) Then that is a rather complete defrag and no other degrag is needed. If you aren't using a pay tool such as Diskeeper, JkDefrag is a great way to go, and automatically Boot defrags when run. I have used Windows, Diskeeper, Perfect Disk, and some other defrag tools and I am happiest with JkDefrag. In fact, it [may] be as good, if not better than Diskeeper. Often the only [extra] that comes with a pay utility is a price.

While in most cases, defragging just once a month or even every six months is probably fine for most users, I now defrag once a week as it makes defragging go much faster and causes less ware and tear on the HHD. Kind of like taking a break evey five hundred miles durring a cross-country drive, rather than driving straight through. Just so much easier on the machine.

I would only bother with chkdsk about once a month or just when experiencing HHD problems.

Charlie

Collapse -
(NT) Yes, I did mean "before Windows Boots/Starts"
by eddie11013 / August 15, 2007 7:22 AM PDT
Collapse -
(NT) Sooo, Can anyone answer the questions?
by eddie11013 / August 17, 2007 3:00 AM PDT
Collapse -
reply to: Regarding ?Defrag? programs
by caktus / August 17, 2007 6:57 AM PDT

Sorry. Thought what you were basically concerned about 1-5 at the bottom.

1. ?pre? windows (Boot) defrag can be helpful after changing Startup programs and Services. e.g. msconfig and Services under Administrative Tools.

2. ChkDsk diagnoses errors involving disk surface intertity and attempts to repair damaged sectors. Defrag simply rearanges file fragments into a more contiguous order. On ocasion you may receive and error message from defrag informing you that it is necessary to first run ChkDsk. Also, if you find that defrag continually starts over w/o ever completing, it may be helpful to first run ChkDsk.

3. See 2. above.

4. Probably would conflict as ChkDsk requires that no other porgrams be running. This is why ChkDsk must be run during the Boot process and before Windows starts.

5. See 2. and 3. above.

6. See Boot Defragment. However, while the Windows defrg tool is fine for most users, many third party tools do a more thorough job.

7. It is best to defrag with ony one tool, as different tools may not complement one another. And in many cases actuall undo the benefits of one or the other.

8. "registry defrag" I can only assume, refers to Registry optimization. It can restore a miniscule amount of disk space and possibly make a defrag results graphic user interface (GUI) look a tiny bit more contguous. But I don't do it for this reason. The amount of disk space allocated to the Registry file by default I beleive is set to 25% of the Virtual memory size. Unless you are on a lare network of active servers, I don't beleive you you will every have a problem with Registry size. But if you want to increase the memory allocated to the Registry, just increase the size of virtual memory. Or to change the % allocated to the Registry see Low on Registry Quota.

9. =http://www.majorgeeks.com/download2579.html]RegSeeker is a great, free Registry cleaner, better than some pay Registry cleaners. Run the Registry cleaner prior to optimizibg the Registry. For optimizing the Registry I recommend NTREGOPT
NT Registry Optimizer
Registry Optimization for Windows NT/2000/2003/XP/Vista
. While you're there, I also fecommend ERUNT
The Emergency Recovery Utility NT
Registry Backup and Restore for Windows NT/2000/2003/XP/Vista
If you download/install it, read the documention.

10. ?CCleaner? is a great program for cleanig out temp files. Run it before the Registry cleaner.

11. create a system Restore point
CCleaner
Registry cleaner
Registry optimizer
ChkDsk
Boot defrag (No other defragging is necessary)

Charlie

Collapse -
(NT) Finally, "BIG" Thanks, Charlie
by eddie11013 / August 17, 2007 8:02 AM PDT
Popular Forums
icon
Computer Newbies 10,686 discussions
icon
Computer Help 54,365 discussions
icon
Laptops 21,181 discussions
icon
Networking & Wireless 16,313 discussions
icon
Phones 17,137 discussions
icon
Security 31,287 discussions
icon
TVs & Home Theaters 22,101 discussions
icon
Windows 7 8,164 discussions
icon
Windows 10 2,657 discussions

HOLIDAY GIFT GUIDE 2017

Cameras that make great holiday gifts

Let them start the new year with a step up in photo and video quality from a phone.