I just have to say that while I'm sure you mean well, your post is absolutely riddled with technical errors.
First off, Linux is not a spin-off of Unix. It's a completely separate OS that was modeled after Unix but shares no actual ancestry with it. Linux is more or less a clean-room implementation, written from scratch against published interfaces like POSIX. It's the only one of its kind, at least that I'm aware of. The BSDs proper (FreeBSD, NetBSD, OpenBSD) can trace their history back to 386BSD, which in turn traces back to the original BSD Unix; Mac OS X gets its BSD lineage a bit differently, via NeXTSTEP and 4.4BSD, though it does incorporate a lot of FreeBSD code. Whether or not BSD Unix is officially a Unix I leave to people who really need to get a life to debate. There was a lawsuit (USL v. BSDi), AT&T's side came away with basically nothing, it was settled back in 1994, time to let it go.
Next up, have you ever seen how many services (or daemons, as they're called on Linux) are running after a default install? Plus, this is the generally accepted way of doing things and is taught in computer science programs the world over: you break your program up into small, manageable components that each do one specific task, so you can debug the crap out of each one quickly and easily. Then every time you need that task done, you run that bit of code. It also reduces the overall amount of code. In object-oriented programming parlance this is called encapsulation; it means something slightly different in OOP terms, but it's the same theory applied in a slightly different way. From a software engineering standpoint, Microsoft is doing things more or less exactly right, and some Windows services are closer in role to Linux kernel modules than to daemons. There's no clean mapping of which Windows service corresponds to which Linux daemon or kernel module, they're all lumped in together, but the fact remains that Microsoft is attempting to keep user-level code out of the kernel proper and have it interface with the kernel via services. Technically both Windows and Linux should be following a microkernel design (Mach being the classic example), where this whole idea is taken to a higher level, but as Apple found out quickly with the early versions of OS X, there are serious performance issues with Mach-style message passing, so it's a tradeoff.
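To make the small-single-purpose-component idea concrete, here's a minimal sketch (the `LogCounter` class and its data are hypothetical, purely for illustration): one tiny component with one job and a narrow interface, which is exactly what makes it easy to test and debug in isolation.

```python
# Hypothetical example: one small, single-purpose component -- the same
# idea behind splitting work into separate daemons or services.
class LogCounter:
    """Counts error lines. Callers see only feed() and total();
    the internal counter is encapsulated behind that interface."""

    def __init__(self):
        self._count = 0  # internal state, hidden from callers

    def feed(self, line):
        # The component's one job: recognize and count error lines.
        if "ERROR" in line:
            self._count += 1

    def total(self):
        return self._count

counter = LogCounter()
for line in ["ok", "ERROR: disk full", "ok", "ERROR: timeout"]:
    counter.feed(line)
print(counter.total())  # -> 2
```

Because the component does exactly one thing, you can hammer it with test inputs on its own, then reuse it anywhere that task comes up.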
Linux is in no way simpler than Windows, and the whole market-share explanation for why Windows gets targeted more is based on an extremely oversimplified understanding of the computer industry and the history of both operating systems. Linux is designed to be a Unix-like operating system, and Unix was designed to be a multi-user operating system connected to a network 24/7. When Linus was developing Linux early on, he had the benefit of twenty-odd years of collective Unix development experience to draw from (not his own), such as the POSIX API and all the various attacks that had been launched against Unix systems in that time. DOS, and later Windows, were designed specifically for the PC, which was intended to be an island unto itself: no network and no more than one user, who most likely owned the machine (as opposed to Unix servers, which cost tens of thousands of dollars, and that's in 1970s and 1980s dollars, so easily hundreds of thousands today). Since the person using a PC (which, remember, stands for Personal Computer) was likely the owner, they were given free rein to do whatever they wanted with it. When people wanted to use Windows in the workplace, Microsoft created Windows NT for that: it was multi-user and allowed network admins to lock it down pretty tight.
But all of that is really an academic discussion for people with too much time on their hands. The days of the computer virus and worm written for their own sake are gone; now it's all about making money. First off, you have the general regression of computer skills in the average user. In the early days people had to know some basic coding to make a computer do anything, because the computer was just a bit of hardware and a BASIC interpreter. Then along comes the age of platforms, like DOS, where you start seeing APIs: prepackaged, well-tested, ready-to-use chunks of code that save you, the developer, from recreating a lot of common code. That makes it possible to sell software you can buy in a store and install, so you no longer need to know how to program to use a computer. As time has gone by this trend has only increased, and the number of people who have both the skill and the desire to write a virus is practically zero. That's further hampered by the fact that all versions of Windows in regular use now enforce the rule about going through the kernel to access the hardware, just like Linux.
Then people figured out in the late 90s and early 2000s that you could make money off of this stuff. You don't want to destroy someone's computer; you want some program sitting silently in the background. It started with companies like KaZaA (remember that name) bundling programs similar to SETI@home and selling off access to your computer's CPU to do distributed number crunching for other companies. Kind of sleazy, but also very pedestrian compared to the depths people have taken it to since: funneling people to sites where someone gets a cut of the revenue, the latest fad of ransomware holding your system hostage, keyloggers stealing passwords and bank account info, etc. Why waste your time just deleting files when you could be making money?
Moving along...
While the registry is unique to Windows, GNOME's configuration system (GConf, now dconf) is often criticized for being very similar in nature to the Windows registry. Also, the Windows registry is NOT a complex database. It dates back to at least Windows 3.1, before home computers were capable of handling a complex database; it's a fairly simple hierarchical key-value store, nothing like a real relational database. That's part of the problem. Back in the early 90s when Windows 95 was being built, I'm sure it seemed like a good idea as a replacement for the old .ini file system, but over time it's become more and more of a liability.
Also, while you're correct that file associations are stored in the registry, you're leaving out the key bit that associations are just a matter of convenience for the user. I can create a file association to open any file I want with any program I want, but if that program doesn't know how to read that type of file, it doesn't do me any good. IBM and Apple tried to do something different, having the OS pick the program based on the file format rather than the extension (OS/2's extended attributes, classic Mac OS's type and creator codes), but that required a lot of overhead and third-party developer buy-in that just never happened. Plus it leaves you with the question of what to do when, say, MS Word and LibreOffice are both installed and both can open Word documents.
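For what it's worth, desktop Linux does the same convenience mapping, just in a plain text file instead of the registry. A sketch of what an entry in freedesktop's `~/.config/mimeapps.list` looks like (the application names here are just illustrative defaults):

```ini
[Default Applications]
application/pdf=org.gnome.Evince.desktop
text/plain=org.gnome.gedit.desktop
```

Same principle, same limitation: this only says which program gets launched. Point `application/pdf` at a program that can't parse PDF and you're no better off than with a bad registry association.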
And of course the drawback to individual configuration files is that every single program is constantly reinventing the wheel. There's no rule saying Program Y has to do its configuration the same way as Program X, so you wind up with as many different formats as there are programs. One program may use "true" and "false" while another uses "1" and "0" to represent the same thing. There's also overhead in opening, parsing, and closing each file, and plain text files are the worst of the lot. You're multiplying that overhead across every single program, whereas Windows can keep the registry mapped in memory at all times and just do periodic flushes back to disk.
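The format-divergence problem is easy to demonstrate. A minimal sketch with two hypothetical programs spelling the same boolean setting two different ways (the config strings are invented for illustration):

```python
# Two hypothetical programs, two spellings of the same setting.
import configparser

program_x = "[main]\nverbose = true\n"   # Program X uses true/false
program_y = "[main]\nverbose = 1\n"      # Program Y uses 1/0

for raw in (program_x, program_y):
    cfg = configparser.ConfigParser()
    cfg.read_string(raw)
    # getboolean() happens to accept both spellings -- but only for
    # INI-style files; a program using JSON, YAML, or a homegrown
    # format needs its own parser entirely.
    print(cfg.getboolean("main", "verbose"))  # -> True (both times)
```

And that's the easy case: both programs at least chose INI. In practice you get a different parser, a different vocabulary, and a different file location per program.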
Your argument also completely ignores how most Linux distributions have adopted systemd, which replaces a lot of the old .conf files and rc scripts with its own INI-style unit-file format. There's been some serious scope creep going on as systemd becomes increasingly monolithic. It's sort of like svchost.exe on Windows, only on an even bigger scale: it's basically on its way to becoming one uber-service for the whole of Linux. So like the Windows registry and services all rolled into one.
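For anyone who hasn't seen one, a minimal systemd unit file looks like this (the service name and path are hypothetical; only the section and key names are real systemd syntax):

```ini
# /etc/systemd/system/exampled.service -- illustrative only
[Unit]
Description=Example single-purpose daemon

[Service]
ExecStart=/usr/local/bin/exampled
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Note the irony given the registry discussion above: it's INI-style key-value configuration, centrally managed by one program.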
Defragmentation is another one of those issues that is no longer an issue for virtually everyone. Disk throughput is not the limiting factor in performance probably 95%+ of the time, so it doesn't matter how fragmented the filesystem might be; it's not impacting performance to any significant degree. About the only times it has any effect at all are boot time and program startup (and technically booting is starting a program). Once the OS or a program is loaded into RAM, fragmentation doesn't factor in at all. So every time I see someone getting their panties in a bunch over this, I feel pretty safe assuming the person is a poser: slightly less clueless than the next guy, but convinced they're some titan of knowledge because they can repeat (incorrectly, most of the time) something they saw someone else say and only half understood (if the rest of us are lucky).

It's like the old advice never to install more RAM than your L2 cache could cover. People would tell you you'd take a performance hit because 256K of cache couldn't map all the memory addresses. That was true in the literal sense, but it ignored one very key detail: uncached RAM is still orders of magnitude faster than the alternative, which is hitting the HDD. Sure, cached RAM is faster still, so there was a "hit" relative to the fully cached case, but you still got a net performance gain by adding more RAM.
Finally, upgrading Linux is just as easy/difficult as Windows. There's plenty of opportunity for something to go wrong in either scenario, but the vast majority of the time it goes off without a hitch. Technically speaking, there are probably far more opportunities for something to go wrong on Linux, since so much of the supporting code is third-party and the kernel itself is just one small part. Clearly you never lived through the a.out-to-ELF, libc5-to-glibc, or LILO-to-GRUB transitions, when upgrading a Linux system was painful enough that it was generally easier to wipe the system clean and start over. I doubt you even lived through the XFree86-to-X.Org transition, and the X11-to-Wayland transition is gearing up to be a boatload of fun for everyone, plus there's the ongoing systemd transition.