>>I'm thinking at the moment that nothing will sound better than the original CD.

The short answer: yes, unless you can get the 96 kHz master from the recording studio.

The long answer (but still a gross overgeneralization):

You can't get something from nothing. There are some electronic tricks that manufacturers use to make sure that all the audio information on the CD is recovered properly (for example, upsampling), but you can't recover information that simply isn't there. Look up "Nyquist sampling theorem" and "aliasing" online. Here's one link:

http://en.wikipedia.org/wiki/Nyquist-Shannon_sampling_theorem

From the Wikipedia article:
"when sampling a signal (e.g., converting from an analog signal to digital), the sampling frequency must be greater than twice the bandwidth of the input signal in order to be able to reconstruct the original perfectly from the sampled version."
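rebuilding
To make that a bit more concrete, here's a rough numpy sketch (the 8 kHz sampling rate, 1 kHz tone, and 64-sample block are just arbitrary choices for the demo) that rebuilds a band-limited signal from its samples with the textbook sinc-interpolation formula:

    import numpy as np

    fs = 8000.0                      # sampling rate (Hz), arbitrary for the demo
    f0 = 1000.0                      # tone well below fs/2, so the theorem applies
    n = np.arange(64)
    samples = np.sin(2 * np.pi * f0 * n / fs)

    # Whittaker-Shannon reconstruction: x(t) = sum_n x[n] * sinc(fs*t - n)
    t = np.linspace(0, (len(n) - 1) / fs, 2000)
    recon = np.array([np.sum(samples * np.sinc(fs * ti - n)) for ti in t])

    # Away from the edges of the block (where the infinite sum gets truncated),
    # the reconstruction matches the original tone very closely.
    true = np.sin(2 * np.pi * f0 * t)
    print("max error, middle of the block:", np.abs(recon - true)[500:1500].max())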

Before music makes it onto your CDs, anti-aliasing filters remove any content above 22.05 kHz from the audio signal. If we didn't do this to our analog signal before the A/D converter samples it at 44.1 kHz, then any signal at 23 kHz would show up at 21.1 kHz, any signal at 30 kHz would show up at 14.1 kHz, and so on. It's like a mirror at 22.05 kHz, which is half the sampling rate. This nasty effect is called aliasing.
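
You can watch that mirror effect in a few lines of numpy (the 30 kHz tone and the 4096-point FFT are arbitrary choices for the sketch):

    import numpy as np

    fs = 44100.0                            # CD sampling rate
    f_in = 30000.0                          # ultrasonic tone, above fs/2 = 22.05 kHz
    n = np.arange(4096)
    x = np.sin(2 * np.pi * f_in * n / fs)   # "sampling" with no anti-aliasing filter

    # Where did the energy end up in the sampled signal?
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    print("peak near %.0f Hz" % freqs[spectrum.argmax()])   # ~14100 Hz, not 30000 Hz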

Many recording studios now sample at 96 kHz; in that case the anti-aliasing filters can have a much higher cut-off frequency. But even so, the music still has to be sample-rate-converted to 44.1 kHz before it goes onto a CD. That conversion also filters out everything above 22.05 kHz, or else there would be aliasing.
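
As a rough illustration of that conversion step (this uses scipy's generic polyphase resampler, not whatever a particular mastering house actually runs): 44100/96000 reduces to 147/320, and the resampler's built-in low-pass keeps only the content below the new Nyquist frequency.

    import numpy as np
    from scipy.signal import resample_poly

    fs_studio, fs_cd = 96000, 44100
    t = np.arange(fs_studio) / fs_studio            # one second of "studio" audio
    x96 = np.sin(2 * np.pi * 1000 * t) + 0.1 * np.sin(2 * np.pi * 30000 * t)

    # 44100/96000 = 147/320; the polyphase low-pass removes everything above
    # 22.05 kHz, so the 30 kHz component is discarded instead of aliasing to 14.1 kHz.
    x44 = resample_poly(x96, up=147, down=320)
    print(len(x96), "->", len(x44))                 # 96000 samples -> 44100 samples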

So now you're probably wondering why we don't just sample at 40 kHz if human hearing only goes up to about 20 kHz. If we used Fs = 40 kHz, we'd need a very steep analog filter with unity gain up to around 20 kHz and almost zero gain immediately above that. It's very hard to design such a high-order filter, let alone one that doesn't mess up the sound with phase problems. It's safer to sample at 44.1k, 48k, 88.2k, 96k, 176.4k, or 192k.
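
To get a feel for how quickly the required steepness blows up as the sampling rate drops toward 40 kHz, here's a sketch using scipy's Butterworth order estimator for an analog filter (the 1 dB passband ripple and 96 dB stopband attenuation are arbitrary spec choices for the demo; at exactly 40 kHz the transition band vanishes entirely, so the list starts at 41 kHz):

    import numpy as np
    from scipy.signal import buttord

    def antialias_order(fs, f_pass=20e3, ripple_db=1.0, atten_db=96.0):
        # Butterworth order needed to stay flat out to 20 kHz and be ~96 dB down
        # by the Nyquist frequency fs/2 (analog prototype, band edges in rad/s).
        wp = 2 * np.pi * f_pass
        ws = 2 * np.pi * (fs / 2.0)
        order, _ = buttord(wp, ws, ripple_db, atten_db, analog=True)
        return order

    for fs in (41e3, 44.1e3, 48e3, 96e3):
        print("fs = %5.1f kHz -> Butterworth order ~ %d" % (fs / 1e3, antialias_order(fs)))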

But that's not the real reason 44.1 kHz was chosen. From www.hsdal.ufl.edu/Projects/IntroDSP/notes/Lesson_3%20Sampling.doc:

"The real reason for using 44.1 kHz actually comes from solid engineering and not arbitrary assumptions. The video recording industry had previously established standards that included:
• NTSC - 490 lines/frame, 3 samples/line, 30 frames/sec = 44,100 Sa/s.
• PAL - 588 lines/frame, 3 samples/line, 25 frames/sec = 44,100 Sa/s.
Both standards were used and supported with applicable hardware (44.1 kHz ADCs). While engineers could have specified any multimedia sampling frequency greater than 40 kHz, they chose 44.1 kHz because there was already a plethora of low-cost 44.1 kHz video converters in existence. While purists may claim that this represents a careless disregard of applied science and mathematics, it is nevertheless an example of excellent engineering."
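
The arithmetic in that quote checks out, if you want to verify it yourself:

    ntsc = 490 * 3 * 30   # lines/frame * samples/line * frames/sec
    pal  = 588 * 3 * 25
    print(ntsc, pal)      # both print 44100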

If you're interested, read a book (or take a course) on digital signal processing or digital filtering. It's fun stuff, but there's a lot of math (Z-transforms, FFTs, etc.).