The sampling frequency of a CD is 44.1 kHz. To sample a 22 kHz tone, the Nyquist criterion says you need at least twice that frequency.
A 22 kHz tone might have harmonics that shape the sound wave, but since those harmonics lie above what the sampling frequency can capture, they are lost.
A pure sine wave of 22 kHz will be very roughly digitized on a CD. Sampling means approximating the true analog waveform. Some sounds have very quick transitions in their waves and will not be represented correctly by a digitized wave; they become distorted because they change too quickly for the sampling rate.
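To see how rough the raw samples get right at the edge, here is a small sketch (plain Python, no audio libraries) of a 22 kHz sine sampled at 44.1 kHz. Only about 2.005 samples land in each cycle, so the raw sample values trace a slow 50 Hz beat pattern (fs/2 minus f) instead of the tone's own shape. To be fair, a proper reconstruction filter on playback can still recover the sine from these samples, but the point about the raw digitized values being a crude picture of the wave stands:

```python
import math

fs = 44_100   # CD sampling rate in Hz
f  = 22_000   # tone just under the 22.05 kHz Nyquist limit

# Raw sample values of a full-amplitude 22 kHz sine.
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]
print([f"{s:+.3f}" for s in samples])
# The first samples are all near zero, even though the tone's
# amplitude is 1: the samples ride a 50 Hz beat envelope, and it
# takes about 5 ms (220 samples) before a sample hits a true peak.
```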
I think the sampling rate of CDs is too low, but at the time it was what was technologically achievable.
It occurred to me the other day that a data-compressed codec might actually sound better than a CD, if the codec is one that filters out the high frequencies and then reconstructs them by inference.
For real high frequencies, say above 10 kHz, unless a tone's frequency is an exact divisor of 44.1 kHz, the sampled voltage level is going to be significantly different for every oscillation, and in fact for the rising and falling halves of each oscillation. What effect does that distortion have on the listener? If a codec reconstructs the highs, might they not be more constant, more stable? Since the ear can't discern pitch accurately above about 4 kHz, the inaccuracy of the compressed codec might not be that significant. No traditional musical instrument produces a fundamental much above 4 kHz (the piano having the highest), and I question whether anyone can really discern the difference between the highest 2 or 3 keys on an 88-key piano.
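The "not an exact divisor" point is easy to demonstrate. Here is a rough sketch (plain Python, frequencies chosen for illustration) using a 10 kHz tone at 44.1 kHz: that gives 4.41 samples per cycle, so successive cycles are sampled at different phases and the highest sample captured in each cycle lands at a different height:

```python
import math

fs = 44_100   # CD sampling rate in Hz
f  = 10_000   # 10 kHz is not an exact divisor of 44.1 kHz

def sampled_peak(cycle):
    """Highest sample value landing inside the given cycle
    of the tone (about 4.41 samples fall in each cycle)."""
    start = round(cycle * fs / f)        # first sample index in the cycle
    end = round((cycle + 1) * fs / f)    # first sample index of the next
    return max(math.sin(2 * math.pi * f * n / fs) for n in range(start, end))

for c in range(5):
    print(f"cycle {c}: sampled peak = {sampled_peak(c):+.3f}")
# The sampled peak height wanders from cycle to cycle, even though
# the underlying analog tone peaks at exactly 1.0 every time.
```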
So with these codecs the topmost notes of a piccolo and a piano might be hard to differentiate, but the overall experience might be clearer.
* braces for response *