Megapixels are the number of pixels the camera is capturing.
Megabytes are the size of the picture file on the actual device.
These two don't match and can vary from device to device depending on numerous things...
As you probably know, all pictures that digital devices produce are made by splitting the picture up into thousands (or millions) of little squares. Each square contains three pieces of information - a RED value, a GREEN value and a BLUE value - and each square is known as a pixel. The more pixels you have, the more detail you get in the picture.
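As a tiny sketch of that idea (Python, purely illustrative - real image libraries store this far more compactly):

```python
# A pixel is just three colour values (red, green, blue), each 0-255.
# A toy 2x2 "image" is then just a grid of such triples.
image = [
    [(255, 0, 0), (0, 255, 0)],      # a red pixel, a green pixel
    [(0, 0, 255), (255, 255, 255)],  # a blue pixel, a white pixel
]

height = len(image)
width = len(image[0])
print(f"{width}x{height} image = {width * height} pixels")
```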
In something like 14.2 MP mode you could take a picture and blow it up to roughly the size of the first floor of a house with very little blockiness appearing, so unless you're taking pictures that you want to put on the side of a building, 14.2 MP is a bit of overkill anyway (although it does have the advantage that you can crop into the image - a sort of "digital zoom" - when the distance exceeds your optical zoom).
Just to give you an idea, a full HD TV has a resolution of 1920x1080 (1080p), which is 2,073,600 pixels - only about 2 MP! So anything beyond about 5 MP is really overkill for standard photos that are printed onto paper, unless you want to do some work on them, like cropping out part of the image and using that as a picture in itself (the same way digital zoom works).
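You can check that arithmetic yourself - megapixels are just width times height divided by a million (the 4352x3264 figure below is an assumed sensor resolution for a 14.2 MP camera, not a quote from any spec sheet):

```python
# Megapixels = width x height / 1,000,000.
def megapixels(width, height):
    return width * height / 1_000_000

print(megapixels(1920, 1080))   # full HD: about 2 MP
print(megapixels(4352, 3264))   # an assumed 14.2 MP sensor resolution
```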
As for this MB to MP conversion ratio, it all depends on the file format the camera uses. Different cameras use different compression methods: the majority use JPG, while some of the more expensive cameras can also store RAW data (essentially the unprocessed sensor readout - similar in spirit to an uncompressed BMP). GIF, for what it's worth, is really a web graphics format limited to 256 colours, which makes it a poor fit for photos.
The best format to go for is RAW (or an uncompressed bitmap like BMP), as that gives you the data with essentially no loss whatsoever, and it gives you an accurate size guide for your photo. Uncompressed data basically splits the picture up into pixels and stores each pixel with its own red, green and blue value, each on a scale of 0 to 255. RAW files typically go further and store 12 or 14 bits per channel rather than 8, which is more colour depth than a standard 24-bit display can even show, so in practice nothing visible is lost. The only problem is that uncompressed data wastes an awful lot of space, and again is overkill for anyone taking just standard pictures. If you were using your camera to shoot the next Steven Spielberg blockbuster you might want RAW to keep maximum quality for editing, but most of the time JPG is a perfectly fine format.
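To put the colour-depth point in perspective (the 12-bit figure is a typical RAW sensor depth I'm assuming here, not a universal one):

```python
# Total number of colours representable at a given bit depth per channel.
# 8 bits/channel = 24-bit colour (JPG, BMP, typical displays);
# 12 bits/channel is an assumed typical RAW sensor depth.
def colours(bits_per_channel, channels=3):
    return 2 ** (bits_per_channel * channels)

print(colours(8))    # 16,777,216 colours at 24-bit
print(colours(12))   # vastly more than a 24-bit display can show
```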
JPG is what is known as a "lossy" compression format. (GIF's compression is actually lossless, but squeezing a photo into its 256-colour palette throws colour information away, so it loses quality in a different way.) The basic space-saving idea is this: repetitive information in the picture is given a short code, so that instead of storing the same redundant data over and over, a key tells the decoder where to reuse it. The easiest way to describe this is the sum 1 divided by 3. If someone told you to write the answer out in full, you would probably tell them to go and take a running jump, because you would be there forever (1/3 is 0.3333... and you would be writing 3s for infinity), so instead you shorten the answer to 0.3 recurring, with a dot over the 3 to show that the 3 repeats. Image compression does a very similar thing. If, for example, you took a picture of the stars at night, the only information that really needs storing is where the picture is white (where the stars are); the rest of the sky is black, so instead of storing countless entries of "000 000 000" the encoder can give that value a short code like B, so data which would appear like this
000 000 000, 000 000 000, 000 000 000
000 000 000, 111 111 111, 000 000 000
000 000 000, 000 000 000, 000 000 000
might in actual fact be stored like this
B, B, B
B, 111 111 111, B
B, B, B
thus replacing all the redundant 0s with a single letter, and taking up a lot less space.
That is obviously only a very basic example. To actually produce its results, JPG performs mathematical transformations on the data (a discrete cosine transform followed by quantisation and entropy coding), unlike the simple substitution above.
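The B-for-black trick above is essentially run-length encoding. Real JPEG works very differently, as just noted, but a toy run-length encoder shows the space-saving principle:

```python
# Toy run-length encoder: collapse runs of repeated values into
# (value, count) pairs. Real JPEG is far more involved, but the idea of
# not storing redundant data over and over is the same.
def rle_encode(pixels):
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            # Same value as the last run: just bump its count.
            encoded[-1] = (p, encoded[-1][1] + 1)
        else:
            encoded.append((p, 1))
    return encoded

# A mostly-black row of pixels (0 = black, 255 = white).
row = [0, 0, 0, 0, 255, 0, 0, 0, 0]
print(rle_encode(row))  # [(0, 4), (255, 1), (0, 4)]
```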
So the number of megapixels will not match the number of megabytes the picture takes up. Stored uncompressed, a 14.2 MP picture needs three bytes per pixel (one each for R, G and B), which works out at roughly 42-43 MB, and different cameras with different compression methods all store information differently. Some cameras also have a "fine" mode which applies less compression.
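To put a number on that, assuming 3 bytes per pixel and decimal megabytes:

```python
# Uncompressed image size: 3 bytes per pixel (one each for R, G, B).
def uncompressed_size_mb(megapixels, bytes_per_pixel=3):
    total_bytes = megapixels * 1_000_000 * bytes_per_pixel
    return total_bytes / 1_000_000  # decimal MB

print(round(uncompressed_size_mb(14.2), 1))  # 42.6
```

A JPG of the same shot is usually only a few MB, which is exactly the compression gap the question is about.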
One important thing to remember, though: if you edit the picture on your PC, save the original first in a lossless format such as BMP, and do your JPG exports from that, because every time you re-save a JPG the compression algorithm throws away a little more quality. Keeping a lossless master preserves as much information as the original file had.