Amdahl's rule of thumb is that 1 byte of memory and 1 bit per second of I/O are required for each instruction per second supported by a computer. This also goes by the name Amdahl's Other Law. (from Wikipedia, the free encyclopedia)
So 1 megabyte of data = 10^6 (1,000,000) bytes, or 8 x 10^6 bits on the I/O side.
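To make that concrete, here's the arithmetic as a quick Python sketch; the 100 MIPS figure is just an assumed example, not from any spec sheet:

```python
# Amdahl's rule of thumb: a machine doing N instructions per second should
# have about N bytes of memory and N bits per second of I/O bandwidth.
instructions_per_second = 100e6   # assumed example: a 100 MIPS machine

balanced_io_bits_per_s = instructions_per_second   # 1 bit/s per instruction/s

# Time to push 1 MB (10^6 bytes = 8 x 10^6 bits) through that balanced I/O:
megabyte_in_bits = 1e6 * 8
seconds_per_mb = megabyte_in_bits / balanced_io_bits_per_s
print(f"{seconds_per_mb:.3f} s per MB")   # prints 0.080 for the 100 MIPS example
```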
You'd have to derive the I/O bandwidth on the mainboard, both the RAM bandwidth and the CPU front-side bus (go with the weakest link, to be safe),
and then look at the MIPS or GFLOPS figures for the given processor.
Honestly, I'm guessing there's going to be a basic derived formula in the book, or at least a guideline, given that I've never seen a textbook be so ambiguous.
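If it helps, this is the kind of back-of-envelope guideline I'd expect, sketched in Python. All the numbers are placeholders I made up, and the instructions-per-byte factor depends entirely on what "processing" means for the workload:

```python
def seconds_per_megabyte(mips, mem_bandwidth_mb_s, instructions_per_byte=1.0):
    """Back-of-envelope time to process 1 MB (10^6 bytes).

    mips                  -- processor throughput, millions of instructions/s
    mem_bandwidth_mb_s    -- weakest-link bus/RAM bandwidth, MB/s
    instructions_per_byte -- assumed work per byte; 1.0 is a pure guess
    """
    cpu_time = (1e6 * instructions_per_byte) / (mips * 1e6)  # compute-bound time
    io_time = 1.0 / mem_bandwidth_mb_s                       # bandwidth-bound time
    return max(cpu_time, io_time)   # the slower side dominates

# Placeholder figures, not real spec-sheet numbers:
print(seconds_per_megabyte(mips=2000, mem_bandwidth_mb_s=3200))  # ~0.0005 s
```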
For a math book problem my wife is working on, we're trying to compare how fast two different processors can process a megabyte of data. We've searched processor specs and side-by-side comparisons but haven't found anything that puts it in those simple terms, which is what we need for this general book. Can anyone point us to a formula that goes from the gigahertz rating of a given processor to the seconds per megabyte processed?
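For what it's worth, the simplest formula I know of goes clock rate -> instructions per second -> seconds per megabyte, and it only works if you assume two fudge factors: instructions completed per clock cycle (IPC) and instructions of work per byte. A rough Python sketch with placeholder values:

```python
def seconds_per_mb_from_ghz(ghz, ipc=1.0, instructions_per_byte=1.0):
    """Rough seconds to process one megabyte (10^6 bytes).

    ghz                   -- clock rate in GHz
    ipc                   -- assumed instructions retired per cycle
    instructions_per_byte -- assumed instructions of work per data byte
    """
    instructions_per_second = ghz * 1e9 * ipc
    work = 1e6 * instructions_per_byte      # total instructions for 1 MB
    return work / instructions_per_second

# Comparing two hypothetical chips under identical assumptions:
print(seconds_per_mb_from_ghz(2.0))   # 2.0 GHz -> 0.00050 s per MB
print(seconds_per_mb_from_ghz(3.0))   # 3.0 GHz -> ~0.00033 s per MB
```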