
Help! My program is truncating my numbers!

If you use large numbers or very long decimal values in some programs, you may see them get truncated. Why?

Topher Kessler, MacFixIt Editor
Topher, an avid Mac user for the past 15 years, has been a contributing author to MacFixIt since the spring of 2008. One of his passions is troubleshooting Mac problems and making the best use of Macs and Apple hardware at home and in the workplace.

If you use popular computational programs such as the spreadsheet workhorses Microsoft Excel and Apple Numbers, you might run into an issue in which large numbers you enter appear to be truncated.

For instance, if you open Excel and enter the value "12345678901234567890" into a cell, the program will change it to "12345678901234500000," where every digit past the 15th is set to zero. This happens in Apple's Numbers spreadsheet and other programs as well.
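You can see the same ceiling outside of spreadsheets. Here is a minimal sketch in Python, whose built-in float is the same 64-bit IEEE 754 "double" format that Excel and Numbers use to store cell values; note that Python shows the raw stored value, while Excel additionally zeroes out every displayed digit past the 15th, so the two displays differ slightly.

```python
# Python's float is a 64-bit IEEE 754 double, the same storage format
# spreadsheets use for cell values.
entered = 12345678901234567890   # 20 significant digits
stored = float(entered)          # rounded to the nearest representable double

print(int(stored))       # 12345678901234567168 -- the 20 digits don't survive
print(f"{stored:.15g}")  # 1.23456789012346e+19 -- the reliable 15 digits
```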

While the significance of those trailing digits is perhaps questionable, the behavior does limit how precise calculations in these programs can be.

The problem here harks back to the old 32-bit-vs.-64-bit debate that cropped up as computer processors transitioned to higher-precision, higher-capacity number handling. At the time, hardware memory addressing was the main limitation; a 32-bit system could allocate at most 4GB of memory to a program, a ceiling that increasingly fell short of what many programs demanded.

Excel's number precision limit
If you enter a number with more than 15 significant digits, programs like Numbers and Excel will truncate the extra digits to zeros. Screenshot by Topher Kessler/CNET

While the switch to 64-bit has taken care of those hardware limitations, there is still the question of how the software is coded. As with the CPU, a program's bitness by itself does not prevent it from handling large numbers; rather, the data types it is programmed to use determine how large and how precise a number can be.

The maximum value of a 32-bit unsigned integer is 2^32 - 1, or 4,294,967,295, which is only 10 digits long. This offers a value range that is acceptable for many calculations, but clearly not for all, especially when dealing with extensive and precise calculations.
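A quick sketch of that ceiling, again in Python purely as an illustration (spreadsheet internals are written in C or similar, so this is an analogy, not the actual implementation): ctypes.c_uint32 behaves like a 32-bit register and wraps values modulo 2^32.

```python
import ctypes

# c_uint32 stores values in 32 bits, so anything at or above 2**32 wraps
# around, just as it would in a 32-bit register.
print(ctypes.c_uint32(2**32 - 1).value)  # 4294967295 -- the 10-digit maximum
print(ctypes.c_uint32(2**32).value)      # 0 -- one more and the value wraps
```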

To overcome this maximum, programming languages include a number type called a "double" (double-precision floating point), a 64-bit format that on 32-bit hardware simply spans two adjacent 32-bit words, giving the program 64 bits to work with for each number. While this might suggest vastly larger numbers, the true precision is far less than 2^64 would imply. In the format of a double-precision number, one bit is reserved for the sign (positive or negative) and 11 bits for the exponent, leaving 52 stored bits for the "significand," the significant-digit component; with the format's implied leading bit, that amounts to 53 bits of precision, which works out to roughly 15 reliable decimal digits (log10 of 2^53 is about 15.95), no matter how many zeros pad the end of the displayed number.
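To make that layout concrete, here is a small sketch (the function name is ours, not anything from Excel or Numbers) that uses Python's struct module to pull those three fields out of a double's raw 64 bits:

```python
import struct

# Split a 64-bit IEEE 754 double into the sign, exponent, and 52
# explicitly stored significand bits described above.
def double_fields(x: float):
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]  # raw 64 bits
    sign = bits >> 63                     # 1 bit
    exponent = (bits >> 52) & 0x7FF       # 11 bits, biased by 1023
    significand = bits & ((1 << 52) - 1)  # 52 stored bits (+1 implied)
    return sign, exponent, significand

sign, exponent, significand = double_fields(12345678901234567890.0)
print(sign)                   # 0 -- positive
print(exponent - 1023)        # 63 -- the value lies between 2**63 and 2**64
print(f"{significand:052b}")  # the 52 explicit significand bits
```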

Therefore, in a program like Excel or Numbers that stores numbers as double-precision floating-point values, you get about 15 significant digits per number, counting the digits on both sides of any decimal point.
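The same ceiling applies to long decimals, as a quick sketch shows; the digits are shared across the decimal point rather than counted separately on each side:

```python
# Digits before and after the decimal point share the same ~15-digit budget.
x = 1.2345678901234567890     # typed with 20 significant digits
print(f"{x:.15g}")            # 1.23456789012346 -- what reliably survives
```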

Of course, the relevance of significant digits beyond 15 is arguable for many computations; however, if you do need more precision than that, you simply will not get it from a program coded in this manner.

Update, 4:23 p.m., PT: Removed discussion erroneously attributing this detail to the program's bitness.



Questions? Comments? Have a fix? Post them below or e-mail us!
Be sure to check us out on Twitter and the CNET Mac forums.