Apple Flunks First Grade Math
Something happened today which shook the very foundations of what I’ve always believed about computers. See, maybe this was just a crazy notion, but I was always under the impression that if there was ONE thing computers did well, it was math. Simple math, algebra, geometry, calculus… it didn’t matter. Computers have always been equation solving machines. Or so I thought.
As it so happened, I was catching up on three months of procrastinated Quicken transactions and I had a slight discrepancy in my numbers. I typed in Command-Space “cal” to launch the built-in Apple calculator via LaunchBar in order to check my figures. Here is the equation I typed in:
… and here is the garbage Apple babbled back at me:
What? How is that possible? I’m subtracting two decimal numbers and the result is a repeating decimal? Thinking something was wrong, I began experimenting by simplifying the equation:
Convinced I had the calculator in some whacked-out Reverse Polish mode or something, I began checking the menus. The only relevant menu item was a setting called “Precision” which went from 0 to 16 and defaulted to 12. How about Precision “Infinity”? I want my damned calculator to be precise enough to subtract simple decimals, and apparently 12 isn’t enough to do this. As it turns out, “Precision” is a bit of a misnomer for this setting because it just represents how many decimals you want to see before the number gets rounded. Anyway, that still doesn’t explain why an equation which needs no rounding to begin with is giving me a repeating decimal.
Upon more experimentation, I discovered the following:
- The error doesn’t seem to occur on numbers less than 1000.
- The error only occurs on some numbers greater than 1000.
- The error doesn’t seem to occur on addition, but only on subtraction.
- The principal software engineer at my company couldn’t tell me how this was even possible.
And so there you have it… what was once simple is now apparently difficult again, thanks to the otherwise brilliant piece of engineering that is OS X Panther. I’m sure the explanation has something to do with floating-point calculations, whatever the hell those are, but that doesn’t make this bug the least bit more acceptable. My worst nightmare is that the repeating decimal answer actually is the correct answer from a computing standpoint but most computers are smart enough to round it for us, knowing what we really want. That would really alter my perceptions of low-level computing quite a bit.
On the bright side, we finally found something PCs are better than Macs at.
75 comments on “Apple Flunks First Grade Math”. Leave your own?
This error is most probably due to the way computers store floating point numbers. As computers do not think with decimals but with binary numbers, these rounding mistakes do happen.
There are exact number-storing formats available, but they are much more cumbersome to use, and coders of simple applications such as calculators generally do not bother.
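A couple of lines of Python (or any language that uses IEEE 754 doubles) make this concrete; a generic illustration of binary rounding, not Calculator’s actual code:

```python
# Python floats are IEEE-754 doubles, the same format most calculator
# apps use internally. 0.1 has no exact binary representation, so the
# value actually stored is a very close approximation.
print(0.1 + 0.2)         # prints 0.30000000000000004, not 0.3
print(f"{0.1:.20f}")     # 0.10000000000000000555 -- the stored value
print(0.1 + 0.2 == 0.3)  # False
```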
Sounds plausible, but that is setting the bar extremely low, don’t you think? I mean, the bar is pretty much just lying on the ground at that point.
I would think any calculator would use a number-storing format which could produce 100% accurate results on simple two-decimal arithmetic. When I think of rounding errors, I think more of legitimate repeating decimals like the square root of 2 or something.
Yes, it is to do with floating point precision.
If you want an explanation check this out.
It can be solved by using integer math rather than floating point. Windows had the same problem with its calculator, but it was upgraded in XP which fixed the problem.
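The integer-math approach the comment describes can be sketched like this; working in whole cents is my assumption, not necessarily what the XP calculator actually does:

```python
# Represent dollar amounts as integer cents; integer subtraction is exact.
def dollars_to_cents(s: str) -> int:
    whole, _, frac = s.partition(".")
    return int(whole) * 100 + int(frac.ljust(2, "0")[:2])

def cents_to_dollars(c: int) -> str:
    return f"{c // 100}.{c % 100:02d}"

diff = dollars_to_cents("9533.24") - dollars_to_cents("215.10")
print(cents_to_dollars(diff))  # 9318.14 exactly, no rounding artifacts
```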
Oh, and I am really really surprised your software “engineer” did not know this. It is really basic introductory computer science stuff.
Thing is, if even this math is wrong, how can we trust the rest?
Sounds like someone needs a new principal software engineer. I find it hard to believe that he doesn’t know much about floating point arithmetic operations.
And “precision” isn’t a misnomer at all.
This situation is not limited to the Macintosh calculator program. It occurs on any system using floating point precision, including handheld calculators. The culprit here is the number 0.1. A perfectly straightforward decimal number, in binary it has an infinite repeating representation. In base two floating-point arithmetic, the number 0.1 lies strictly between two floating-point numbers and is exactly representable by neither of them.
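Python’s fractions module can show the exact rational value the hardware actually stores for 0.1, confirming that it isn’t 1/10:

```python
from fractions import Fraction

# Fraction(0.1) converts the stored double to its exact rational value.
exact = Fraction(0.1)
print(exact)                     # 3602879701896397/36028797018963968
print(exact == Fraction(1, 10))  # False: 0.1 sits between two doubles
```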
Funny you should mention Netscape, because I actually thought of their frame sizing algorithm when I was thinking about this bug. I know it’s probably unrelated but it just reminded me of it. Basically, if you are using a frameset and you specify your frame dimensions in pixels, Netscape (at least up through version 4) would convert your pixels to a window size percentage, and then convert it back to pixels resulting in rounding errors all the time.
Kim and David,
Thanks for the Sun link. It looks like it explains floating points pretty thoroughly, but sheesh, 105 pages of engineer-speak. All I know is that even if this problem is legitimate, these errors should never make it to the consumer. As for our principal software engineer, I’m pretty sure his astonishment had more to do with not knowing how a calculator which had been around since 1984 could suddenly have this bug, as opposed to not knowing what floating point operations are. I mean, what could have possibly changed in Panther that would have introduced such a bug in the calculator? That is more the question…
While it may be correct that the repeating decimal representation of a floating point number is more “accurate” in the sense that it’s a more accurate representation of the internal number used in the calculator program, that strikes me as answering the wrong question. The job of the calculator is not to expose its messy internals to the world, but do the best job possible in reflecting the user’s intent, recognizing that design compromises are necessary.
In this case, the user entered two numbers with specified precision of 10^-2. The computer then spit out a number with an apparent precision of 10^-12. All the digits beyond .01 are made up. Nonsense. Artifacts. Subtracting (or adding) two numbers does not increase the number of fractional digits that are relevant (note that dividing does), so displaying those digits is incorrect. When presented with two numbers like the ones shown and asked to perform an addition or subtraction operation, the calculator should only show the answer with the same degree of precision. (Although a truly comprehensive floating point solution would also indicate the degree of error in the calculation — i.e., 9533.14 +/- 1×10^-14)
My closest analogy is in high school chemistry class when we were asked to weigh samples and perform calculations using the results. The calculations would be good to +/- 2 digits of accuracy (tolerance of about 1%), but my handy pocket calculator could spit out 8 digit calculation results. I’d hand in the paper with all those digits neatly written out at which point the teacher would patiently explain that all the digits past the first two or three were useless and not worth the ink in which they were written.
I can’t imagine it would add a great deal of code to have the calculator keep track of the number of significant digits entered and use it in calculating the number of significant digits to display on output.
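A rough sketch of that significant-digits idea; the function names here are hypothetical, not anything from Apple’s code:

```python
def frac_digits(s: str) -> int:
    """Number of digits the user typed after the decimal point."""
    _, _, frac = s.partition(".")
    return len(frac)

def subtract(a: str, b: str) -> str:
    # Show no more fractional digits than the user actually entered.
    places = max(frac_digits(a), frac_digits(b))
    result = round(float(a) - float(b), places)
    return f"{result:.{places}f}"

print(subtract("9533.24", "215.10"))  # 9318.14
```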
This is a very common problem in computing – it’s by no means restricted to Apple’s calculator. One of the best explanations I’ve seen is this one in the Python manual.
I was distracted while writing my previous post and just now realized I forgot to include the information of greatest value to you: How to eliminate the problem.
Set the precision to 2.
I’m surprised this isn’t the default, as far more users can be expected to need to reconcile their check registers as opposed to needing numbers for structural load calculations or titration values.
Jeff, you forget one crucial thing: we programmers are lazy. The standard math operations in any language are based on the basic integer and floating point operations because they are fast. And if you can choose between a simple
c = a + b
and a slower arbitrary-precision alternative, you’d probably select the first one.
Awwww…. you had a great piece going, and you had to soil it by throwing in a tired “Macs are better than PCs” roundabout comment! Come on, man! ;-p (That bug is quite nutty though.)
Calculators (any math software, really) should not behave this way. There are a lot of ways to make sure math is done properly (Binary Coded Decimal is not exactly a new invention). Doc Griffin – I challenge you to find one handheld calculator that shows this kind of error. TI and HP figured out 50 years ago that users wouldn’t (and shouldn’t) accept such nonsense.
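Python’s decimal module computes in base ten, much as BCD hardware does, so the troublesome subtraction comes out exact. A rough stand-in for illustration, not what TI or HP firmware actually runs:

```python
from decimal import Decimal

# Decimal stores digits in base ten, so 0.01 is represented exactly
# and no binary conversion error can creep in.
print(Decimal("9533.24") - Decimal("215.10"))  # 9318.14
print(Decimal("0.1") + Decimal("0.2"))         # 0.3 exactly
```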
Code Junkie, maybe not quite 50 years ago, but certainly by 1972, they had the issue under control. :-)
Actually, it doesn’t have anything to do with addition or subtraction, just the output number representation. Type 9533.14 and hit return and you’ll get a funky number (most non-zero 2 digit decimals after 9533 seem to be a problem: 11, 12, 13, 14).
They can’t even use the right floating point number in their calculator? The floating point number issue is something you figure out in your first year of Computer Science.
Oh well, it’s just on the Mac so it only affects 3% of computer owners. Not a big deal!
odd. the bug is in the newest version of calculator (3.1) but not in the previous version (3.0) (comparison here). 3.0 also doesn’t have a precision setting.
More to the point, it looks to me like you really do have some serious procrastination problems with getting Quicken up to date. You just need to focus on plowing through. Do the do. Gerd up thy lions!
Apple in a Calculator Computation Conundrum Shocker!
So maybe now Apple will come out with an all new version of Calculator… finally, one with quartz extreme, wifi and iPod compatibility. ;)
Now if they would just make Stickies able to control the Airport Express…
You don’t even have to set the precision down to 2 to correct the issue, setting the precision to anything less than 12 fixes this on my iBook. And, setting the precision to anything greater than 11 re-introduces the problem. This is definitely sloppy coding on someone’s part.
Initially I thought it was sloppy coding too, but I realized it really is not a bad error. The program should probably cut the number of digits it displays down to 10, rounding the result to 10 significant digits before putting it on screen (10 digits is what I have seen on handheld calculators; I believe they could safely go up to 13). The only reason we don’t see this problem on handheld calculators is that most of them (at least the scientific calculators I have used) can only display 10 significant digits: they calculate to 13 significant digits and then round to 10 before displaying.
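The guard-digit scheme described above can be sketched with a round-to-significant-digits helper (plain decimal rounding here, assumed purely for illustration):

```python
from math import floor, log10

def round_sig(x: float, sig: int = 10) -> float:
    """Round x to `sig` significant digits, as handheld displays do."""
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

raw = 9533.24 - 215.10   # may carry a tiny binary error
print(round_sig(raw))    # 9318.14 after rounding to 10 significant digits
```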
But mine works perfectly (just tried your numbers). I’m using 10.3.5’s Calculator.app marked v3.1. Something is amiss with your setup. My “Precision” is set to “12” and the Calculator is in basic mode just like in your screenshot.
I take that back. I’m getting results like you had posted. The difference? The FIRST calculation worked out. After that… Voila. Try it!
Therein lies the answer… Simply go into “View | Precision…” and set it to “2” (like it should be for 2-digit precision as used with dollars and cents). Fixed, and works like it should.
Thanks everyone for your insight regarding this bug. Here are my conclusions, which could of course be wrong:
1. This is clearly a newly introduced bug, and it is not “the correct behavior” for any calculator despite any floating-point explanations.
2. The number .1 seems to cause the floating point problems here.
3. The problem appears to be related to Apple’s Calculator version 3.1 which apparently introduced new features, including the “Precision” menu. The default precision setting of 12 (or higher) seems to consistently induce the problem.
4. The best immediate user fix is to simply bump your precision setting down to 10 or 11. Bumping it down to 2 is not necessary, and in fact, it’s bad because there are clearly circumstances where one would need a few decimal places in their numbers… like the fraction 1/8 (.125).
5. Some people have mentioned that Apple should be using integer math instead of decimal math. Sounds good to me, if that’s more accurate. We’re not talking about tiny 50 cent calculators here. We’re talking about OS X on a strong processor, which should clearly be able to handle any “new processor load” a better math system would introduce.
6. The best comment comes from Jeff O: “The job of the calculator is not to expose its messy internals to the world, but do the best job possible in reflecting the user’s intent, recognizing that design compromises are necessary.”
7. The second best comment comes from Tink: “More to the point, it looks to me like you really do have some serious procrastination problems with getting Quicken up to date. You just need to focus on plowing through. Do the do. Gerd up thy lions!” Indeed Tink, indeed.
It’s a simple bug. (SK is right.)
Instead of rounding, the Calculator truncates.
It’s just that simple…
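The truncation-versus-rounding difference is easy to demonstrate; the subtraction below is a stand-in example, not the exact code path inside Calculator:

```python
from math import trunc

x = 0.3 - 0.1                        # stored as 0.19999999999999998...
truncated = trunc(x * 10**12) / 10**12
print(truncated)                     # 0.199999999999 -- looks "repeating"
print(round(x, 12))                  # 0.2 -- rounding hides the error
```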
How can this possibly happen??? :D
Same for all our OS X builds, but not in our OS 9 boxes! It puts that as .14 like it should be. I forwarded a link to this site to apple and some friends over there, hope you don’t mind :)
And apple responds already and says check out this link:
Damnit Brady, that is exactly the sort of answer I would not expect from Apple. Thanks for the link.
I can’t believe they are giving a techie explanation to an interface problem. Very un-Applelike. My guess is that they will probably start defaulting people to 10 levels of precision now, so the bug stays hidden.
Very un-Apple-like indeed. I’d agree that this will probably be hidden in things to come… Apple never likes to show its flaws :)
I use my trusty 1979-issue Commodore LCD calculator. Still running on the original batteries it came with. Never trust an overkill tool for a simple task.
-he who stacks pork
Do you really have over 9k in your checking account?
In the terminal type: bc
then type: 9533.24-215.10
answer returned is: 9318.14
type control-d to exit…
You durn kids with your fancy GUI calculators…
Seems to me you’re running Panther. On my Jaguar machine it [the equation] comes out just fine, so it must be a problem with Panther, not Apple hardware.
Other than that, use PCalc by TLA Systems. It’s much better than the Apple calculator.
I’d type either “halt” or “quit”, as control-d doesn’t seem to work for me.
A bird told me that this is fixed in an upcoming release.
I’ve seen that on PCs too. GWBasic used to do that all the time. Check out http://www.drscheme.com if you want a system that can do perfect precision math. (Note that mathematically 1 and .9999999 (repeating forever) are the same number.)
Dude, this is not something you should worry about. I wouldn’t even call it a bug. Remember: a floating point calculator, like a slide rule, is supposed to give you an approximation of the true infinitely-precise result. You are worrying about one ten-billionth of a penny? Don’t worry about it!
Yes it is true that many other floating point calculators, including the 1984 Mac desk accessory, can present the illusion of infinite precision in more situations than this one can manage. But I can guarantee that it’s actually very easy to get into the same kind of “repeating decimal” situation, with any of those other calculators–not quite this easy, but still very easy. You just have to stop worrying about getting results that are “only” accurate to one trillionth or so.
Interesting article about this on the windows calculator:
It might also give some perspective on why nobody ever bothers with these things.
It’s not a bug. As others have pointed out, it represents a tiny rounding error introduced in the conversion from base 10 to base 2.
Base 10 is a mostly arbitrary base to use anyway; what does it have going for it other than that we have 10 fingers and 10 toes? The calculator could have been written to use base-10 numbers internally, but this would probably have been a lot of extra effort for a freebie app.
BTW, I’m not sure where your kids go to school, but I’m pretty sure my first-grader won’t be learning long addition with decimals this year.
It may be “a tiny rounding error introduced in the conversion from base 10 to base 2,” but that doesn’t mean it isn’t wrong from the user’s point of view…or any math teacher’s point of view.
You might want to try this really nice (arbitrary precision) calculator instead. It looks nicer than the Apple Calculator and it has its own very clever math routines.
This might make things more clear:
When you see the number “5,” your computer sees “101.” When you see the number “.5,” your computer sees “0.1,” since to the right of the decimal point, instead of “tenths, hundredths, thousandths,” you have “halves, fourths, eighths, sixteenths.” So just as you are unable to represent a number like 1/3 entirely accurately in decimal form, the computer cannot represent other numbers, like 1/10, entirely accurately in binary form. Due to the chip’s inherent inability to store every number 100% correctly, there are tiny errors in the arithmetic, called “floating point errors” or “truncation errors.”

Now there are (crudely speaking; CS folks, don’t jump on me for these statements being only weakly true, this is just a simple explanation) two ways of doing math: the chip’s inherent math, or writing your own software math. The former is many, many times faster than the latter, but you can tweak the latter to be more accurate. The OS X calculator is simply using the ordinary chip routines to do its math rather than something more sophisticated, which it should use, because no one uses the calculator to do 10,000,000 calculations. It’s not a “bug” so much as it is a poor choice of implementation.
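Python’s float.hex() shows those binary expansions directly: 0.5 terminates, while 0.1’s bit pattern repeats until the 53-bit mantissa runs out:

```python
# 0.5 is exactly one "half", so its binary expansion terminates:
print((0.5).hex())   # 0x1.0000000000000p-1
# 0.1 never terminates: the hex digit 9 (binary 1001) repeats until
# the mantissa is cut off, ending in a rounded final digit "a".
print((0.1).hex())   # 0x1.999999999999ap-4
```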
I’ll declare my interest first – Masters Degree in Mathematics, 2001 ;)
Anyway, I agree that this is related to an interesting and very, very old maths conundrum:
0.999999999r = 0.1
The two are technically identical, and yes, technically what is going on here is “correct” from a binary vs. decimal standpoint. Mathematicians love getting hold of obscure stuff (like the equation above) and showing people that weird things happen with even basic numbers. However, regardless of the “reasoning” behind the above Calculator displays, one fact remains:
This is a software bug
Why is it a bug? Because software is more than simple calculations. As someone above suggested, if your software forces your user to deal with hardware limitations, then this is poor software. In the case of the originally-cited issue, the answer given may be technically “correct” from a mathematical “it’s right because…” point of view, but from an application standpoint, it’s just plain wrong.
Example – I walk into a shop, and buy something for 99p. I hand over £1. The shopkeeper has a choice: give me 1p or give me £0.00999999…r. Mathematically, both answers are correct, but in APPLICATION one is clearly impossible and nonsensical.
This is a design bug, and needs to be addressed, regardless of the hardware reasons – or excuses – behind it.
How or why it happens is interesting to know. I ran into this problem the other night while doing my home finances. The reason for the weird results didn’t help me balance my books; it just made my Mac app useless for my third grade math needs.
> 0.999999999r = 0.1 … The two are technically identical

0.999999999r equals 1, not 0.1, in every math course I’ve ever seen. There is an order of magnitude difference!
Doug, your Mac app is not “useless” for your third grade math needs. You just need to learn how to round a decimal number to the nearest penny. You need to learn that anyway, if you want to buy or sell anything in many parts of the world. For example, suppose you want to buy a CD that costs $11.99, in a place with 7% sales tax. Your calculator will tell you the total is $12.8293, but of course in real life you’ll just round up and pay the full $12.83. Go ahead and learn enough about the decimal system to do the rounding mentally; you’ll be happier that way.
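That rounding is one call in most languages; a sketch of the CD example (the exact intermediate digits depend on the binary representation, so they aren’t asserted here):

```python
# $11.99 CD with 7% sales tax: compute the raw total, then round
# to the nearest penny, which is all the cashier cares about.
total = 11.99 * 1.07
print(total)            # roughly 12.8293, give or take binary fuzz
print(round(total, 2))  # 12.83 -- what you actually pay
```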
I just changed my display format to binary. What’s everybody flipping out about??
I watched a TV programme comparing a top-of-the-line PC and a top-of-the-line Mac in 3 simple tests:
1: frame rate during a shoot-em-up – the PC had the faster frame rate.
2: how quickly they could add 3 Photoshop filters to a picture – the PC was quicker, much to the presenter’s surprise.
3: drop them both off a bridge and try using them – the PC worked after a new motherboard was bought from a high street shop. The Mac, however, wasn’t so simple: as you can’t buy parts, they had to send it away and pay a recognised Apple person to do it for them – the PC was cheaper and easier to fix.
How’s that for finding something a PC is better at than a Mac?
Old versions of Netscape have this same problem…
How about this: Set Precision to 16 then type
Yep, looks like a bug to me. Or type
and watch those rounding errors jump around as you type. Cool.
“Since Calculator uses floating-point arithmetic, the calculation is the expected result.” Gotta love that comment straight from Apple. Hands up, everyone who expected the result from my two NON-calculations?
Remember the old Pentium ads? Intel inside, just don’t divide. How will I be able to hold my head up around PC owners now?
On my PowerBook G3 Lombard, running Panther, the error does not occur. Are you still having this problem on your G4?
Looks like a few fuzzies need to get back to CS101.
This was not new to Panther (although it may not have been in previous versions of OS X); it’s been in the original Calculator from OS 9 for a much longer time. In fact, just launch the original Calculator (which will launch Classic); it’s in Macintosh HD:System Folder:Apple Menu Items. Try the following:
Should be 0 right? The output it gives is even more bizarre than your original example. It’s been this way for as long as I can remember (back to System 7.1 at least, I believe).
I just checked these on Tiger (Mac OS X 10.4), and didn’t find these problems. So, although Apple says it is a floating point math problem (which it is) they fixed it anyway.
I think I tried all the permutations listed here. Interestingly, the last one (by nabziF), gives you a very small number (which, when I copy, only comes out at -0 instead of -2.77555756156e-17) IFF you enter it as he has it typed. If, however you do:
– .1 =
You get 0.
If you do it with the Calculator widget, as nabziF typed it, you get 0.
Different coders or different back ends…
I disagree, Ryan. The problem is/was most definitely with the user interface, and about time that they fixed it. Sure, the cause of that problem is the way floating point numbers work, but since they were working exactly how floating point numbers do work, it’s a categorical mistake to say that it was wrong.
And since even first year programming students are taught about that sort of stuff, you could say the programmers were the cause of the floating point numbers being the cause of the user interface problem.
Just tried this on 10.3.9 with Calculator version 3.2.1 and it seems to be gone, whatever your precision setting…
The error here is very small… I am reminded of the Department of Justice case against Microsoft, in which the court threw out several briefs the company submitted. It seems briefs need to be kept below 100,000 words (really not that brief), and Microsoft’s were over the limit.
It turns out Word had miscounted the words: when they took the same file and opened it in Word on a Mac, they got the correct count, which was over 100,000.
Software bugs will always be with us. Just don’t think that it’s only the mac that has some counting flaws.
How can this possibly happen??? :D
I constantly use the calculator feature on my computer. I use it to solve problems for me on boring algebra homework. If what you say is true of all Apple computers, then my calculator “help” would be stupid and not helping me at all.
thanks for the help,
I think this is outrageous.
People need to learn that calculators are the future.
We must progress.
If we must go back to the past, let it be for what we did wrong back then, so we can fix it.
I say rise above the past and look to the future, for if you don’t, it will swallow you whole.
Change the 0.0099999999 on the calculator to 0.01, and you have the correct answer :)
floating point precision bug in mac calendar
Floating point madness
Floating point madness: Mike Davidson runs across a “1.01 != 1.01” type of issue in a calculator, and the comments pull up many good references on how various applications handle the difficulties of describing non-binary numbers in binary memory. The top…
Apple Flucks 1st Grade Math
the answer should be zero but the calc shows -2.449293598294706e-16
os x 10.3.9
Today’s Calculator, version 4.0.6, solved the addition problem “1938 + 65” as 2000. After selecting Show Paper Tape and then pushing Recalculate, it came up with the correct answer, 2003. What an embarrassment for Apple’s math team. I’m running operating system 10.4.11.
On the Mac bug site they wrote:
For an advanced discussion of why this happens, see “What Every Computer Scientist Should Know About Floating-Point Arithmetic”
Interesting, though, that the computer scientists at Apple did not know this, and that they even point out what they should know, but don’t, on their help site.
Here is the problem I am having with the Apple Calculator. I can’t get it to display numbers in the format I would like.
Here is what it is doing: if I calculate 13.5 / 1539 it shows me 8.7719e-03.
While that is technically a correct answer, I want it to display in the format I have become accustomed to, like .0087719.
What do I need to do to get it from this format (8.7719e-03) to this format (.0087719)?
Right now I have OS X 10.4.11, Calculator version 4.0.6, but I have the same problem with Leopard too.
Can anyone tell me how to do the following?
Using Calculator, with a real number between 0 and 1 as input for example, how do I get the angle whose sine equals that number?
That is, how do I enter arcsin (or any other inverse trig function) to determine the angle? Thanks.
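For what it’s worth, inverse trig is a single call in most languages (in Calculator’s scientific view there should be an inverse-sine key, though I haven’t verified which release added it). A Python sketch:

```python
from math import asin, degrees

# arcsin maps a sine value back to its angle, returned in radians.
x = 0.5
angle = asin(x)
print(angle)           # the angle in radians whose sine is 0.5 (pi/6)
print(degrees(angle))  # same angle expressed in degrees (~30)
```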
So stupid. There has to be a way to do this with software. Maybe something that handles both sides of the decimal separately? Then it wouldn’t need to calculate the decimal points as fractions. Regardless of the reason, it’s absolutely, 100.00000000000002% inexcusable.
Mark from the year 2016 checking in. The floating point problem persists. Someone remember to test this in 2028 (I mean, 2027.99999999999999) to see if Apple has gotten around to fixing it yet.