A Floating Point Discussion

So, I have a friend who's a software engineer, and she and I love to get into discussions about tech things. The other day she sent me a link to a YouTube video showing someone proving that 0.1 + 0.2 comes out as 0.30000001192092896 in C, with just one word attached to it: "why?" So we had a discussion about it, and she thought I should share it in a post, so here it is. By the way, you should totally connect with her, she's awesome! Alexis


A:

why? https://www.youtube.com/watch?v=TQDHGswF67Q

P:

It's because of a mixture of how fractions are handled in a binary numbering system and the limited number of bits available to store them, which causes things like chopping (cutting the digits off at some point)

It's a fairly math-y thing, but this write-up ("What Every Computer Scientist Should Know About Floating-Point Arithmetic") is pretty good, since it shows the proof behind the logic https://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html

In terms of him showing a difference between C and Python: his C code uses 32-bit floats while Python's floats are 64-bit doubles under the hood, so the error lands in a different decimal place. Python also ships a decimal module in its standard library for doing arithmetic in actual base 10

C and C++ don't have anything like that built in, you need to pull in a library for it. otherwise everything just stays in binary, which causes what looks like inaccurate arithmetic to us
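
adding this for the post: here's the same sum in C with double instead of float. this is a quick sketch of mine, not his code, but since Python's floats are 64-bit doubles it shows why his C numbers and Python's numbers come out different

#include <stdio.h>

int main(void) {
    /* my sketch, not from the video: the same arithmetic in 64-bit double,
       which is what Python uses for its floats */
    double a = 0.1;
    double b = 0.2;
    printf("a+b = %.17f\n", a + b);   /* prints 0.30000000000000004 */
    printf("0.3 = %.17f\n", 0.3);     /* prints 0.29999999999999999 */
    printf("equal? %s\n", (a + b == 0.3) ? "yes" : "no");   /* prints no */
    return 0;
}

still wrong, just wrong way further out, and this is exactly why Python prints 0.30000000000000004 for 0.1 + 0.2. funny enough, in double the sum and the literal 0.3 are not equal, while in the float version further down they happen to round to the same value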

here is an even more in-depth version of his code that shows how a and b are actually stored, printed out to 17 decimal places

$ ./floats   
a = 0.10000000149011612
b = 0.20000000298023224
a+b = 0.30000001192092896

so as you can see, even just assigning a the value 0.1, the computer doesn't assume you mean exactly 0.10000000000000

because binary counting of decimals doesn't work that way

if you do the math of 1/10 in binary, you get 0.000110011001100..., with that 0011 pattern repeating forever, so it has to get chopped off somewhere
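
to make that concrete, here's a little sketch of my own (not from his video): it prints the binary digits of 1/10 by repeated doubling, then dumps the raw bits a 32-bit float actually stores for 0.1f

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    /* my sketch: compute 1/10 in binary by repeated doubling -- the integer
       part after each doubling is the next bit, and 0011 repeats forever */
    double x = 0.1;   /* has plenty of accurate bits to show the pattern */
    printf("1/10 in binary = 0.");
    for (int n = 0; n < 24; n++) {
        x *= 2.0;
        int bit = (int)x;
        printf("%d", bit);
        x -= bit;
    }
    printf("...\n");

    /* and this is what a 32-bit float actually stores for 0.1f: that same
       repeating pattern, chopped and rounded at 24 significant bits */
    float a = 0.1f;
    uint32_t bits;
    memcpy(&bits, &a, sizeof bits);   /* reinterpret the float's bytes */
    printf("0.1f raw bits = 0x%08X\n", (unsigned)bits);
    printf("sign=%u exponent=%u mantissa=0x%06X\n",
           (unsigned)(bits >> 31), (unsigned)((bits >> 23) & 0xFF),
           (unsigned)(bits & 0x7FFFFF));
    return 0;
}

the mantissa comes out 0x4CCCCD, ending in a D instead of the CCCC... repeating forever, and that chop-and-round is where the ...00149 tail in 0.10000000149011612 comes from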

you may wonder, why not fix it? because it's unnecessary 99% of the time. the scientists who need the precision pull in the libraries that fix it. otherwise, it's faster to just let the CPU keep things in binary
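
and a library isn't the only fix. when you need exact results for something simple, a common trick (my example here, not something from our chat) is to skip fractions entirely with integer fixed-point, like counting money in cents

#include <stdio.h>

int main(void) {
    /* my example: fixed-point with plain integers -- track cents instead of
       dollars, so 0.10 + 0.20 is exact, no binary fractions involved at all */
    long a = 10;          /* $0.10 stored as 10 cents */
    long b = 20;          /* $0.20 stored as 20 cents */
    long sum = a + b;     /* exactly 30 cents */
    printf("$%ld.%02ld\n", sum / 100, sum % 100);   /* prints $0.30 */
    return 0;
}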

and again, another point towards python being slow, because all of this bloat is in there out of the box

That kind of help?

A:

That helps sooo much.

P:

yay! :D

That genuinely makes me really happy, because that's a really confusing thing to a lot of people, so anytime one more person gets it, it's a huge win

and it legit does cause weird issues that people don't realize

ooo, like here's a great example! check this out!

let me code it up, one sec

this shit drives people crazy, because again, if you're going to do serious decimal arithmetic, you need the right libraries. otherwise some simple operations work as expected, but things aren't consistent at all, even when you think you're forcing it.

here is the code:

#include <stdio.h>

int main(void) {
    float a = 0.1f;
    float b = 0.2f;
    printf("a = %.17f\n", a);
    printf("b = %.17f\n", b);
    printf("a+b = %.17f\n", a + b);

    /* 0.1f + 0.2f and the literal 0.3 round to the exact same float... */
    float d = a + b;
    float e = 0.3f;
    if (d == e) {
        printf("Yes, 0.3 is in fact the same as 0.3\n");
    } else {
        printf("Nope. 0.3 doesn't equal 0.3, because computers actually fucking suck at math...\n");
    }
    printf("d = %.17f\n", d);
    printf("e = %.17f\n", e);

    /* ...and writing out extra zeros changes nothing: this literal still
       becomes the same float as 0.3f */
    float f = 0.300000000000000f;
    if (d == f) {
        printf("Yes, 0.3 is in fact the same as 0.3\n");
    } else {
        printf("Nope. 0.3 doesn't equal 0.3, because computers actually fucking suck at math...\n");
    }
    printf("f = %.17f\n", f);
    return 0;
}

here is the output:

$ ./floats
a = 0.10000000149011612
b = 0.20000000298023224
a+b = 0.30000001192092896
Yes, 0.3 is in fact the same as 0.3
d = 0.30000001192092896
e = 0.30000001192092896
Yes, 0.3 is in fact the same as 0.3
f = 0.30000001192092896

people run into a lot of trouble with this, especially in things like game development, where a point in space typically has decimal precision. if you start doing math on those floating point numbers while they're not in base 10, you're going to get results you don't expect. like in this example, 0.1 + 0.2 and 0.3 come out the same, but if you got to 0.3 through a different arithmetic operation, shit gets weird

like this...

I added this to the bottom:

float g = 0.7f;
float h = 0.4f;
float i = g - h;   /* should be "0.3", but it's a different 0.3 than a+b gave us */
printf("i = %.17f\n", i);

the result?

i = 0.29999998211860657

so if you compared 'd' to 'i', it would say they don't match, even though logically they're both 0.3 in our minds
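
since this is going in a post, here's the standard workaround sketched out (my code, not from our chat): instead of ==, you treat two floats as equal if they're within a small tolerance

#include <stdio.h>
#include <math.h>

int main(void) {
    float d = 0.1f + 0.2f;   /* 0.30000001... */
    float i = 0.7f - 0.4f;   /* 0.29999998... */

    /* naive equality says they're different, even though both are "0.3" to us */
    printf("d == i? %s\n", (d == i) ? "yes" : "no");

    /* the usual fix: compare within an epsilon -- 1e-6f is just for
       illustration, the right tolerance depends on your numbers */
    float eps = 1e-6f;
    printf("close enough? %s\n", (fabsf(d - i) < eps) ? "yes" : "no");
    return 0;
}

picking a good epsilon is its own rabbit hole, a fixed one like this only works when you know the rough size of the values you're comparing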

what's interesting to me is the theory behind it. a lot of people don't think about the fact that most fractions can't be written out exactly. a terminating decimal is finite, but many fractions aren't. take 2/3: it's .6 repeating, right? well, it literally never ends, so at some point you have to round the last digit up to 7 just to finish writing it, and your accuracy is only as good as wherever you decided to give up tracking decimal places. the full decimal expansion of 2/3 can't actually be written down. it doesn't exist in the material world lol. so how in the hell are computers supposed to handle that? the fact that computer scientists even came up with a solution is amazing to me

A:

You could take this thread as-is and publish

For real

P:

alright, deal
