If I’m comparing ints with floats, it is my fault in the first place
TL;DR:
In Python, the following returns False.
9007199254740993 == 9007199254740993.0
The floating point number 9007199254740993.0 is internally represented in memory as 9007199254740992.0, because 9007199254740993 is 2**53 + 1, the first integer a 64-bit double cannot represent exactly.
Python has special logic for comparing ints with floats without silently losing precision. Here it ends up comparing the int 9007199254740993 with the float 9007199254740992.0, sees that the integer parts differ, and returns False.
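A quick way to see this in the REPL (output from CPython with 64-bit doubles; the variable names are just for illustration):

big = 9007199254740993          # 2**53 + 1, exactly representable as an int
as_float = 9007199254740993.0   # rounds to the nearest double, which is 2**53
print(f"{as_float:.1f}")        # 9007199254740992.0
print(big == as_float)          # False: the int and the rounded float differ
print(float(big) == as_float)   # True: converting the int first loses the same bit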
Comparing floats for equality is generally a bad idea anyways.
Floats should really only be used for approximate math. You need something like Java’s BigDecimal or BigInteger if you want exact decimal or integer math.
Looks like the decimal module is the Python equivalent of BigDecimal (Python ints are already arbitrary-precision, so there is no need for a BigInteger).
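A minimal sketch of what that looks like (the Decimals are constructed from strings on purpose, since constructing them from floats would just inherit the floats’ rounding error):

from decimal import Decimal
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True: exact decimal arithmetic
print(0.1 + 0.2 == 0.3)                                    # False with binary floats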
Comparing is fine, but it should be fuzzy. Less than and greater than are fine, so you should basically only be checking whether a value falls within a range, not whether it equals a specific value.
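Something like this sketch, say (the tolerance of 1e-9 is an arbitrary pick for illustration):

total = sum(0.1 for _ in range(10))   # accumulates binary rounding error
print(total)                          # 0.9999999999999999
print(total == 1.0)                   # False: exact comparison fails
tol = 1e-9
print(1.0 - tol < total < 1.0 + tol)  # True: a range check with < and > behaves as intended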
Do we have a JS-type situation here?
Probably more like the old precision problem. It exists in C/C++ too and it’s just how floats and ints work.
I don’t think comparisons should be doing type conversion. If I compare a float to an int, I want it to say False because the types are different.
I don’t think that’s how most programmers expect it to work at all.
However, most people would also expect 0.1 + 0.2 == 0.3 to return true, so what do I know.
Floating point is something most of us ignore until it bites us in the ass. And then we never trust it again.
I have to admit: If you (semi-)regularly use floating point comparisons in programming, I don’t know why you would ever expect 0.1 + 0.2 == 0.3 to return true. It’s common practice to check
abs(a - b) < tol
, where tol is some small number, to the point that common unit-testing libraries have built-in methods like assertEqual(a, b, tol) specifically for checking whether floats are “equal”.
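In Python specifically, the standard library already has helpers along those lines; a small sketch (math.isclose uses a default relative tolerance of 1e-09, and assertAlmostEqual rounds the difference to 7 decimal places by default):

import math
import unittest

print(0.1 + 0.2 == 0.3)               # False: exact binary comparison
print(math.isclose(0.1 + 0.2, 0.3))   # True: within the default relative tolerance

class TestFloats(unittest.TestCase):
    def test_sum(self):
        # Passes: the difference rounds to zero at 7 decimal places.
        self.assertAlmostEqual(0.1 + 0.2, 0.3)

if __name__ == "__main__":
    unittest.main()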