Difference between floats and ints in JavaScript?

Although there is only one number type in JavaScript, many programmers like to show that their code works with floating-point numbers as well as integers. Writing the decimal point explicitly serves purely as documentation.

    // The second clause detects -0: 1 / -0 === -Infinity, which is < 0.
    var isNegative = number < 0 || number == 0 && 1 / number < 0;

This works exactly the same as the code in the Closure Library, but some programmers reading it would assume it only works with integers.
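
A quick console session makes the point concrete - integer and floating-point literals produce exactly the same kind of value:

    1 === 1.0;              // true - both are the same 64-bit double
    typeof 1;               // "number"
    typeof 1.5;             // "number" - there is no separate float type
    Number.isInteger(1.0);  // true - 1.0 is stored as the integer 1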

Addendum: I've recently come across an article by D. Baranovskiy, who makes many criticisms of the Google Closure library and points out that “It’s a JavaScript library written by Java developers who clearly don’t get JavaScript.” He points out more examples of this type confusion in color.js: https://github.com/google/closure-library/blob/master/closure/goog/color/color.js

https://www.sitepoint.com/google-closure-how-not-to-write-javascript/


(A lot has changed since 2011 when this answer was posted - see updates below)

June 2019 Update

BigInt has been available in V8 (Node.js and Chromium-based browsers) since May 2018. It should land in Firefox 68 - see the SpiderMonkey ticket. It is also implemented in WebKit.
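
BigInt gives JavaScript a second, arbitrary-precision integer type with its own literal suffix. A small sketch of what engines that support it accept:

    const big = 9007199254740993n;  // BigInt literal - note the trailing n
    typeof big;                     // "bigint" - distinct from "number"
    9007199254740993;               // 9007199254740992 - too big for a double, silently rounded
    2n ** 64n;                      // 18446744073709551616n - exact
    // Mixing BigInt and Number in arithmetic throws:
    // 1n + 1;                      // TypeError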

BigDecimal hasn't been implemented by any engine yet. In the meantime, look at the alternative libraries.
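
As an example of what such a library provides (this sketch assumes big.js; decimal.js is a similar alternative), exact decimal arithmetic looks like this:

    // Assumes big.js: npm install big.js
    const Big = require('big.js');

    0.1 + 0.2;                          // 0.30000000000000004 with plain numbers
    new Big(0.1).plus(0.2).toString();  // "0.3" - exact decimal arithmetic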

2015 Update

It's been over 4 years since I wrote this answer and the situation is much more complicated now.

Now we have:

  • typed arrays
  • asm.js
  • emscripten

Soon we'll have:

  • WebAssembly with the spec developed on GitHub

This means that the number of numeric types available in JavaScript will grow from just one:

  • 64-bit floating point (the IEEE 754 double precision floating-point number - see: ECMA-262 Edition 5.1, Section 8.5 and ECMA-262 Edition 6.0, Section 6.1.6)

to at least the following in WebAssembly:

  • 8-bit integer (signed and unsigned)
  • 16-bit integer (signed and unsigned)
  • 32-bit integer (signed and unsigned)
  • 64-bit integer (signed and unsigned)
  • 32-bit floating point
  • 64-bit floating point

(Technically the internal representations of all integer types are unsigned at the lowest level, but different operators can treat them as signed or unsigned, e.g. int32.sdiv vs. int32.udiv.)
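
Plain JavaScript already shows the same idea - the bit pattern stays the same and the operator decides whether to read it as signed or unsigned:

    var bits = 0xFFFFFFFF;  // 32 one-bits
    bits | 0;               // -1 - ToInt32 reads the bits as signed
    bits >>> 0;             // 4294967295 - ToUint32 reads the same bits as unsigned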

Most of those are already available in typed arrays:

  • 8-bit two's complement signed integer (Int8Array)
  • 8-bit unsigned integer (Uint8Array)
  • 8-bit unsigned integer, clamped (Uint8ClampedArray)
  • 16-bit two's complement signed integer (Int16Array)
  • 16-bit unsigned integer (Uint16Array)
  • 32-bit two's complement signed integer (Int32Array)
  • 32-bit unsigned integer (Uint32Array)
  • 32-bit IEEE floating point number (Float32Array)
  • 64-bit IEEE floating point number (Float64Array)
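
A short sketch of how the element type constrains what is stored (writes are wrapped, clamped, or rounded, but every read still yields an ordinary JavaScript number):

    var i8 = new Int8Array(1);
    i8[0] = 200;   // out of range for a signed byte
    i8[0];         // -56 - wrapped to 8-bit two's complement

    var u8c = new Uint8ClampedArray(1);
    u8c[0] = 300;  // out of range
    u8c[0];        // 255 - clamped instead of wrapped

    var f32 = new Float32Array(1);
    f32[0] = 0.1;  // stored with 32-bit precision
    f32[0];        // 0.10000000149011612 - read back as a 64-bit double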

asm.js defines the following numeric types:

  • int
  • signed
  • unsigned
  • intish
  • fixnum
  • double
  • double?
  • float
  • float?
  • floatish
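
None of these appear as keywords in the source - asm.js infers them from coercion patterns. A minimal hypothetical module (the function and export names are made up for illustration; the coercion idioms follow the asm.js spec):

    function AsmSketch(stdlib) {
      "use asm";
      var fround = stdlib.Math.fround;

      function addInts(a, b) {
        a = a | 0;             // parameter declared as int
        b = b | 0;
        return (a + b) | 0;    // intish result coerced back to signed
      }

      function addDoubles(a, b) {
        a = +a;                // parameter declared as double
        b = +b;
        return +(a + b);
      }

      function addFloats(a, b) {
        a = fround(a);         // parameter declared as float
        b = fround(b);
        return fround(a + b);  // floatish result coerced back to float
      }

      return { addInts: addInts, addDoubles: addDoubles, addFloats: addFloats };
    }

    var m = AsmSketch(globalThis);
    m.addInts(2, 3);           // 5 (runs as plain JavaScript even without asm.js validation)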

Original 2011 answer

There is only one number type in JavaScript – the IEEE 754 double precision floating-point number.
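
That single type represents integers exactly only up to 2^53, and most decimal fractions only approximately:

    0.1 + 0.2;            // 0.30000000000000004 - binary doubles can't store 0.1 exactly
    0.1 + 0.2 === 0.3;    // false
    Math.pow(2, 53);      // 9007199254740992 - the last integer in the exactly representable run
    Math.pow(2, 53) + 1;  // 9007199254740992 - already rounded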

See these questions for more consequences of that fact:

  • Avoiding problems with javascript weird decimal calculations
  • Node giving strange output on the sum of particular float digits
  • Javascript infinity object