Exponentials in Python: x**y vs math.pow(x, y)

Using the power operator ** will be faster as it won’t have the overhead of a function call. You can see this if you disassemble the Python code:

>>> dis.dis('7. ** i')
  1           0 LOAD_CONST               0 (7.0) 
              3 LOAD_NAME                0 (i) 
              6 BINARY_POWER         
              7 RETURN_VALUE         
>>> dis.dis('pow(7., i)')
  1           0 LOAD_NAME                0 (pow) 
              3 LOAD_CONST               0 (7.0) 
              6 LOAD_NAME                1 (i) 
              9 CALL_FUNCTION            2 (2 positional, 0 keyword pair) 
             12 RETURN_VALUE         
>>> dis.dis('math.pow(7, i)')
  1           0 LOAD_NAME                0 (math) 
              3 LOAD_ATTR                1 (pow) 
              6 LOAD_CONST               0 (7) 
              9 LOAD_NAME                2 (i) 
             12 CALL_FUNCTION            2 (2 positional, 0 keyword pair) 
             15 RETURN_VALUE         

Note that I’m using a variable i as the exponent here because constant expressions like 7. ** 5 are actually evaluated at compile time.
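You can see this folding in the disassembly itself: no BINARY_POWER is emitted, just a load of the precomputed constant. (The exact constant index and output layout depend on the CPython version, so treat this transcript as illustrative.)

>>> dis.dis('7. ** 5')
  1           0 LOAD_CONST               2 (16807.0)
              3 RETURN_VALUE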

Now, in practice, this difference does not matter that much, as you can see when timing it:

>>> from timeit import timeit
>>> timeit('7. ** i', setup='i = 5')
0.2894785532627111
>>> timeit('pow(7., i)', setup='i = 5')
0.41218495570683444
>>> timeit('math.pow(7, i)', setup='import math; i = 5')
0.5655053168791255

So, while pow and math.pow are about twice as slow, they are still fast enough not to matter much. Unless you can actually identify the exponentiation as a bottleneck, there is no reason to choose one method over the other if it makes the code less clear. This is especially true since pow, for example, offers an integrated modulo operation.
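For reference, that integrated modulo operation is the three-argument form of pow, which computes the power and the remainder in one step:

>>> pow(7, 5, 13)   # same result as (7 ** 5) % 13
11
>>> 7 ** 5 % 13
11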


Alfe asked a good question in the comments above:

timeit shows that math.pow is slower than ** in all cases. What is math.pow() good for anyway? Has anybody an idea where it can be of any advantage then?

The big difference between math.pow and both the built-in pow and the power operator ** is that math.pow always uses float semantics. So if you, for some reason, want to make sure you get a float back as a result, then math.pow will ensure this property.
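A quick illustration of those float semantics: the built-in pow and the ** operator keep integers as integers, while math.pow converts its arguments and always returns a float:

>>> import math
>>> 2 ** 3
8
>>> pow(2, 3)
8
>>> math.pow(2, 3)
8.0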

Let’s think of an example: we have two numbers, i and j, and have no idea whether they are floats or integers, but we want the result of i ** j to be a float. So what options do we have?

  • We can convert at least one of the arguments to a float and then do i ** j.
  • We can do i ** j and convert the result to a float (float exponentiation is automatically used when either i or j is a float, so the result is the same).
  • We can use math.pow.

So, let’s test this:

>>> timeit('float(i) ** j', setup='i, j = 7, 5')
0.7610865891750791
>>> timeit('i ** float(j)', setup='i, j = 7, 5')
0.7930400942188385
>>> timeit('float(i ** j)', setup='i, j = 7, 5')
0.8946636625872202
>>> timeit('math.pow(i, j)', setup='import math; i, j = 7, 5')
0.5699394063529439

As you can see, math.pow is actually the fastest option here! And if you think about it, the function-call overhead argument no longer applies either, because all the other alternatives have to call float() as well.


In addition, it might be worth noting that the behavior of ** and pow can be overridden by implementing the special __pow__ (and __rpow__) methods for custom types. So if you don’t want that behavior (for whatever reason), math.pow will not use the overridden method.
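A small sketch with a toy class (Demo is purely illustrative, and the exact TypeError message depends on the Python version):

>>> import math
>>> class Demo:
...     def __pow__(self, exponent):
...         return 'custom __pow__ called'
...
>>> Demo() ** 2            # ** dispatches to __pow__
'custom __pow__ called'
>>> pow(Demo(), 2)         # so does the built-in pow
'custom __pow__ called'
>>> math.pow(Demo(), 2)    # math.pow ignores __pow__ and tries float conversion
Traceback (most recent call last):
  ...
TypeError: must be real number, not Demo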


The pow() function also allows you to pass a third argument, a modulus.

For example: I was recently faced with a memory error when doing

2**23375247598357347582 % 23375247598357347583

Instead I did:

pow(2, 23375247598357347582, 23375247598357347583)

This returns in mere milliseconds instead of the massive amount of time and memory that the plain exponentiation takes. So, when dealing with large numbers and a modulus, pow() is far more efficient; when dealing with smaller numbers without a modulus, ** is more efficient.
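The reason the three-argument form is so fast is that it can reduce modulo the modulus after every multiplication, so intermediate values never grow huge. Here is a minimal pure-Python sketch of that idea (exponentiation by squaring); this is not CPython’s actual implementation, just the principle:

def pow_mod(base, exp, mod):
    """Exponentiation by squaring, reducing modulo mod at every step."""
    result = 1
    base %= mod
    while exp > 0:
        if exp & 1:                  # current bit of the exponent is set
            result = (result * base) % mod
        base = (base * base) % mod   # square the base for the next bit
        exp >>= 1
    return result

print(pow_mod(2, 23375247598357347582, 23375247598357347583) ==
      pow(2, 23375247598357347582, 23375247598357347583))    # prints True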


Just for the record: the ** operator is equivalent to the two-argument form of the built-in pow function; pow additionally accepts an optional third argument (a modulus) when the first two arguments are integers.

So, if you intend to calculate remainders of powers, use the built-in function. math.pow may give you incorrect results:

import math

base = 13
exp = 100
mod = 2
print(math.pow(base, exp) % mod)   # float result with limited precision
print(pow(base, exp, mod))         # exact integer modular exponentiation

When I ran this, I got 0.0 in the first case, which obviously cannot be true, because 13 is odd (and therefore so are all of its integral powers). The math.pow version works with the limited precision of floats, which causes the error.
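You can check that the float result is only an approximation; 13**100 needs far more than the 53 bits of mantissa a float provides:

>>> import math
>>> int(math.pow(13, 100)) == 13 ** 100   # the float cannot hold the exact value
False
>>> 13 ** 100 % 2                          # exact integer arithmetic: the power is odd
1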

For the sake of fairness, it must be said that math.pow can be much faster:

import timeit
print(timeit.timeit("math.pow(2, 100)", setup='import math'))
print(timeit.timeit("pow(2, 100)"))

Here is what I'm getting as output:

0.240936803195
1.4775809183

Some online examples

  • http://ideone.com/qaDWRd (wrong remainder with math.pow)
  • http://ideone.com/g7J9Un (lower performance with pow on int values)
  • http://ideone.com/KnEtXj (slightly lower performance with pow on float values)