How do you get the magnitude of a vector in Numpy?
If you are at all worried about speed, you should skip np.linalg.norm and instead use:
mag = np.sqrt(x.dot(x))
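For a complete, runnable snippet (the x here is just an arbitrary vector picked for illustration):
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
# Magnitude (Euclidean norm) as the square root of the dot product of x with itself.
mag = np.sqrt(x.dot(x))
print(mag)  # 7.416198487095663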
Here are some benchmarks:
>>> import timeit
>>> timeit.timeit('np.linalg.norm(x)', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0450878
>>> timeit.timeit('np.sqrt(x.dot(x))', setup='import numpy as np; x = np.arange(100)', number=1000)
0.0181372
EDIT: The real speed improvement comes when you have to take the norm of many vectors. Using pure numpy functions doesn't require any for loops. For example:
In [1]: import numpy as np
In [2]: a = np.arange(1200.0).reshape((-1,3))
In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 4.23 ms per loop
In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 18.9 us per loop
In [5]: np.allclose([np.linalg.norm(x) for x in a],np.sqrt((a*a).sum(axis=1)))
Out[5]: True
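If your numpy is recent enough (1.8+), np.linalg.norm also accepts an axis argument, which covers the many-vectors case directly; a quick sketch reusing the a from above:
# One 2-norm per row, no Python loop; matches the expressions above.
row_norms = np.linalg.norm(a, axis=1)
np.allclose(row_norms, np.sqrt((a*a).sum(axis=1)))  # True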
The function you're after is numpy.linalg.norm. (I reckon it should be in base numpy as a property of an array -- say x.norm() -- but oh well).
import numpy as np
x = np.array([1,2,3,4,5])
np.linalg.norm(x)
You can also feed in an optional ord for the n-th order norm you want. Say you wanted the 1-norm:
np.linalg.norm(x,ord=1)
And so on.
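A quick sketch of what a few common choices of ord give for the x above (np.inf selects the infinity norm):
np.linalg.norm(x)               # 7.416198487095663 -- default 2-norm (Euclidean magnitude)
np.linalg.norm(x, ord=1)        # 15.0 -- 1-norm: sum of absolute values
np.linalg.norm(x, ord=np.inf)   # 5.0  -- infinity norm: largest absolute value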
The fastest way I found is via inner1d. Here's how it compares to other numpy methods:
import numpy as np
from numpy.core.umath_tests import inner1d
V = np.random.random_sample((10**6,3,)) # 1 million vectors
A = np.sqrt(np.einsum('...i,...i', V, V))
B = np.linalg.norm(V,axis=1)
C = np.sqrt((V ** 2).sum(-1))
D = np.sqrt((V*V).sum(axis=1))
E = np.sqrt(inner1d(V,V))
print([np.allclose(E, x) for x in [A, B, C, D]])  # [True, True, True, True]
import cProfile
cProfile.run("np.sqrt(np.einsum('...i,...i', V, V))") # 3 function calls in 0.013 seconds
cProfile.run('np.linalg.norm(V,axis=1)') # 9 function calls in 0.029 seconds
cProfile.run('np.sqrt((V ** 2).sum(-1))') # 5 function calls in 0.028 seconds
cProfile.run('np.sqrt((V*V).sum(axis=1))') # 5 function calls in 0.027 seconds
cProfile.run('np.sqrt(inner1d(V,V))') # 2 function calls in 0.009 seconds
inner1d is ~3x faster than linalg.norm and a hair faster than einsum.
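One caveat: numpy.core.umath_tests is an internal module, and inner1d has been deprecated in newer numpy releases (it may be missing entirely). If it isn't available, the einsum expression above does the same job; a minimal stand-in (the inner1d_fallback name is just for illustration, not a numpy API):
# Drop-in stand-in for inner1d using einsum (hypothetical helper).
def inner1d_fallback(u, v):
    return np.einsum('...i,...i', u, v)

E = np.sqrt(inner1d_fallback(V, V))  # same values as the inner1d version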
Yet another alternative is to use the einsum function in numpy, for either arrays:
In [1]: import numpy as np
In [2]: a = np.arange(1200.0).reshape((-1,3))
In [3]: %timeit [np.linalg.norm(x) for x in a]
100 loops, best of 3: 3.86 ms per loop
In [4]: %timeit np.sqrt((a*a).sum(axis=1))
100000 loops, best of 3: 15.6 µs per loop
In [5]: %timeit np.sqrt(np.einsum('ij,ij->i',a,a))
100000 loops, best of 3: 8.71 µs per loop
or vectors:
In [5]: a = np.arange(100000)
In [6]: %timeit np.sqrt(a.dot(a))
10000 loops, best of 3: 80.8 µs per loop
In [7]: %timeit np.sqrt(np.einsum('i,i', a, a))
10000 loops, best of 3: 60.6 µs per loop
There does, however, seem to be some overhead associated with calling it that may make it slower with small inputs:
In [2]: a = np.arange(100)
In [3]: %timeit np.sqrt(a.dot(a))
100000 loops, best of 3: 3.73 µs per loop
In [4]: %timeit np.sqrt(np.einsum('i,i', a, a))
100000 loops, best of 3: 4.68 µs per loop
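Putting the above together, one way to package this (a minimal sketch; the magnitude name and the 1-D/2-D dispatch are just for illustration, not a library API):
import numpy as np

def magnitude(a):
    """Euclidean magnitude of a 1-D vector, or of each row of a 2-D array."""
    a = np.asarray(a)
    if a.ndim == 1:
        # Single vector: dot product with itself, then square root.
        return np.sqrt(a.dot(a))
    # One magnitude per row, without a Python loop.
    return np.sqrt(np.einsum('ij,ij->i', a, a))

magnitude(np.array([3.0, 4.0]))                 # 5.0
magnitude(np.arange(1200.0).reshape((-1, 3)))   # array of 400 row norms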