Python's array.array versus numpy.array
It all depends on what you plan to do with the array. If all you're doing is creating arrays of simple data types and doing I/O, the array module will do just fine.
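For illustration, storing and reloading a typed array of doubles needs nothing beyond the standard library (the file name values.bin is just a placeholder):

from array import array

# Build a typed array of C doubles and write it to a binary file
a = array('d', [1.0, 2.0, 3.0])
with open('values.bin', 'wb') as f:
    a.tofile(f)

# Read the three doubles back into a fresh array
b = array('d')
with open('values.bin', 'rb') as f:
    b.fromfile(f, 3)
print(b)  # array('d', [1.0, 2.0, 3.0])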
If, on the other hand, you want to do any kind of numerical calculations, the array module doesn't provide any help with that. NumPy (and SciPy) give you a wide variety of operations between arrays and special functions that are useful not only for scientific work but also for things like advanced image manipulation, or in general anything where you need to perform efficient calculations on large amounts of data.
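As a rough sketch of what that buys you, operations like these act on whole arrays at once, with no explicit Python loops:

import numpy as np

x = np.linspace(0.0, 2.0 * np.pi, 1000)  # 1000 evenly spaced samples
y = np.sin(x) * np.exp(-x / 4.0)         # elementwise special functions
print(y.mean(), y.max())                 # reductions over the whole array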
NumPy is also much more flexible: it supports arrays of arbitrary Python objects, and it can interact "natively" with your own objects if they conform to the array interface.
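Two hedged illustrations of that flexibility (the MySignal class is made up, and implementing __array__ is just one simple way to expose your data; the lower-level __array_interface__ is another): an object array holding Fractions, and a user-defined class that np.asarray can consume directly:

import numpy as np
from fractions import Fraction

# dtype=object keeps the Python objects themselves in the array
f = np.array([Fraction(1, 3), Fraction(2, 3)], dtype=object)
print(f.sum())  # Fraction(1, 1)

# A made-up class exposing its data through the __array__ protocol
class MySignal:
    def __init__(self, values):
        self.values = values
    def __array__(self, dtype=None, copy=None):
        return np.asarray(self.values, dtype=dtype)

s = MySignal([1.0, 2.0, 3.0])
print(np.asarray(s) * 2)  # [2. 4. 6.]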
A small benchmark for the benefit of whoever might find it useful (following the excellent answer by @dF.):
import numpy as np
from array import array

# Fixed-size NumPy array, preallocated once
def np_fixed(n):
    q = np.empty(n)
    for i in range(n):
        q[i] = i
    return q

# Grow by reallocating with the np.resize function
def np_class_resize(isize, n):
    q = np.empty(isize)
    for i in range(n):
        if i >= q.shape[0]:
            q = np.resize(q, q.shape[0] * 2)
        q[i] = i
    return q

# Grow in place with the ndarray.resize method
def np_method_resize(isize, n):
    q = np.empty(isize)
    for i in range(n):
        if i >= q.shape[0]:
            q.resize(q.shape[0] * 2)
        q[i] = i
    return q

# array.array with append
def arr(n):
    q = array('d')
    for i in range(n):
        q.append(i)
    return q

isize = 1000
n = 10000000
%timeit -r 10 a = np_fixed(n)
%timeit -r 10 a = np_class_resize(isize, n)
%timeit -r 10 a = np_method_resize(isize, n)
%timeit -r 10 a = arr(n)

The output gives:

1 loop, best of 10: 868 ms per loop
1 loop, best of 10: 2.03 s per loop
1 loop, best of 10: 2.02 s per loop
1 loop, best of 10: 1.89 s per loop
The preallocated NumPy array (np_fixed) is clearly the fastest. Among the growing variants, array.array is slightly faster and its append API saves you some hassle, but if you need more than just storing doubles, then numpy.resize is not a bad choice after all (if used correctly).
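One caveat on the benchmark itself: filling an array with consecutive values in a Python-level loop is close to a worst case for NumPy. When the data can be produced vectorized, the loop disappears entirely; a sketch (exact timings are machine-dependent, but this is typically orders of magnitude faster than any of the loop-based variants above):

import numpy as np

# Produces the same values as np_fixed(n), without a Python loop
def np_vectorized(n):
    return np.arange(n, dtype=float)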