How do I read CSV data into a record array in NumPy?
Use pandas.read_csv:
import pandas as pd
df = pd.read_csv('myfile.csv', sep=',', header=None)
print(df.values)
array([[ 1. ,  2. ,  3. ],
       [ 4. ,  5.5,  6. ]])
This gives a pandas DataFrame, which provides many useful data manipulation functions that are not directly available with NumPy record arrays. A DataFrame is a 2-dimensional labeled data structure with columns of potentially different types. You can think of it like a spreadsheet or SQL table...
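If you specifically need a NumPy record array rather than a DataFrame, pandas can convert one with DataFrame.to_records(). A minimal sketch, assuming the same myfile.csv as above:

import pandas as pd

df = pd.read_csv('myfile.csv', sep=',', header=None)
# Convert the DataFrame to a NumPy record array; index=False drops the row index
rec = df.to_records(index=False)
print(rec.dtype.names)  # field names come from the column labels, e.g. ('0', '1', '2')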
I would also recommend numpy.genfromtxt. However, since the question asks for a record array, as opposed to a normal array, the dtype=None parameter needs to be added to the genfromtxt call:
import numpy as np
np.genfromtxt('myfile.csv', delimiter=',')
For the following 'myfile.csv':
1.0, 2, 3
4, 5.5, 6
the code above gives an array:
array([[ 1. ,  2. ,  3. ],
       [ 4. ,  5.5,  6. ]])
and np.genfromtxt('myfile.csv', delimiter=',', dtype=None) gives a record array:
array([(1.0, 2.0, 3), (4.0, 5.5, 6)],
      dtype=[('f0', '<f8'), ('f1', '<f8'), ('f2', '<i4')])
This has the advantage that files with multiple data types (including strings) can be easily imported.
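As a sketch of that, suppose a hypothetical mixed.csv with a header row and a string column; names=True takes the field names from the header, and dtype=None infers a type per column (the encoding argument avoids byte strings in recent NumPy versions):

import numpy as np

# mixed.csv (hypothetical contents):
# name,height,age
# Alice,1.65,30
# Bob,1.80,25
data = np.genfromtxt('mixed.csv', delimiter=',', dtype=None, names=True, encoding='utf-8')
print(data['name'])    # string column
print(data['height'])  # float column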
Use numpy.genfromtxt() by setting the delimiter kwarg to a comma:
from numpy import genfromtxt
my_data = genfromtxt('my_file.csv', delimiter=',')
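Note that this call returns a plain float array. To get a record-style (structured) array from the same file, you can also pass dtype=None together with explicit field names; a sketch, assuming my_file.csv has three numeric columns (the names 'a', 'b', 'c' are just an illustration):

from numpy import genfromtxt

# dtype=None infers a type per column; supplying names produces a structured array
my_data = genfromtxt('my_file.csv', delimiter=',', dtype=None, names=['a', 'b', 'c'])
print(my_data['a'])  # access a column by field name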