Pandas to_csv always substitutes long numpy.ndarray with ellipsis
In some sense this is a duplicate of printing the entire numpy array, since to_csv simply asks each item in your DataFrame for its __str__, so you need to see how that prints:
In [11]: np.arange(10000)
Out[11]: array([   0,    1,    2, ..., 9997, 9998, 9999])
In [12]: np.arange(10000).__str__()
Out[12]: '[   0    1    2 ..., 9997 9998 9999]'
As you can see, once the array exceeds a certain threshold it is printed with an ellipsis; you can lift that threshold by setting it to 'nan':
np.set_printoptions(threshold='nan')
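Note that recent NumPy releases no longer accept the string 'nan' for threshold and raise an error instead; passing a very large integer (or np.inf) has the same effect. A minimal sketch of the modern equivalent:

import sys
import numpy as np

# Effectively disable truncation so str(arr) includes every element
np.set_printoptions(threshold=sys.maxsize)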
To give an example (s below is a StringIO buffer, so we can inspect exactly what to_csv writes):
In [21]: df = pd.DataFrame([[np.arange(10000)]])
In [22]: df # Note: pandas printing is different!!
Out[22]:
0
0 [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13,...
In [23]: s = StringIO()
In [24]: df.to_csv(s)
In [25]: s.getvalue() # ellipsis
Out[25]: ',0\n0,"[   0    1    2 ..., 9997 9998 9999]"\n'
Once the print option is changed, to_csv records the entire array:
In [26]: np.set_printoptions(threshold='nan')
In [27]: s = StringIO()
In [28]: df.to_csv(s)
In [29]: s.getvalue() # no ellipsis (it's all there)
Out[29]: ',0\n0,"[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14\n 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29\n 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44\n 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59\n 60 61 # the whole thing is here...
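If you'd rather not leave the global print options changed, NumPy's printoptions context manager (NumPy 1.15+) can scope the change to just the to_csv call; a small sketch along the same lines as above:

import sys
import numpy as np
from io import StringIO

s = StringIO()
# Raise the threshold only while writing; the previous print
# options are restored automatically when the block exits.
with np.printoptions(threshold=sys.maxsize):
    df.to_csv(s)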
As mentioned, this is usually not a good choice of structure for a DataFrame (numpy arrays in object columns), as you lose much of pandas' speed/efficiency/magic sauce.
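If you control how the frame is built, one alternative (a sketch, assuming the arrays are all the same length) is to let the array become ordinary numeric columns rather than a single object cell, so to_csv writes every value without any print-option tweaking:

import numpy as np
import pandas as pd

arr = np.arange(10000)

# pd.DataFrame([arr]) expands the array into one numeric column per element
# (compare pd.DataFrame([[arr]]) above, which stores the whole array in one object cell).
df_wide = pd.DataFrame([arr])
df_wide.to_csv("arr.csv", index=False)  # hypothetical output path; every element is written out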