Prevent numpy from creating a multidimensional array
This behavior has been discussed a number of times before (e.g. Override a dict with numpy support). np.array tries to make as high-dimensional an array as it can. The model case is nested lists: if it can iterate, and the sublists are equal in length, it will 'drill' on down.
Here it went down two levels before encountering lists of different lengths:
In [250]: np.array([[[1,2],[3]],[1,2]],dtype=object)
Out[250]:
array([[[1, 2], [3]],
       [1, 2]], dtype=object)
In [251]: _.shape
Out[251]: (2, 2)
Without a shape parameter (or a hypothetical ndmax to cap the depth), it has no way of knowing whether I want the result to be (2,) or (2,2). Both of those would work with the dtype.
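For comparison, here is one way to get the (2,) interpretation explicitly, a sketch of the empty-and-fill idea developed below; element-wise assignment sidesteps np.array's shape inference entirely:

import numpy as np

# Make a 1-d object array and place each sublist by hand,
# so numpy never gets a chance to infer a (2,2) shape.
a = np.empty(2, dtype=object)
a[0] = [[1, 2], [3]]
a[1] = [1, 2]
a.shape          # (2,)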
It's compiled code, so it isn't easy to see exactly what tests it uses. It tries to iterate on lists and tuples, but not on sets or dictionaries.
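A quick check of that claim (a sketch; the exact wrapping behavior may vary across numpy versions):

import numpy as np

np.array([1, 2]).shape     # (2,)  - a list is iterated
np.array((1, 2)).shape     # (2,)  - so is a tuple
np.array({1, 2}).shape     # ()    - a set becomes a 0-d object array
np.array({1: 2}).shape     # ()    - a dict does too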
The surest way to make an object array with a given dimension is to start with an empty one and fill it:
In [266]: A = np.empty((2,3), object)
In [267]: A.fill([[1,'one']])
In [276]: A[:] = {1,2}       # a set is treated as a single object
In [277]: A[:] = [1,2]       # broadcast error: shape (2,) vs (2,3)
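One caveat worth a short sketch of my own: fill stores a reference to the same object in every cell, so mutating one element mutates all of them:

import numpy as np

A = np.empty((2, 3), dtype=object)
A.fill([1, 'one'])       # every cell holds a reference to the SAME list
A[0, 0].append('x')
A[1, 2]                  # [1, 'one', 'x'] - the change shows up everywhere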
Another way is to start with at least one different element (e.g. a None), and then replace that.
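A minimal sketch of that trick: append a None so the sublists cannot be collapsed into a 2-d array, then slice it back off:

import numpy as np

# Without the None, np.array would produce a (2, 2) array here.
a = np.array([[1, 2], [3, 4], None], dtype=object)[:-1]
a.shape          # (2,)  - the two lists, kept as objects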
There is a more primitive creator, ndarray, that takes a shape:
In [280]: np.ndarray((2,3),dtype=object)
Out[280]:
array([[None, None, None],
       [None, None, None]], dtype=object)
But that's basically the same as np.empty (unless I give it a buffer).
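Indeed, np.empty gives the same None-filled result for an object dtype (object arrays can't be left truly uninitialized, so the entries come out as None):

import numpy as np

np.empty((2, 3), dtype=object)
# array([[None, None, None],
#        [None, None, None]], dtype=object)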
These are fudges, but they aren't expensive (time-wise).
================ (edit)
https://github.com/numpy/numpy/issues/5933, 'Enh: Object array creation function', is an enhancement request. Also relevant is https://github.com/numpy/numpy/issues/5303, 'the error message for accidentally irregular arrays is confusing'.
The developer sentiment seems to favor a separate function for creating dtype=object arrays, one with more control over the initial dimensions and depth of iteration. They might even strengthen the error checking to keep np.array from creating 'irregular' arrays.
Such a function could detect the shape of a regular nested iterable down to a specified depth, and build an object type array to be filled.
def objarray(alist, depth=1):
    # Walk down the first element at each level to measure the
    # first `depth` dimensions.
    shape = []
    l = alist
    for _ in range(depth):
        shape.append(len(l))
        l = l[0]
    # Make an empty object array of that shape and let the
    # element-wise copy fill it.
    arr = np.empty(shape, dtype=object)
    arr[:] = alist
    return arr
With various depths:
In [528]: alist=[[Test([1,2,3])], [Test([3,2,1])]]
In [529]: objarray(alist,1)
Out[529]: array([[Test([1, 2, 3])], [Test([3, 2, 1])]], dtype=object)
In [530]: objarray(alist,2)
Out[530]:
array([[Test([1, 2, 3])],
       [Test([3, 2, 1])]], dtype=object)
In [531]: objarray(alist,3)
Out[531]:
array([[[1, 2, 3]],
       [[3, 2, 1]]], dtype=object)
In [532]: objarray(alist,4)
...
TypeError: object of type 'int' has no len()
At depth 4 the shape scan has indexed into a Test object and hit a plain int, which has no len().
A workaround is of course to create an array of the desired shape and then copy the data:
In [19]: lst = [Test([1, 2, 3]), Test([3, 2, 1])]
In [20]: arr = np.empty(len(lst), dtype=object)
In [21]: arr[:] = lst[:]
In [22]: arr
Out[22]: array([Test([1, 2, 3]), Test([3, 2, 1])], dtype=object)
Note that, in any case, I would not be surprised if numpy's behavior with respect to interpreting iterable objects (which is what you want to use, right?) were version-dependent. And possibly buggy. Or maybe some of these bugs are actually features. Anyway, I'd be wary of breakage when the numpy version changes.
By contrast, copying into a pre-created array should be way more robust.
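Along those lines, a minimal version-robust helper (the name to_object_array is my own) that sidesteps broadcasting entirely by assigning element by element:

import numpy as np

def to_object_array(items):
    # Always returns a 1-d object array with one slot per item,
    # regardless of whether the items themselves are iterable.
    arr = np.empty(len(items), dtype=object)
    for i, x in enumerate(items):
        arr[i] = x       # element-wise assignment: no shape inference
    return arr

to_object_array([[1, 2], [3, 4]]).shape     # (2,)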