Importance of Kronecker product in quantum computation
The "Kronecker product", better known as the tensor product, is the natural notion of a product for spaces of states, when these are considered properly:
A space of states is not a Hilbert space $\mathcal{H}$, but the projective Hilbert space $\mathbb{P}\mathcal{H}$ associated to it. This is the statement that quantum states are rays in a Hilbert space.
Now, why does the physical notion of combining the spaces of states of individual systems into a space of states of the combined system correspond to taking the tensor product? The reason is that we want every action of an operator (which are linear maps) on the individual states to define an action on the combined state - and the tensor product is exactly that, since, for every pair of linear maps $ T_i : \mathcal{H}_i \to \mathcal{H}$ (which is a bilinear map $(T_1,T_2) : \mathcal{H}_1 \times \mathcal{H}_2 \to \mathcal{H}$) there is a unique linear map $T_1 \otimes T_2 : \mathcal{H}_1 \otimes \mathcal{H}_2 \to \mathcal{H}$.
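This defining property can be checked numerically with `numpy.kron` (a minimal sketch; the particular matrices and states are illustrative, not from the text):

```python
import numpy as np

# Two single-system operators (any 2x2 complex matrices will do here;
# these particular values are illustrative).
T1 = np.array([[0, 1], [1, 0]], dtype=complex)   # acts on system 1
T2 = np.array([[1, 0], [0, -1]], dtype=complex)  # acts on system 2

psi = np.array([1, 0], dtype=complex)  # a state of system 1
phi = np.array([0, 1], dtype=complex)  # a state of system 2

# The Kronecker product T1 ⊗ T2 acts on the combined state |psi> ⊗ |phi>
# exactly as T1 and T2 act on the factors separately:
lhs = np.kron(T1, T2) @ np.kron(psi, phi)
rhs = np.kron(T1 @ psi, T2 @ phi)
assert np.allclose(lhs, rhs)
```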
Alternatively, concentrating more on the projective nature of the spaces of states, we observe that $\lvert \psi \rangle$ and $a \lvert \psi \rangle$ are the same state for any nonzero $a \in \mathbb{C}$. Therefore, denoting the sought-for physical product by $\otimes$ (i.e. not assuming it is the tensor product), we must demand that $$\lvert \psi \rangle \otimes \lvert \phi \rangle = (a\lvert \psi \rangle) \otimes \lvert \phi \rangle = a (\lvert \psi \rangle \otimes \lvert \phi \rangle)$$ since combining $\lvert \psi \rangle$ with $\lvert \phi \rangle$ and combining $a\lvert \psi \rangle$ with $\lvert \phi \rangle$ must yield the same physical state, i.e. map onto the same projective state. This obviously fails for the Cartesian product, since the pair $(a\lvert \psi \rangle,\lvert \phi \rangle)$ is not a multiple of the pair $(\lvert \psi \rangle,\lvert \phi \rangle)$, but it is true for the tensor product.
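The contrast between the two products can be made concrete in numpy (a small sketch; the scalar and states are illustrative):

```python
import numpy as np

a = 2.0 + 1.0j                        # an arbitrary nonzero complex scalar
psi = np.array([1, 0], dtype=complex)
phi = np.array([0, 1], dtype=complex)

# Tensor product: scaling one factor scales the whole product vector,
# so a|psi> ⊗ |phi> and |psi> ⊗ |phi> lie on the same ray.
assert np.allclose(np.kron(a * psi, phi), a * np.kron(psi, phi))

# Cartesian product (a pair of vectors): (a|psi>, |phi>) is NOT a scalar
# multiple of (|psi>, |phi>) -- the second entries agree but the first
# do not, so the pairs do not lie on a common ray.
pair = np.concatenate([psi, phi])             # [1, 0, 0, 1]
scaled_pair = np.concatenate([a * psi, phi])  # [a, 0, 0, 1]
assert not np.allclose(scaled_pair, a * pair)
```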
ACuriousMind's answer pretty much sums up the reasons, which are essentially mathematical.
If you want to grasp the "physical significance", then I suggest you work through an example: think of two quantum systems, each with three base states: $\left.\left|1\right.\right>$, $\left.\left|2\right.\right>$ and $\left.\left|3\right.\right>$. The set of linear superpositions in one of these quantum spaces is the set of unit-magnitude vectors of the form $\alpha_1\,\left.\left|1\right.\right>+\alpha_2\,\left.\left|2\right.\right>+\alpha_3\,\left.\left|3\right.\right>$, where $|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2=1$. Your states are going to be $3$-component vectors and they live in three-dimensional spaces.
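The normalization condition is easy to check for any particular choice of weights (the weights below are illustrative):

```python
import numpy as np

# A general state of one three-level system: a unit vector of complex
# superposition weights over the basis {|1>, |2>, |3>}.
alpha = np.array([0.5, 0.5j, np.sqrt(0.5)], dtype=complex)

# |a1|^2 + |a2|^2 + |a3|^2 = 1 -- note the moduli, since the weights
# are complex.
assert np.isclose(np.sum(np.abs(alpha) ** 2), 1.0)
```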
Now when we combine these two systems, the base states don't combine in a Cartesian product to give a six-dimensional space. No, individually, each quantum system stays in its own space spanned by $\{\left.\left|1\right.\right>, \,\left.\left|2\right.\right>,\, \left.\left|3\right.\right>\}$ whilst the other one can be in any state in its own space spanned by its own versions of $\{\left.\left|1\right.\right>, \,\left.\left|2\right.\right>,\, \left.\left|3\right.\right>\}$.
So, with system 1 in state $\left.\left|1\right.\right>$, system 2 can be in any state of the form $\alpha_1\,\left.\left|1\right.\right>+\alpha_2\,\left.\left|2\right.\right>+\alpha_3\,\left.\left|3\right.\right>$. So the set of combined quantum states where system 1 is in state $\left.\left|1\right.\right>$ is a three-dimensional vector space. A different three-dimensional vector space of combined states arises if system 1 is in state $\left.\left|2\right.\right>$ with system 2 in an arbitrary $\alpha_1\,\left.\left|1\right.\right>+\alpha_2\,\left.\left|2\right.\right>+\alpha_3\,\left.\left|3\right.\right>$ state. Likewise for the set of combined states with system 1 in state $\left.\left|3\right.\right>$.
So our combined system has nine base states: it is a vector space of 9 dimensions, not 6. Let's write our base states for the moment as $\left.\left|i,\,j\right.\right>$, meaning system 1 in base state $i$, system 2 in base state $j$. Now, write a superposition of these states as a nine-dimensional column vector stacked up as three lots of three: the first 3 elements are the superposition weights of the $\left.\left|1,\,j\right.\right>$, the next 3 the weights of the $\left.\left|2,\,j\right.\right>$ and the last three the weights of the $\left.\left|3,\,j\right.\right>$. This is what a matrix representation of a general combined state will be.
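This stacking convention is exactly the ordering `numpy.kron` uses for vectors, which a quick sketch confirms:

```python
import numpy as np

e = np.eye(3)  # e[i] is the column vector of base state |i+1>

# |i, j> = |i> ⊗ |j> is a 9-component vector. With the Kronecker
# ordering, the weight of |i, j> sits at flat index 3*(i-1) + (j-1):
# the first three slots are |1,1>, |1,2>, |1,3>, the next three the
# |2,j>, and the last three the |3,j>.
ket_2_3 = np.kron(e[1], e[2])  # the base state |2, 3>
assert ket_2_3.shape == (9,)
assert ket_2_3[3 * (2 - 1) + (3 - 1)] == 1.0  # single 1 at index 5
```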
Now, suppose we have a linear operator $T_1$ that acts on the first system alone, and a linear operator $T_2$ that acts on the second alone. These operators on the individual states have $3\times 3$ matrices. Then an operator on the combined system has a $9\times 9$ matrix. If you form the matrix Kronecker product $T_1\otimes T_2$, then this is the matrix of the operator that imparts the same $T_1$ to the three $\left.\left|i,\,1\right.\right>$ components, the three $\left.\left|i,\,2\right.\right>$ components and the three $\left.\left|i,\,3\right.\right>$ components and likewise imparts the same $T_2$ to the three $\left.\left|1,\,j\right.\right>$ components, the three $\left.\left|2,\,j\right.\right>$ components and the three $\left.\left|3,\,j\right.\right>$ components. This is what ACuriousMind means when he says:
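The block structure described here can be seen directly in the $9\times 9$ matrix (a sketch with randomly chosen, purely illustrative operators):

```python
import numpy as np

rng = np.random.default_rng(0)
T1 = rng.standard_normal((3, 3))  # operator on system 1 (illustrative)
T2 = rng.standard_normal((3, 3))  # operator on system 2 (illustrative)

K = np.kron(T1, T2)               # 9x9 operator on the combined space
assert K.shape == (9, 9)

# Block structure: the (i, k) 3x3 block of K is T1[i, k] * T2. So K
# applies the same T2 within each group of three |i, j> components,
# while T1 mixes the groups themselves -- the "same T1 / same T2"
# action described above.
assert np.allclose(K[0:3, 3:6], T1[0, 1] * T2)
```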
> we want every action of an operator (which are linear maps) on the individual states to define an action on the combined state - and the tensor product is exactly that, since, for every pair of linear maps $ T_i : \mathcal{H}_i \to \mathcal{H}$ (which is a bilinear map $(T_1,T_2) : \mathcal{H}_1 \times \mathcal{H}_2 \to \mathcal{H}$) there is a unique linear map $T_1 \otimes T_2 : \mathcal{H}_1 \otimes \mathcal{H}_2 \to \mathcal{H}$.
I work through a further detailed example for two coupled oscillators in my answer here.