Question about cross product and tensor notation

You can define a tensor that agrees with the Levi-Civita symbol in orthonormal coordinate systems but that has the correct components in non-orthonormal ones.

This, and other results, can be derived in the setting of Clifford algebra.

Clifford algebra deals with a "quotient" of the tensor algebra: an interesting subset, if you will, of tensors that correspond to vectors, planes, and other objects that can often be interpreted geometrically.

To facilitate this, Clifford algebra introduces a "geometric product" of vectors, which obeys the following laws (checked numerically in the sketch after the list):

  1. If two vectors $a, b$ are orthogonal, then $ab = -ba$ under the product.
  2. The product of a vector with itself is a scalar, i.e. $aa = |a|^2$.
  3. The product is associative: $(ab)c = a(bc)$ for all vectors $a, b, c$.
  4. The product is distributive over addition: $a(b+c) = ab + ac$.
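
These laws can be checked numerically. As one concrete (and by no means unique) representation of the Clifford algebra of $\mathbb{R}^3$, map each vector $a$ to the matrix $a^i \sigma_i$ built from the Pauli matrices; the matrix product then realizes the geometric product. A minimal sketch, assuming nothing beyond numpy:

```python
import numpy as np

# Pauli matrices: one concrete representation of e1, e2, e3 in the
# Clifford algebra of R^3 (a choice of example, not the only one).
sigma = np.array([
    [[0, 1], [1, 0]],        # e1
    [[0, -1j], [1j, 0]],     # e2
    [[1, 0], [0, -1]],       # e3
], dtype=complex)

def vec(a):
    """Map the vector a to the matrix a^i e_i; the matrix product
    then plays the role of the geometric product."""
    return np.einsum('i,ijk->jk', np.asarray(a, dtype=complex), sigma)

a = vec([1.0, 2.0, 3.0])
b = vec([3.0, 0.0, -1.0])    # orthogonal to a: 1*3 + 2*0 + 3*(-1) = 0

assert np.allclose(a @ b, -(b @ a))                 # rule 1: ab = -ba
assert np.allclose(a @ a, (1 + 4 + 9) * np.eye(2))  # rule 2: aa = |a|^2 = 14
# rules 3 and 4 are automatic: matrix multiplication is associative
# and distributes over addition
```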

From this definition, we can build up various objects that are not vectors but are produced from products of vectors under the geometric product.

With the geometric product in place, consider two vectors $a, b$, and write $b = b_\parallel + b_\perp$ for the parts of $b$ parallel and perpendicular to $a$, respectively. We can then write the product $ab$ as

$$ab = a b_\parallel + a b_\perp$$

The first term, $a b_\parallel$, is a scalar: $b_\parallel = \alpha a$ for some scalar $\alpha$, so $a b_\parallel = \alpha \, aa = \alpha |a|^2$, a scalar, by rule 2.

The second term cannot be reduced, but we know from rule 1 that it anticommutes: $a b_\perp = -b_\perp a$. This is just like the cross product.

Indeed, if you write out this product with components, you get the following:

$$ab = (a^1 b^1 + a^2 b^2 + a^3 b^3) + (a^1 b^2 - a^2 b^1) e_1 e_2 + \ldots = a \cdot b + \frac{1}{2} (a^i b^j - a^j b^i) e_i e_j$$

(summation implied). The latter term is called a bivector and is traditionally denoted $a \wedge b$.

You might have noticed now that we have at least three different kinds of objects: vectors, scalars, and bivectors. In clifford algebra, we number these objects by the number of vectors needed to form them, and we call this number the grade of the object. Scalars are grade-0, vectors grade-1, and bivectors grade-2. In 3d, you can also form a grade-3 object, a trivector. One choice might be $\epsilon = e_1 e_2 e_3$.

Now, what happens when you multiply a bivector with $\epsilon$?

First, the result must be a vector. Every bivector can be written as a linear combination of $e_1 e_2, e_2 e_3, e_3 e_1$, and multiplying each of these by $\epsilon$ yields a vector. For instance, $e_1 e_2 \epsilon = e_1 e_2 e_1 e_2 e_3 = -e_3$ (use rule 1 to anticommute distinct vectors past each other and rule 2 to annihilate repeated ones). The same holds for all the other terms.
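
In the Pauli-matrix sketch from earlier (again, just one possible representation), this is a short check: there $\epsilon = e_1 e_2 e_3$ comes out as $i$ times the identity, and multiplying each basis bivector by it does land on a negated basis vector.

```python
import numpy as np

# same representation as in the earlier sketch
e1 = np.array([[0, 1], [1, 0]], dtype=complex)
e2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
e3 = np.array([[1, 0], [0, -1]], dtype=complex)

eps = e1 @ e2 @ e3                      # the trivector e1 e2 e3 (= i * I here)
assert np.allclose(e1 @ e2 @ eps, -e3)  # e1 e2 * eps = -e3
assert np.allclose(e2 @ e3 @ eps, -e1)
assert np.allclose(e3 @ e1 @ eps, -e2)
```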

By convention, then, we can define a product

$$a \times b = -\epsilon (a \wedge b)$$

This coincides with the usual definition of the cross product. You can verify this term by term if you like; it's not that interesting to do algebraically, but geometrically, one comes to understand that multiplication by $\epsilon$ produces orthogonal complements of subspaces: a vector goes to its complementary plane, a plane to its normal vector, and so on. That is also why I called this 3-vector $\epsilon$: its components are those of the correct Levi-Civita tensor (not the symbol), which has different components in non-orthonormal coordinate systems. And this is exactly what is meant in the parlance of differential forms when one uses the Hodge star operator.
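
For those who do want the term-by-term check, here is one in the same hypothetical Pauli-matrix representation as above, comparing $-\epsilon\,(a \wedge b)$ against numpy's built-in cross product:

```python
import numpy as np

sigma = np.array([
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
], dtype=complex)
eps = sigma[0] @ sigma[1] @ sigma[2]   # the trivector e1 e2 e3

def vec(a):
    return np.einsum('i,ijk->jk', np.asarray(a, dtype=complex), sigma)

def wedge(a, b):
    A, B = vec(a), vec(b)
    return 0.5 * (A @ B - B @ A)       # grade-2 (antisymmetric) part of ab

def cross_from_wedge(a, b):
    C = -eps @ wedge(a, b)             # should be a vector, C = c^k e_k
    # recover components via tr(e_i e_j) = 2 delta_ij
    return np.array([0.5 * np.trace(C @ s) for s in sigma]).real

a = np.array([1.0, 2.0, 3.0])
b = np.array([-4.0, 0.5, 2.0])
assert np.allclose(cross_from_wedge(a, b), np.cross(a, b))
```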

Outside of 3d, the dual of a bivector is no longer a vector (in 4d, the dual of a bivector is another bivector, totally complementary to it), and so the cross product as we typically imagine it no longer makes sense.


To answer the first part of your question: the first equation you have is incorrect as written, for the following reason. The cross product $A \times B$ is a vector, independent of any basis. On the right-hand side, you have (in Einstein summation convention) the components of this cross product in a Cartesian basis. To set the equation right, you'll have to introduce the Cartesian basis vectors on the right-hand side, $$ A \times B = \epsilon_{ijk} A_j B_k \hat{e}_i $$ where $\hat{e}_i$ is a Cartesian basis vector. Clearly this expression is valid only in a Cartesian basis.
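
A quick numerical sanity check of the corrected formula (my own illustration, with the permutation symbol hard-coded):

```python
import numpy as np

# the Levi-Civita permutation symbol, valid in a Cartesian basis
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

A = np.array([1.0, -2.0, 0.5])
B = np.array([3.0, 4.0, -1.0])

# (A x B)_i = eps_ijk A_j B_k, with the Cartesian basis e_i implied
assert np.allclose(np.einsum('ijk,j,k->i', eps, A, B), np.cross(A, B))
```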


The Levi-Civita tensor is fine in any coordinate system, including curvilinear ones.

For any three coordinates $q^i$ ($i = 1,2,3$), the position vector can be uniquely represented as a function of these coordinates, $\boldsymbol{r}=\boldsymbol{r}(q^i)$. The basis ("tangent") vectors are then $\boldsymbol{r}_i \equiv \partial_i \boldsymbol{r}$ (in Leibniz's notation $\partial_i \equiv \frac{\partial}{\partial q^i}$, so $\boldsymbol{r}_i = \frac{\partial \boldsymbol{r}}{\partial q^i}$). The cobasis (dual basis, "cotangent" basis) vectors $\boldsymbol{r}^i$ can then be found using the fundamental property of the cobasis $\boldsymbol{E} = (\sum_i)\, \boldsymbol{r}^i \boldsymbol{r}_i = (\sum_i)\, \boldsymbol{r}_i \boldsymbol{r}^i \,\Leftrightarrow\: \boldsymbol{r}^i \cdot \boldsymbol{r}_j = \delta^i_j$, where $\boldsymbol{E}$ is the bivalent "unit" tensor, neutral with respect to the dot product (other names for the same object are the "metric" tensor and the "identity" tensor), and $\delta^i_j$ is Kronecker's delta.
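
As a concrete illustration (spherical coordinates are my choice of example here; any curvilinear $q^i$ would do), one can build the tangent basis by differentiating $\boldsymbol{r}(q^i)$ numerically and recover the cobasis from the duality condition:

```python
import numpy as np

def position(q):
    """r(q) for spherical coordinates q = (r, theta, phi)."""
    r, th, ph = q
    return np.array([r * np.sin(th) * np.cos(ph),
                     r * np.sin(th) * np.sin(ph),
                     r * np.cos(th)])

def tangent_basis(q, h=1e-6):
    """Rows are r_i = d r / d q^i, by central differences."""
    rows = []
    for i in range(3):
        dq = np.zeros(3); dq[i] = h
        rows.append((position(q + dq) - position(q - dq)) / (2 * h))
    return np.array(rows)

q = np.array([2.0, 0.7, 1.3])
r_lo = tangent_basis(q)          # basis vectors r_i (rows)
r_hi = np.linalg.inv(r_lo).T     # cobasis r^i, from r^i . r_j = delta^i_j
assert np.allclose(r_hi @ r_lo.T, np.eye(3), atol=1e-6)
```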

Here comes the trivalent Levi-Civita (“volumetric”, “trimetric”) tensor:

$${^3\!\boldsymbol{\epsilon}} = (\sum_{i,j,k})\, \boldsymbol{r}_i \times \boldsymbol{r}_j \cdot \boldsymbol{r}_k \; \boldsymbol{r}^i \boldsymbol{r}^j \boldsymbol{r}^k = (\sum_{i,j,k})\, \boldsymbol{r}^i \times \boldsymbol{r}^j \cdot \boldsymbol{r}^k \; \boldsymbol{r}_i \boldsymbol{r}_j \boldsymbol{r}_k$$

with its components $\boldsymbol{r}_i \times \boldsymbol{r}_j \cdot \boldsymbol{r}_k \equiv \epsilon_{ijk}$ or $\boldsymbol{r}^i \times \boldsymbol{r}^j \cdot \boldsymbol{r}^k \equiv \epsilon^{ijk}$.
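
Continuing the spherical-coordinate sketch above (reusing `r_lo` and `q`), these components are scalar triple products, and they come out equal to $\sqrt{\det g}$ times the permutation symbol, as one expects of the tensor rather than the symbol:

```python
# permutation symbol, for comparison
perm = np.zeros((3, 3, 3))
perm[0, 1, 2] = perm[1, 2, 0] = perm[2, 0, 1] = 1.0
perm[0, 2, 1] = perm[2, 1, 0] = perm[1, 0, 2] = -1.0

# eps_ijk = r_i x r_j . r_k, computed from the Cartesian components
eps_lo = np.einsum('abc,ia,jb,kc->ijk', perm, r_lo, r_lo, r_lo)

g = r_lo @ r_lo.T                # metric components g_ij = r_i . r_j
assert np.allclose(eps_lo, np.sqrt(np.linalg.det(g)) * perm, atol=1e-4)
```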

Then for any two vectors $\boldsymbol{a} = (\sum_i)\, a_{i} \boldsymbol{r}^i = (\sum_i)\, a^{i} \boldsymbol{r}_i$ and $\boldsymbol{b} = (\sum_i)\, b_{i} \boldsymbol{r}^i = (\sum_i)\, b^{i} \boldsymbol{r}_i$

$$\boldsymbol{a} \times \boldsymbol{b} = (\sum_i)\, a_i \boldsymbol{r}^i \times (\sum_j)\, b_j \boldsymbol{r}^j = (\sum_{i,j})\, a_i b_j \: \boldsymbol{r}^i \times \boldsymbol{r}^j,$$

where the cross product of cobasis vectors (as well as the cross product of basis vectors, if you prefer to use the other decomposition with the second set of components) comes from the definition of the components of the Levi-Civita tensor:

$$\boldsymbol{r}^i \times \boldsymbol{r}^j \cdot \boldsymbol{r}^k = \epsilon^{ijk} \,\Leftrightarrow\: (\sum_k)\, \boldsymbol{r}^i \times \boldsymbol{r}^j \cdot \boldsymbol{r}^k \boldsymbol{r}_k = (\sum_k)\, \epsilon^{ijk} \boldsymbol{r}_k \,\Leftrightarrow\: \boldsymbol{r}^i \times \boldsymbol{r}^j \cdot \boldsymbol{E} = (\sum_k)\, \epsilon^{ijk} \boldsymbol{r}_k$$ and finally $\boldsymbol{r}^i \times \boldsymbol{r}^j = (\sum_k)\, \epsilon^{ijk} \boldsymbol{r}_k$.

Thus

$$\boldsymbol{a} \times \boldsymbol{b} = (\sum_{i,j,k})\, a_i b_j \, \epsilon^{ijk} \, \boldsymbol{r}_k = \boldsymbol{b} \boldsymbol{a} \cdot \! \cdot \, {^3\!\boldsymbol{\epsilon}} = - \boldsymbol{a} \boldsymbol{b} \cdot \! \cdot \, {^3\!\boldsymbol{\epsilon}}$$
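
An end-to-end numerical check of this last formula, continuing the same spherical-coordinate sketch (reusing `r_lo`, `r_hi`, and `perm` from the snippets above):

```python
a = np.array([0.3, -1.0, 2.0])   # Cartesian components of a
b = np.array([1.5, 0.2, -0.7])

a_lo = r_lo @ a                  # covariant components a_i = a . r_i
b_lo = r_lo @ b

# eps^{ijk} = r^i x r^j . r^k, from the Cartesian components of the cobasis
eps_hi = np.einsum('abc,ia,jb,kc->ijk', perm, r_hi, r_hi, r_hi)

lhs = np.cross(a, b)
rhs = np.einsum('i,j,ijk,kx->x', a_lo, b_lo, eps_hi, r_lo)
assert np.allclose(lhs, rhs, atol=1e-4)
```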