Cross product in higher dimensions
Yes. It works just as in dimension $3$: if your vectors are $(t_1,t_2,t_3,t_4)$, $(u_1,u_2,u_3,u_4)$, and $(v_1,v_2,v_3,v_4)$, compute the formal determinant$$\begin{vmatrix}t_1&t_2&t_3&t_4\\u_1&u_2&u_3&u_4\\v_1&v_2&v_3&v_4\\e_1&e_2&e_3&e_4\end{vmatrix},$$ where $(e_1,e_2,e_3,e_4)$ is the canonical basis of $\mathbb{R}^4$. This determinant is the vector $(\alpha_1,\alpha_2,\alpha_3,\alpha_4)$ with\begin{align*}\alpha_1&=t_4u_3v_2-t_3u_4v_2-t_4u_2v_3+t_2u_4v_3+t_3u_2v_4-t_2u_3v_4\\\alpha_2&=-t_4u_3v_1+t_3u_4v_1+t_4u_1v_3-t_1u_4v_3-t_3u_1v_4+t_1u_3v_4\\\alpha_3&=t_4u_2v_1-t_2u_4v_1-t_4u_1v_2+t_1u_4v_2+t_2u_1v_4-t_1u_2v_4\\\alpha_4&=-t_3u_2v_1+t_2u_3v_1+t_3u_1v_2-t_1u_3v_2-t_2u_1v_3+t_1u_2v_3\end{align*}It is a vector orthogonal to the three given vectors.
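If you want to check this mechanically, here is a minimal sketch (my own addition, not part of the original computation) using the sympy library: it expands the formal determinant symbolically and reads off the coefficients of $e_1,\dots,e_4$, which reproduce the $\alpha_j$ above.

```python
# Sketch: expand the formal 4x4 determinant with e_1,...,e_4 in the bottom row
# and read off the coefficient of each e_j (these are the alpha_j above).
import sympy as sp

t = sp.symbols('t1:5')   # t_1, ..., t_4
u = sp.symbols('u1:5')
v = sp.symbols('v1:5')
e = sp.symbols('e1:5')   # formal stand-ins for the basis vectors

M = sp.Matrix([t, u, v, e])
det = sp.expand(M.det())

# Each e_j occurs linearly, so its coefficient is exactly alpha_j.
alpha = [det.coeff(e[j]) for j in range(4)]
for j, a in enumerate(alpha, start=1):
    print(f"alpha_{j} =", a)
```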
I followed a suggestion from the comments on this answer: to put the basis vectors $e_1$, $e_2$, $e_3$, and $e_4$ in the bottom row. It makes no difference in odd dimension, but it produces the natural sign in even dimension.
Following another suggestion, I would like to add this remark:$$\alpha_1=-\begin{vmatrix}t_2&t_3&t_4\\u_2&u_3&u_4\\v_2&v_3&v_4\end{vmatrix}\text{, }\alpha_2=\begin{vmatrix}t_1&t_3&t_4\\u_1&u_3&u_4\\v_1&v_3&v_4\end{vmatrix}\text{, }\alpha_3=-\begin{vmatrix}t_1&t_2&t_4\\u_1&u_2&u_4\\v_1&v_2&v_4\end{vmatrix}\text{ and }\alpha_4=\begin{vmatrix}t_1&t_2&t_3\\u_1&u_2&u_3\\v_1&v_2&v_3\end{vmatrix}.$$
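As a quick numerical sanity check (again my own sketch, using numpy and arbitrary example vectors), one can build the vector from these four signed $3\times 3$ minors and verify that it is orthogonal to $t$, $u$, and $v$:

```python
# Sketch: compute alpha_1, ..., alpha_4 as the signed 3x3 minors above and check
# that the resulting vector is orthogonal to t, u and v.
import numpy as np

t = np.array([1.0,  2.0, 0.0, -1.0])   # arbitrary example vectors
u = np.array([0.0,  1.0, 3.0,  2.0])
v = np.array([2.0, -1.0, 1.0,  1.0])

M = np.vstack([t, u, v])               # 3 x 4 matrix with rows t, u, v
# Signs alternate -, +, -, + as in the remark above (delete column j for alpha_{j+1}).
p = np.array([(-1) ** (j + 1) * np.linalg.det(np.delete(M, j, axis=1))
              for j in range(4)])

print(p)
print(np.dot(p, t), np.dot(p, u), np.dot(p, v))   # all (numerically) zero
```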
My answer is in addition to José's and Antinous's answers but is somewhat more abstract: their answers use coordinates, whereas I try to do it coordinate-free.
What you are looking for is the wedge or exterior product. The exterior power $\bigwedge^k(V)$ of a vector space $V$ is the quotient of the tensor power $\bigotimes^k(V)$ by the relations generated by $v\otimes v$. To be somewhat more concrete and less abstract, this just means that for any vector $v\in V$ the wedge product satisfies $v\wedge v=0\in\bigwedge^2(V)$. Whenever you wedge vectors together, the result is zero if the factors are linearly dependent (in particular, if two of them are equal). Think of what happens to the cross product in $\mathbb{R}^3$ when you cross a vector with itself.
In fact, if $e_1,e_2,\ldots,e_n$ is a basis of an inner product space $V$, then the products $e_{i_1}\wedge e_{i_2}\wedge \ldots \wedge e_{i_k}$ with $1\leq i_1 < i_2 < \ldots < i_k\leq n$ form a basis of $\bigwedge^k(V)$.
If $V=\mathbb{R}^3$, then $v \wedge w$ equals $v \times w$ up to signs of the entries. This may seem a bit obscure because, technically, $v\wedge w$ is an element of $\bigwedge^2(\mathbb{R}^3)$; however, that vector space is isomorphic to $\mathbb{R}^3$. In fact, given an orientation on the vector space, such an isomorphism exists for all exterior powers: it is called the Hodge star operator, an isomorphism $\star\colon\bigwedge^{n-k}(V)\to\bigwedge^{k}(V)$. It acts on an $(n-k)$-wedge $\beta$ via the relation $$ \alpha \wedge \beta = \langle \alpha,\star\beta \rangle \,\omega, $$ where $\alpha\in\bigwedge^{k}(V)$, $\omega\in\bigwedge^n(V)$ is an orientation form on $V$, and $\langle \cdot,\cdot \rangle$ is the induced inner product on $\bigwedge^{k}(V)$ (see the Wikipedia page, which defines the relation the other way around).
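As a concrete check (my own addition, taking the standard orientation $\omega=e_1\wedge e_2\wedge e_3$ on $\mathbb{R}^3$), the defining relation gives $$\star(e_2\wedge e_3)=e_1,\qquad \star(e_3\wedge e_1)=e_2,\qquad \star(e_1\wedge e_2)=e_3,$$ and therefore $$\star(v\wedge w)=\star\bigl((v_2w_3-v_3w_2)\,e_2\wedge e_3+(v_3w_1-v_1w_3)\,e_3\wedge e_1+(v_1w_2-v_2w_1)\,e_1\wedge e_2\bigr)=v\times w.$$ (In the ordered basis $e_1\wedge e_2,\ e_1\wedge e_3,\ e_2\wedge e_3$ of $\bigwedge^2(\mathbb{R}^3)$, the middle entry of $v\wedge w$ is $v_1w_3-v_3w_1=-(v\times w)_2$; that is the sign discrepancy mentioned above, and applying $\star$ removes it.)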
How does all this answer your question? Well, let us take $k=1$ and $V=\mathbb{R}^n$. Then the Hodge star isomorphism identifies the spaces $\bigwedge^{n-1}(\mathbb{R}^n)$ and $\bigwedge^{1}(\mathbb{R}^n)=\mathbb{R}^n$. This is good because you originally wanted to say something about orthogonality between a set of $n-1$ linearly independent vectors $v_1,v_2,\ldots,v_{n-1}$ and their "cross product". Now let us do exactly that and set $\beta :=v_1 \wedge v_2 \wedge \ldots \wedge v_{n-1}\in\bigwedge^{n-1}(\mathbb{R}^n)$. Then the image $\star\beta = \star(v_1 \wedge v_2 \wedge \ldots \wedge v_{n-1})$ is an ordinary vector in $\mathbb{R}^n$, and the defining condition above implies, for $\alpha=v_i\in\mathbb{R}^n=\bigwedge^{1}(\mathbb{R}^n)$, $$ v_i \wedge (v_1 \wedge v_2 \wedge \ldots \wedge v_{n-1}) = \alpha \wedge \beta = \langle \alpha,\star\beta \rangle \,\omega = \langle v_i,\star\beta \rangle \,\omega. $$ However, the left-hand side equals zero for $i=1,2,\ldots,n-1$, so the vector $\star\beta$ is orthogonal to all the vectors $v_1,v_2,\ldots,v_{n-1}$, which is what you asked for. So you might want to define the cross product of $n-1$ vectors as $v_1 \times v_2 \times \ldots \times v_{n-1} := \star(v_1 \wedge v_2 \wedge \ldots \wedge v_{n-1})$.
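Unwinding this definition in coordinates (my own remark, using the standard basis of $\mathbb{R}^n$ and $\omega=e_1\wedge\cdots\wedge e_n$): the $i$-th component of $\star\beta$ is $\langle e_i,\star\beta\rangle$, so the defining relation gives $$ (\star\beta)_i\,\omega = e_i\wedge v_1\wedge\cdots\wedge v_{n-1} = \det\begin{pmatrix} e_i & v_1 & \cdots & v_{n-1}\end{pmatrix}\omega = (-1)^{i+1}\det\bigl(V_{\hat\imath}\bigr)\,\omega, $$ where the vectors are written as columns and $V_{\hat\imath}$ denotes the $(n-1)\times(n-1)$ matrix of $v_1,\dots,v_{n-1}$ with their $i$-th entries removed. These are exactly the signed minors appearing in the other answers; with this convention the basis vectors occupy the first column, and moving them to the last position changes the sign when $n$ is even, as remarked in the first answer.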
Maybe keep in mind that the other two answers implicitly use the Hodge star operation (and also a choice of basis) to compute the "cross product in higher dimensions" through a formal determinant, which encodes the wedge product used here.
You can work out the cross product $p$ in $n$ dimensions using the following:
$$p=\det\left(\begin{array}{lllll}e_1&x_1&y_1&\cdots&z_1\\e_2&x_2&y_2&\cdots&z_2\\\vdots&\vdots&\vdots&\ddots&\vdots\\e_n&x_n&y_n&\cdots&z_n\end{array}\right),$$ where $\det$ is the formal determinant of the matrix, the $e_i$ are the basis vectors (e.g. $\hat{i},\hat{j},\hat{k}$, etc.), and $x,y,\ldots,z$ are the $n-1$ vectors you wish to "cross".
You will find that $x\cdot p=y\cdot p=\cdots=z\cdot p=0$.
It's wonderful that the determinant produces a vector with this property.
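Here is a minimal sketch (my own illustration, using numpy; the function name and example vectors are just for the demonstration) that assembles $p$ from the signed minors of this formal determinant and checks the orthogonality numerically:

```python
# Sketch: n-dimensional "cross product" of n-1 vectors via the formal determinant,
# expanded along the column of basis vectors e_1, ..., e_n.
import numpy as np

def cross(*vectors):
    """Return a vector p in R^n orthogonal to the given n-1 vectors."""
    M = np.array(vectors)          # (n-1) x n matrix, one input vector per row
    n = M.shape[1]
    assert M.shape[0] == n - 1
    # Coefficient of e_i: cofactor sign (-1)^(i+1) for 1-based i, i.e. (-1)**i for
    # the 0-based index used here, times the minor that drops the i-th components.
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(n)])

x = np.array([1.0,  0.0, 2.0, -1.0, 3.0])
y = np.array([0.0,  1.0, 1.0,  4.0, 0.0])
z = np.array([2.0,  2.0, 0.0,  1.0, 1.0])
w = np.array([1.0, -1.0, 1.0,  0.0, 2.0])

p = cross(x, y, z, w)
print(np.dot(x, p), np.dot(y, p), np.dot(z, p), np.dot(w, p))  # all ~0
```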