When do 3D random walks return to their origin?
For a fairly robust intuitive argument, think of a random walk in $\mathbb{R}^d$ as the "product" of $d$ one-dimensional walks in $\mathbb{R}^1$. For a (finite variance) random walk in $\mathbb{R}^1$, the probability the random walk is within $O(1)$ of the origin after $n$ steps scales like $n^{-1/2}$. If the $d$-dimensional random walk were to literally just be the independent product of $d$ one-dimensional walks, this would mean that in $\mathbb{R}^d$ the probability the random walk is near the origin after $n$ steps would be about $n^{-d/2}$, and indeed, this answer is correct. Roughly speaking, then, the reason random walk changes behavior between $d=2$ and $d=3$ is that this is when $\sum_n n^{-d/2}$ switches from divergent to convergent.
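This heuristic is easy to check numerically. A minimal sketch (using the exactly solvable "product" walk whose $d$ coordinates step independently, so its return probability at time $2n$ factors as $p_1(2n)^d$; the nearest-neighbour walk has the same asymptotics up to constants):

```python
# Sketch: for the walk on Z^d whose d coordinates each take an independent
# +-1 step at every time, the probability of being at the origin after 2n
# steps is p1(2n)^d, where p1(2n) = C(2n, n) / 4^n ~ 1/sqrt(pi*n).
# This illustrates the n^{-d/2} heuristic from the text.
from math import comb, pi

def p1(n):
    """P(1-d simple walk is at 0 after 2n steps) = C(2n, n) / 4^n."""
    return comb(2 * n, n) / 4 ** n

n = 500
for d in (1, 2, 3):
    exact = p1(n) ** d                # product-walk return probability
    approx = (pi * n) ** (-d / 2)     # the n^{-d/2} asymptotic
    print(d, exact, approx, exact / approx)
```

At $n = 500$ the exact value and the $n^{-d/2}$ asymptotic already agree to within a fraction of a percent for each $d$, and $\sum_n n^{-d/2}$ converges exactly when $d > 2$.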
This intuition suggests that if your walk is "truly" at least $(2+\epsilon)$-dimensional for some $\epsilon > 0$, then it should be transient (if you're willing to accept this intuition of $n^{-d/2}$ behavior for fractional $d$). Terry Lyons has derived a necessary and sufficient condition for the transience of a reversible Markov chain which I think formalizes and extends this intuition. He in particular uses it to prove a necessary and sufficient condition for the transience of simple random walk on "wedges" in $\mathbb{Z}^d$. Specializing his result even further, he mentions that, letting $\Omega$ be the subgraph of $\mathbb{Z}^3$ with $$ \Omega=\{(x,y,z) \in \mathbb{Z}^3, y \leq x, x \leq (\log(z+1))^{\alpha}\} $$ then the simple random walk on $\Omega$ is transient whenever $\alpha > 1$. (The same would be true for any finite variance random walk constrained to lie in $\Omega$, though I'm not sure Terry Lyons' theorem will prove this in full generality.) The graph $\Omega$ is just a very slight "fattening" of part of $\mathbb{Z}^2$, and the walk is already transient. In a sense, random walks in $\mathbb{Z}^2$ only "just" fail to be transient, and if you go above $\mathbb{Z}^2$ in any way you will immediately be transient.
It is always transient in dimensions 3 and higher - see Theorem T1 in Section 8 of Spitzer's classical book "Principles of random walk" (2nd ed.).
Are you talking about fixed biases?
If the bias is not a fixed value like the matrices in $n$ dimensions I described above, you could have the probabilities be a function of the walker's location in the $\mathbb{Z}^n$ lattice, or a function of the current position's distance from the origin. In that case you end up simulating physical scenarios, such as the motion of a charged particle in an electric field, or gravitational attraction. If you used an inverse-square (of the distance) law, simulating gravity, you'd end up with a lunar-lander or satellite-orbit type of simulation.
(original answer below, valid for an unbiased random walker, or for a walker with a fixed-value bias that is not a function of the walker's position)
Joseph, the envelope (the furthest reachable limit) of an unbiased random walk on an $n$-dimensional lattice at time step $t$ is the region containing the origin where $|d_1| + |d_2| + \cdots + |d_n| \le t$. So for $n=1$, that region is the line segment $-t \le x \le +t$, equivalently $|x| \le t$.
For $n=2$, the envelope region is the diamond-shaped area $|x| + |y| \le t$, i.e. the region bounded by the four lines $x+y=t$, $x+y=-t$, $x-y=t$, $x-y=-t$, or equivalently the four lines $y=x+t$, $y=x-t$, $y=-x+t$, $y=-x-t$.
For $n=3$, the envelope region is the octahedron in the $\mathbb{Z}^3$ lattice contained within $|x|+|y|+|z| \le t$.
The probability distribution of the unbiased random walk in $n$ dimensions approaches an $n$-dimensional Gaussian.
So for $n=1$, the envelope grows linearly in $t$; for $n=2$, its area grows proportionally to $t^2$; and in general its volume grows proportionally to $t^n$ in $n$ dimensions. Once $n \gt 2$, the growth of the envelope rapidly outpaces the typical distance traveled (which is only on the order of $\sqrt{t}$), and it becomes very unlikely that the unbiased random walker will return to the origin. That's how I've understood it to be. A reference off the top of my head would be Toffoli and Margolus's book Cellular Automata Machines (1987), as it gives a good description of cellular automata models of diffusion in $1$ and $2$ dimensions, and I believe in $3$ dimensions also, though I am not certain. I believe that's where I remember reading about the "envelope"; I also remember running my own programmed simulations to draw the envelope and probability distributions in 1-d and 2-d.
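A quick Monte Carlo experiment makes the same point (a sketch; the trial count and horizon $T$ are arbitrary illustrative choices): estimate, for each dimension, the fraction of simple random walks that revisit the origin within $T$ steps. In $d = 1, 2$ that fraction creeps toward $1$ as $T$ grows, while in $d = 3$ it stalls well below $1$ (the true $d = 3$ return probability is about $0.34$).

```python
# Sketch: estimate the fraction of simple random walks on Z^d that
# revisit the origin within T steps, for d = 1, 2, 3.
import random

def returned(d, T, rng):
    """True if a simple walk on Z^d revisits the origin within T steps."""
    pos = [0] * d
    for _ in range(T):
        axis = rng.randrange(d)          # pick a coordinate uniformly
        pos[axis] += rng.choice((-1, 1)) # step +-1 along it
        if not any(pos):                 # back at the origin?
            return True
    return False

rng = random.Random(0)      # fixed seed so the sketch is reproducible
trials, T = 1000, 1000
fracs = {}
for d in (1, 2, 3):
    fracs[d] = sum(returned(d, T, rng) for _ in range(trials)) / trials
    print(d, fracs[d])
```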
In $1$ dimension, the probability distribution at time step $t$ is the $t$-fold convolution of $[0.5, 0.0, 0.5]$ with itself, and the envelope is the region where this resultant convolution is non-zero. It's also equivalent to the binomial expansion: dropping the interleaved zeros, the distributions at even times are every other row of Pascal's triangle, divided by the sum of the elements of that row:
1
1 2 1 divided by 4
1 4 6 4 1 divided by 16
1 6 15 20 15 6 1 divided by 64
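That recipe can be checked directly (a sketch; the `convolve` helper is written out by hand here rather than taken from a library, to keep it self-contained):

```python
# Sketch: build the 1-d walk's distribution by repeatedly convolving the
# step kernel [0.5, 0, 0.5] with itself, and check that the non-zero
# entries at even times match the listed Pascal rows.
def convolve(a, b):
    """Plain 1-d discrete convolution of two lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

kernel = [0.5, 0.0, 0.5]
dist = [1.0]                      # point mass at the origin, t = 0
for t in range(1, 5):
    dist = convolve(dist, kernel)
    if t % 2 == 0:
        nonzero = [p for p in dist if p > 0]
        print(t, nonzero)         # t=2: 1 2 1 over 4; t=4: 1 4 6 4 1 over 16
```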
And the $2$-dimensional version is the repeated $2$-d convolution of the matrix $M_2$
0 1 0
1 0 1
0 1 0
with itself $t$ times (center the matrix at the origin as an image matrix, divide it by 4, and do one $2$-dimensional convolution with $M_2 \div 4$ per time step to see the probability distribution evolving).
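A sketch of that $2$-d recipe (again with a hand-written `convolve2d` helper for self-containment):

```python
# Sketch: start from a point mass at the origin and convolve with M_2 / 4
# once per time step.  After t steps the support is exactly the diamond
# |x| + |y| <= t (on sites of the right parity), and the probability at
# the origin decays like 1/t.
def convolve2d(a, k):
    """Plain 2-d discrete convolution of grid a with kernel k."""
    n, m = len(a), len(a[0])
    kn, km = len(k), len(k[0])
    out = [[0.0] * (m + km - 1) for _ in range(n + kn - 1)]
    for i in range(n):
        for j in range(m):
            if a[i][j]:
                for u in range(kn):
                    for v in range(km):
                        out[i + u][j + v] += a[i][j] * k[u][v]
    return out

M2 = [[0, 0.25, 0], [0.25, 0, 0.25], [0, 0.25, 0]]   # M_2 divided by 4
dist = [[1.0]]                                        # point mass at origin
for t in range(1, 11):
    dist = convolve2d(dist, M2)
center = dist[10][10]          # probability of being at the origin, t = 10
print(center)
```

For the $2$-d simple walk the origin probability at time $2n$ is $\left(\binom{2n}{n}/4^n\right)^2$, which the convolution reproduces.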
Similarly, in $3$-d, the kernel $M_3$ is the three-dimensional $3 \times 3 \times 3$ array consisting of the three $2$-d slices $M_a, M_b, M_c$, which I'll type out
$M_a$=
0 0 0
0 1 0
0 0 0
$M_c=M_a$
$M_b=$
0 1 0
1 0 1
0 1 0
And the $3$-d probability distribution at time step $t$ is the $t$-fold convolution of $M_3 \div 6$ with itself.
If you try a few steps of the $3$-d convolution, you'll see that the probability density at the center quickly goes to zero: it decays like $t^{-3/2}$, fast enough that the expected number of returns to the origin is finite. Once the number of dimensions is greater than $2$, the unbiased random walker is more likely to move further away in the dimensions where it's closer to the origin than to get closer in the dimensions where it's already further away.
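A sketch of the $3$-d recipe, watching the probability at the origin (the `convolve3d` helper is hand-written for self-containment; the kernel entries are the $1/6$ values described above):

```python
# Sketch: convolve a point mass with M_3 / 6 repeatedly and track the
# probability at the origin.  At even times it decreases steadily
# (asymptotically like t^{-3/2}), so the expected number of visits to
# the origin is finite, which is why the 3-d walk is transient.
def convolve3d(a, k):
    """3-d convolution of an n x n x n cube a with a 3 x 3 x 3 kernel k."""
    n = len(a)
    out = [[[0.0] * (n + 2) for _ in range(n + 2)] for _ in range(n + 2)]
    for i in range(n):
        for j in range(n):
            for l in range(n):
                p = a[i][j][l]
                if p:
                    for u in range(3):
                        for v in range(3):
                            for w in range(3):
                                out[i + u][j + v][l + w] += p * k[u][v][w]
    return out

s = 1 / 6
M3 = [
    [[0, 0, 0], [0, s, 0], [0, 0, 0]],   # M_a / 6
    [[0, s, 0], [s, 0, s], [0, s, 0]],   # M_b / 6
    [[0, 0, 0], [0, s, 0], [0, 0, 0]],   # M_c / 6
]
dist = [[[1.0]]]                         # point mass at the origin
centers = {}
for t in range(1, 11):
    dist = convolve3d(dist, M3)
    if t % 2 == 0:
        centers[t] = dist[t][t][t]       # probability of being back at 0
print(centers)
```

The $t = 2$ value is exactly $1/6$ (six of the $36$ two-step paths return), and the even-time values keep shrinking.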