Calculating the inverse of a multivector
I'm not sure where you got the idea that inverses should involve duality; usually this is done merely through reversion. Let $B^\dagger$ denote the reverse of $B$. Then the inverse is
$$B^{-1} = \frac{B^\dagger}{B B^\dagger}$$
For a bivector, $B^\dagger = -B$. I believe this works for any object that can be written as a geometric product of vectors (i.e. one that can be factored, which is why it works for rotors and spinors), but don't quote me on that. Of course, in mixed-signature spaces anything that has a null factor is not invertible.
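A quick sanity check, assuming a Euclidean signature so the vector squares are positive: for a product of two vectors $B = uv$,
$$B^\dagger = vu, \qquad B B^\dagger = uv\,vu = u^2 v^2, \qquad B^{-1} = \frac{vu}{u^2 v^2},$$
and indeed $B B^{-1} = uv\,vu/(u^2 v^2) = 1$. The same telescoping cancellation is what makes the formula work for longer products of invertible vectors.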
An algorithm to calculate the inverse of a general multivector:
Start with an invertible general multivector (X) of Clifford's geometric algebra over a space of orthogonal unit vectors. Post-multiply repeatedly by a "suitable" Clifford number until a scalar (S) is reached; let the product of the post-multipliers be (R).
Then we have (X)(R) = (S)
Pre-multiply both sides by the required inverse (I) and divide by the scalar (S) to get:
(R)/(S) = (I), which was to be determined.
For a "suitable" general multivector or Clifford number we try the Reverse or the Clifford conjugate. I notice that (X)(Xrev), for instance, results in only grades invariant to reversion; perhaps this would have been obvious beforehand to a mathematician. This elementary process works up to dimension 5, but fails at dimension 6. I have since seen 2 or 3 papers on the web which seem to agree with this result - but no one comments on it. Above dimension 5 it seems something more sophisticated is needed.
An example in dimension 5: (A)(Arev) = (B) gives grades B0 + B1 + B4 + B5.
B0 is the scalar part and B1 the vector part; B4 and B5 are the 4-vector and pseudo-scalar parts.
In dimension 5 the pseudo-scalar commutes with all vectors and squares to +1; as a result we can use duality to rearrange (B) as a paravector with coefficients in the duplex numbers (also known as hyperbolic, perplex or Study numbers), that is, as D0 + D1.
Multiply by D0 - D1 to reach a duplex number, which is readily reduced to a scalar by multiplying by its duplex conjugate (negate the pseudo-scalar part).
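Here is a rough Python sketch of that whole ladder in Cl(5,0) (Euclidean signature assumed; the bitmask blade encoding and the helper names `gp`, `reverse`, `negate_grades`, `inverse_dim5` are my own conventions for illustration, not anything standard):

```python
# Multivectors are dicts {bitmask: coefficient}; bit i set means the blade contains e_{i+1}.
import random

def reorder_sign(a, b):
    """Sign picked up when merging the basis vectors of blades a and b into canonical order."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def gp(x, y):
    """Geometric product over an orthonormal Euclidean basis (every e_i squares to +1)."""
    out = {}
    for a, ca in x.items():
        for b, cb in y.items():
            out[a ^ b] = out.get(a ^ b, 0.0) + reorder_sign(a, b) * ca * cb
    return out

def grade(blade):
    return bin(blade).count("1")

def reverse(x):
    return {b: c * (-1.0) ** (grade(b) * (grade(b) - 1) // 2) for b, c in x.items()}

def negate_grades(x, which):
    return {b: (-c if grade(b) in which else c) for b, c in x.items()}

def inverse_dim5(x):
    """The ladder described above: X (Xrev C M) = S, so X^{-1} = Xrev C M / S."""
    b = gp(x, reverse(x))         # only grades 0, 1, 4, 5 survive (up to rounding)
    c = negate_grades(b, {1, 4})  # the 'D0 - D1' step
    n = gp(b, c)                  # a duplex number: grades 0 and 5 only
    m = negate_grades(n, {5})     # duplex (hyperbolic) conjugate
    s = gp(n, m)[0]               # the scalar S
    r = gp(gp(reverse(x), c), m)  # the accumulated post-multipliers R
    return {blade: coeff / s for blade, coeff in r.items()}

# Quick check on a random multivector of Cl(5,0).
random.seed(0)
x = {blade: random.uniform(-1, 1) for blade in range(2 ** 5)}
prod = gp(x, inverse_dim5(x))
err = max(abs(c - (1.0 if blade == 0 else 0.0)) for blade, c in prod.items())
print("max deviation of X * X^{-1} from 1:", err)  # ~1e-15 for a generic invertible X
```

The intermediate products track the description above: `b` carries only the reversion-invariant grades 0, 1, 4, 5, and `n` only grades 0 and 5, so the last step is just the duplex norm.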
For dimension 6 and above I found the following "in principle" process, though it doesn't look to me like an efficient one:
In dimension 6 (and above) arrange (X) as A + Bn, where (A) and (B) are in dimension 5 and n is one of the unit vectors, e6 for instance. Post-multiply by C + Dn so as to remove e6 from the result. This can be done by something looking rather like a projection operator, as discussed by Bouma. Repeat the process to step down the dimensions. I don't see why this shouldn't be extended to as high a dimension as required.
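For what it's worth, the bookkeeping for that descent step (assuming $n = e_6$ is a unit vector with $n^2 = +1$ that anticommutes with $e_1, \dots, e_5$, and writing $\hat{M}$ for the grade involution of a dimension-5 element $M$, so that $nM = \hat{M}n$) is
$$(A + Bn)(C + Dn) = \big(AC + B\hat{D}\big) + \big(AD + B\hat{C}\big)n,$$
so removing $e_6$ amounts to choosing $C$ and $D$ with $AD + B\hat{C} = 0$, leaving a dimension-5 element to invert by the method above.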
Given a multivector $a$ that you want to invert, the function $x \mapsto a x$ is a linear transformation (of the algebra viewed as a $2^n$-dimensional vector space), right?
So, an uninspired but reliable way to find $a^{-1}$ would be to express $a$ as a $2^n$ by $2^n$ matrix $A$ (that is, the matrix $A$ whose $2^n$ columns are $a$ times each of the $2^n$ basis multivectors), and solve the linear equation $A x = 1$; then the solution $x$ is the desired $a^{-1}$. If there is no solution, then $A$ doesn't have an inverse, which means $a$ doesn't have an inverse.
Note that this always gives the answer if there is one, even in cases where the $a^\dagger/(a a^\dagger)$ method fails due to $a a^\dagger$ not being a scalar (where $a^\dagger$ denotes the reverse of $a$). For example, if $a = 2 + e_1$, then $a^{-1} = (2 - e_1)/3$.
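For concreteness, here is a rough numpy sketch of that construction for Cl(n,0) with an orthonormal (Euclidean) basis; the bitmask blade encoding and the helper names `reorder_sign` and `left_mult_matrix` are mine, chosen for illustration:

```python
import numpy as np

def reorder_sign(a, b):
    """Sign picked up when merging basis blades a and b (encoded as bitmasks)."""
    a >>= 1
    swaps = 0
    while a:
        swaps += bin(a & b).count("1")
        a >>= 1
    return -1.0 if swaps & 1 else 1.0

def left_mult_matrix(a_coeffs, n):
    """The 2^n x 2^n matrix of x -> a x in Cl(n,0); a_coeffs[blade] is a's coefficient on that blade."""
    dim = 2 ** n
    A = np.zeros((dim, dim))
    for blade_a in range(dim):
        if a_coeffs[blade_a] == 0.0:
            continue
        for blade_x in range(dim):
            # blade_a times the basis blade blade_x lands on blade_a ^ blade_x
            A[blade_a ^ blade_x, blade_x] += reorder_sign(blade_a, blade_x) * a_coeffs[blade_a]
    return A

# The example above: a = 2 + e1, here placed in Cl(3,0).
n = 3
a = np.zeros(2 ** n)
a[0b000] = 2.0  # scalar part
a[0b001] = 1.0  # e1 part

A = left_mult_matrix(a, n)
one = np.zeros(2 ** n)
one[0] = 1.0                 # the multivector 1
x = np.linalg.solve(A, one)  # raises LinAlgError if A is singular, i.e. a has no inverse
print(x[0b000], x[0b001])    # ~0.667 and ~-0.333, i.e. a^{-1} = (2 - e1)/3
```

This is $O(4^n)$ in both memory and time, so it is only the "uninspired but reliable" fallback, but it makes no assumptions about $a$ being factorable.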