Exponential decay of voltage potential difference
This is not an answer, but too long for a comment.
Consider a doubly infinite matrix $L = (q_{ij})_{i,j \in \mathbb{Z}}$ with entries $q_{ij} = -e^{-|i - j|}$ when $i \ne j$, and $q_{ii} = \sum_{j \ne i} e^{-|i-j|} = 2 / (e - 1)$. The symbol of this matrix (i.e. the Fourier series whose $j$-th coefficient is $q_{0j}$: $-e^{-|j|}$ for $j \ne 0$, and $2/(e-1)$ for $j = 0$) is: $$ \psi(x) = \frac{e + 1}{e - 1} - \frac{e^2 - 1}{e^2 - 2 e \cos x + 1} . $$ The symbol of $L^\dagger$ is thus $1 / \psi(x)$ (in the principal value sense), which has a singularity of type $1 / x^2$ at $x = 0$. It follows that in this case $$ a_{kl} = \frac{1}{2 \pi} \int_{-\pi}^{\pi} \frac{(e^{i x} - e^{2 i x}) (e^{i k x} - e^{i l x})}{\psi(x)} \, dx . $$
In general, the above expression will only have power-type decay as $k,l \to \infty$.
However, for this particular choice of $L$, things simplify a lot. The pseudo-inverse $L^\dagger = (u_{ij})_{i,j \in \mathbb{Z}}$ can be found explicitly, and its entries are $u_{ij} = C_1 - C_2 |i - j|$ when $i \ne j$ and $u_{ii} = C_3$ for appropriate constants $C_1$, $C_2$, $C_3$. Consequently, $a_{kl} = 0$ when $k, l > 2$.
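If anyone wants to probe this numerically: the integrand above extends continuously to $x = 0$ (the numerator vanishes to second order there, matching the $1/x^2$ singularity of $1/\psi$), so standard quadrature applies. Here is a minimal Octave sketch; the names psih and akl are mine, and I integrate over $[\varepsilon, \pi]$ and use the evenness of $\psi$ to avoid evaluating exactly at $x = 0$:
E = exp(1);
psih = @(x) (E + 1) / (E - 1) - (E^2 - 1) ./ (E^2 - 2 * E * cos(x) + 1); # the symbol psi
f = @(x, k, l) real((exp(1i * x) - exp(2 * 1i * x)) .* (exp(1i * k * x) - exp(1i * l * x)) ./ psih(x));
akl = @(k, l) quad(@(x) f(x, k, l), 1e-8, pi) / pi; # = (2 pi)^{-1} integral over [-pi, pi]
akl(4, 7) # should vanish up to quadrature error (k, l >= 3)
akl(1, 7) # generically nonzero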
I do not have a clear intuition about what happens in the one-sided case (that is, if we consider an infinite matrix $L$ with entries indexed by $i, j \in \{1, 2, \ldots\}$), let alone the finite case (with $i, j \in \{1, 2, \ldots, n\}$). My wild guess would be that the symmetry breaks, and there is no hope for any closed-form formula.
However, a quick numerical experiment strongly suggests that we still have $a_{kl} = 0$ for $k, l \ge 3$! More precisely, the entries $u_{ij}$ of $L^\dagger$ apparently satisfy $$ u_{ij} = v_{\max\{i,j\}} + v_{\max\{n+1-i,\,n+1-j\}} + \tfrac{1}{4} |i - j| \qquad (i \ne j) $$ for an appropriate vector $(v_i)$. I find this extremely surprising!
Here is the code in Octave, in case anyone is interested. First, we construct $L$ and its pseudo-inverse (denoted U here):
n = 10; # size of the matrix
A = toeplitz(exp(-(0:n-1)));
L = diag(A * ones(n,1)) - A; # matrix L
U = pinv(L); # pseudo-inverse L^\dagger
Next, we verify that the mixed second-order difference of $L^\dagger$ is a tri-diagonal matrix:
D = U(1:n-1, 1:n-1) - U(1:n-1, 2:n) ...
- U(2:n, 1:n-1) + U(2:n, 2:n); # second-order difference of U
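To see at a glance that D is indeed tridiagonal, here is a quick check I added (triu/tril just cut out the band):
band = triu(tril(D, 1), -1); # keep only the three central diagonals of D
max(max(abs(D - band))) # should be zero up to rounding errors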
This already shows that $L^\dagger$ has the desired structure, but we can also verify the formula for $u_{ij}$ directly. The first two lines below extract the vector $(v_i)$; the other two define the matrix Z with entries $$ u_{ij} - v_{\max\{i,j\}} - v_{\max\{n+1-i,\,n+1-j\}} - \tfrac{1}{4} |i - j| \qquad (i \ne j) , $$ which should be zero except on the diagonal:
X = U - 0.25 * abs(repmat(1:n, n, 1) - repmat(1:n, n, 1)'); # u_ij - |i - j| / 4
V = X(:, 1) - 0.5 * X(n, 1); # first column gives v_i + v_n; subtract v_n = X(n,1) / 2
I = repmat(1:n, n, 1); # I(i,j) = j, so max(I, I') has entries max{i, j}
Z = X - V(max(I, I')) - V(max(n + 1 - I, n + 1 - I')); # should be diagonal
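For completeness, the claim that Z is diagonal can be tested with one more line:
max(max(abs(Z - diag(diag(Z))))) # should be zero up to rounding errors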
Edit: This turns out to be quite simple. Observe that $q^{|1-i|} / q^{|2-i|} = q$ for every $i \in \{3, 4, \ldots, n\}$. Thus, if $x_1 = 1$, $x_2 = -q$ and $x_i = 0$ for $i \in \{3, 4, \ldots, n\}$, then $(Lx)_i = 0$ for $i \ge 3$; since the entries of $Lx$ always sum to zero, it follows that $L x = c e_1 - c e_2$, where $c = (Lx)_1 = q (1 + q) + \sum_{i=3}^n q^{i-1}$. Consequently, $$L^\dagger (e_1 - e_2) = c^{-1} x + \operatorname{const}.$$
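This is easy to confirm with the matrices constructed above (n and L as in the code; the check itself is mine):
q = exp(-1);
x = [1; -q; zeros(n - 2, 1)]; # the vector from the argument above
y = L * x; # should equal c * (e_1 - e_2)
c = y(1);
norm(y - c * [1; -1; zeros(n - 2, 1)]) # should be zero up to rounding errors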
I leave the previous version of this answer below, as it provides a way to evaluate $L^\dagger$ explicitly.
I cannot say I understand what is really going on here, but at least I have a proof that $a_{ij} = 0$ when $i, j \ge 3$. (I leave my previous comment/answer, as it contains some related stuff that is not included here.)
Notation: Every sum is a sum over $\{1, 2, \ldots, n\}$. We write $q = e^{-1}$ (and actually any $q \in (0, 1)$ will work). Given a vector $x = (x_i)$ we write $$ \Delta x_i = x_{i+1} + x_{i-1} - 2 x_i $$ if $1 < i < n$.
Given a vector $(x_i)$, we have $$ Lx_i = \sum_j q^{|i-j|} (x_i - x_j) = b_i x_i - \sum_j q^{|i-j|} x_j , $$ where $$ b_i = \sum_j q^{|i-j|} = \frac{1 + q - q^i - q^{n+1-i}}{1 - q} . $$ Therefore, when $1 < i < n$, we have $$ \begin{aligned} \Delta Lx_i & = \Delta (b x)_i - \sum_j (q^{|i-j+1|}+q^{|i-j-1|}-2q^{|i-j|}) x_j \\ & = \Delta (b x)_i - (q + q^{-1} - 2) \sum_j q^{|i-j|} x_j + (q^{-1} - q) x_i \\ & = \Delta (b x)_i + (q + q^{-1} - 2) L x_i - (q + q^{-1} - 2) b_i x_i + (q^{-1} - q) x_i \\ & = (q + q^{-1} - 2) L x_i + b_{i+1} x_{i+1} + b_{i-1} x_{i-1} - ((q + q^{-1}) b_i - (q^{-1} - q)) x_i . \end{aligned} $$ A short calculation shows that $$ b_{i+1} + b_{i-1} = (q + q^{-1}) b_i - (q^{-1} - q) $$ (which looks somewhat miraculous, but there must be some insightful explanation for it). Thus, $$ \Delta Lx_i = (q + q^{-1} - 2) L x_i + b_{i+1} (x_{i+1} - x_i) + b_{i-1} (x_{i-1} - x_i) . $$ Suppose that $x = L^\dagger y$ for some vector $y = (y_i)$ such that $\sum_i y_i = 0$. Then $L x_i = L L^\dagger y_i = y_i$. Write $c = q + q^{-1} - 2$. We then have $$ \Delta y_i - c y_i = b_{i+1} (x_{i+1} - x_i) + b_{i-1} (x_{i-1} - x_i) . $$ In particular, the following claim follows.
Proposition 1: If $1 < i < n$, $y_{i-1} = y_i = y_{i+1} = 0$ and $x_i = x_{i+1}$, then $x_{i-1} = x_i$.
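(Both the somewhat miraculous identity above and the analogous boundary identity $q^{-1} b_n - (q^{-1} - q) = b_{n-1}$ used below are easy to test numerically; the following snippet, which reuses n from the code above, is mine:)
q = exp(-1); # any q in (0, 1) works here
b = (1 + q - q.^(1:n) - q.^(n:-1:1)) / (1 - q); # the vector b_i
norm(b - sum(toeplitz(q.^(0:n-1)))) # b_i are precisely the row sums of A
i = 2:n-1;
norm(b(i+1) + b(i-1) - ((q + 1/q) * b(i) - (1/q - q))) # interior identity, ~ 0
abs(b(n) / q - (1/q - q) - b(n-1)) # boundary identity, ~ 0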
Proposition 1 will serve as an induction step. To initiate the induction, we need to study the case $i = n$, which is slightly different. In that case: $$ \begin{aligned} Lx_{n-1} - Lx_n & = b_{n-1} x_{n-1} - b_n x_n - \sum_j (q^{|n-j-1|}-q^{|n-j|}) x_j \\ & = b_{n-1} x_{n-1} - b_n x_n - (q^{-1} - 1) \sum_j q^{|n-j|} x_j + (q^{-1} - q) x_n \\ & = b_{n-1} x_{n-1} - b_n x_n + (q^{-1} - 1) L x_n - (q^{-1} - 1) b_n x_n + (q^{-1} - q) x_n \\ & = (q^{-1} - 1) L x_n + b_{n-1} x_{n-1} - (q^{-1} b_n - (q^{-1} - q)) x_n . \end{aligned} $$ This time we have $$ q^{-1} b_n - (q^{-1} - q) = b_{n-1} , $$ and hence $$ Lx_{n-1} - Lx_n = (q^{-1} - 1) L x_n + b_{n-1} (x_{n-1} - x_n) . $$ Again we consider $x = L^\dagger y$ for some vector $y = (y_i)$ such that $\sum_i y_i = 0$, and we write $d = q^{-1} - 1$. We then have $$ (y_{n-1} - y_n) - d y_n = b_{n-1} (x_{n-1} - x_n) . $$ As a consequence, we have the following result.
Proposition 2: If $y_{n-1} = y_n = 0$, then $x_{n-1} = x_n$.
For $y = e_1 - e_2 = (1, -1, 0, \ldots, 0)$, we immediately obtain the desired result.
Corollary: If $y = e_1 - e_2$ and $x = L^\dagger y$, then $x_3 = x_4 = x_5 = \ldots = x_n$. Consequently, $a_{ij} = x_i - x_j = 0$ whenever $i, j \ge 3$.
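The Corollary, too, can be confirmed directly with the matrices from the code above (the two lines are mine):
x = U * [1; -1; zeros(n - 2, 1)]; # x = L^dagger (e_1 - e_2)
max(abs(diff(x(3:n)))) # should vanish: x_3 = x_4 = ... = x_n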
Another consequence of the above result is that if $L^\dagger = (u_{ij})$, then $$u_{i+1,j+1}-u_{i,j+1}-u_{i+1,j}+u_{i,j} = 0$$ whenever $i + 1 < j$ or $j + 1 < i$. Moreover, it should be relatively easy to use Propositions 1 and 2 to evaluate $u_{ij}$ explicitly, and in particular to prove that $$ u_{ij} = v_{\max\{i,j\}} + v_{\max\{n+1-i,n+1-j\}} + \tfrac{1}{4} |i - j| $$ when $i \ne j$, where $(v_i)$ is an explicitly given vector (in terms of products/ratios of $b_i$, I guess).
Final remark: there is a corresponding result for a continuous variable: the Green function of the operator $$L f(x) = \int_0^1 e^{-q |x - y|} (f(x) - f(y)) \, dy$$ has vanishing mixed second-order derivative away from the diagonal. The proof follows exactly the same lines, and is in fact somewhat less technical.
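Out of curiosity, this too can be probed numerically. Discretizing the operator on a midpoint grid reduces it, up to a factor $h$, to the finite matrix studied above with $q = e^{-h}$, so the following is merely a consistency check rather than independent evidence; the snippet (with decay rate $1$ in place of $q$) is mine:
m = 100; h = 1 / m;
t = ((1:m) - 0.5) * h; # midpoint grid on [0, 1]
K = exp(-abs(t' - t)); # kernel e^{-|x - y|}
Lc = h * (diag(K * ones(m, 1)) - K); # discretized operator
Uc = pinv(Lc); # discretized Green function
Dc = Uc(1:m-1, 1:m-1) - Uc(1:m-1, 2:m) - Uc(2:m, 1:m-1) + Uc(2:m, 2:m);
max(max(abs(Dc - triu(tril(Dc, 1), -1)))) # off-band part ~ 0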