Do convolution and multiplication satisfy any nontrivial algebraic identities?
I think the answer to the original question (i.e. are there any universal algebraic identities relating convolution and multiplication over arbitrary groups, beyond the "obvious" ones?) is negative, though establishing it rigorously is going to be tremendously tedious.
There are a couple of steps involved. To avoid technicalities, let's restrict attention to discrete finite fields $G$ (so that we can use linear algebra), and assume the characteristic of $G$ is very large.
Firstly, given any purported convolution/multiplication identity relating a bunch of functions, one can use homogeneity and decompose that identity into homogeneous identities, in which each function appears the same number of times in each term. (For instance, if one has an identity involving both cubic expressions of a function f and quadratic expressions of f, one can separate into a cubic identity and a quadratic identity.) So without loss of generality one can restrict attention to homogeneous identities.
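To make the scaling step concrete: if, say, $\Phi(f) = \Phi_2(f) + \Phi_3(f)$ with $\Phi_k$ homogeneous of degree $k$, then replacing $f$ by $\lambda f$ gives
$$\Phi(\lambda f) = \lambda^2\,\Phi_2(f) + \lambda^3\,\Phi_3(f) = 0 \quad\text{for all scalars }\lambda,$$
and comparing coefficients of the polynomial in $\lambda$ forces $\Phi_2(f) = 0$ and $\Phi_3(f) = 0$ separately.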
Next, by depolarisation, one should be able to reduce further to the case of multilinear identities: identities involving a bunch of functions $f_1, f_2, \ldots, f_n$, with each term being linear in each of the functions. (I haven't checked this carefully, but it should be true, especially since we can permit the functions to be complex-valued.)
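In the lowest-degree case this is the familiar polarisation of a quadratic form: if $T(f) = B_0(f,f)$ for some bilinear $B_0$ built from products and convolutions, then
$$B(g,h) = \tfrac{1}{2}\big(T(g+h) - T(g) - T(h)\big)$$
is bilinear and satisfies $B(f,f) = T(f)$; the general case is the higher-degree analogue of this (the explicit formula appears in the community wiki answer below).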
It is convenient just to consider evaluation of these identities at the single point 0 (i.e. scalar identities rather than functional identities). One can actually view functional identities as scalar identities after convolving (or taking inner products of) the functional identity with one additional test function.
Now (after using the distributive law as many times as necessary), each term in the multilinear identity consists of some sequence of applications of the pointwise product and convolution operations (no addition or subtraction), evaluated at zero, and then multiplied by a scalar constant. When one expands all of that, what one gets is the sum (in the discrete case) of the tensor product $f_1 \otimes \cdots \otimes f_n$ of all the functions over some subspace of $G^n$. The exact subspace is determined by the precise manner in which the pointwise product and convolution operators are applied.
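For a concrete instance of this expansion:
$$\big((f_1\cdot f_2) * f_3\big)(0) = \sum_{x\in G} f_1(x)\,f_2(x)\,f_3(-x),$$
which is the sum of $f_1\otimes f_2\otimes f_3$ over the subspace $\{(x,x,-x) : x\in G\}\leq G^3$.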
The only way a universal identity can hold, then, is if the weighted sum of the indicator functions of these subspaces (counting multiplicity) vanishes. (Note that finite linear combinations of tensor products span the space of all functions on $G^n$ when $G$ is finite.) But when the characteristic of $G$ is large enough, the only way that can happen is if each subspace appears in the identity with a net weight of zero. (Indeed, look at a subspace of maximal dimension in the identity; for $G$ of large enough characteristic, it contains points that are not covered by any other subspace in the identity, and so the only way the identity can hold is if the net weight of that subspace is zero. Now remove all terms involving this subspace and iterate.)
So the final thing to do is to show that a given subspace can arise in two different ways from multiplication and convolution only in the "obvious" manner, i.e. by exploiting associativity of pointwise multiplication and of convolution. This looks doable by an induction argument but I haven't tried to push it through.
The very short answer is yes, provided that you also allow yourself a little linear algebra. But then again you rejected David's answer, so you may not be happy with mine. I'll try to convince you that my answer is both trivial and also deep, and that it doesn't depend on more structure than what you've allowed.
The short answer
For the purposes of my answer, I will pretend that the group $G$ is finite (I won't pretend it's abelian, because I don't need it to be). There are versions of what I'm going to say for, at least, compact groups and algebraic groups, but subtleties emerge which I will ignore. Let $R$ be the ring of functions on $G$. Since $G$ is finite, $R$ is finite-dimensional. (If $G$ is algebraic, $R$ is like a polynomial ring, and if $G$ is compact, $R$ has a good topology, and any constructions must be completed. This is what I mean by "subtleties".)
The ring of functions on $G$ has a canonical nondegenerate pairing: $\langle f,g\rangle = \int fg$. Being nondegenerate, the pairing has an "inverse", which is an element of the tensor product $R\otimes R$. Explicitly, pick any orthonormal basis of the pairing, e.g. the basis $\{\delta_x: x\in G\}$, where $\delta_x(y)=1$ if $y=x$ and $0$ otherwise. Then the inverse is the sum over the basis of the tensor square of each item. So for my basis, it is $\sum_{x\in G} \delta_x\otimes\delta_x$. But it should be emphasized that the inverse to the pairing does not actually depend on the basis. Since I don't have better notation, though, I'll work in the basis for my answer. The better description is in terms of the physicists' abstract index notation, or Penrose's birdtracks.
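To see the basis-independence concretely: if $\{e_i\}$ is any other orthonormal basis, say $e_i = \sum_x O_{ix}\,\delta_x$ with $O O^{\mathsf T} = \mathrm{id}$, then
$$\sum_i e_i\otimes e_i = \sum_{x,y}\Big(\sum_i O_{ix}\,O_{iy}\Big)\,\delta_x\otimes\delta_y = \sum_x \delta_x\otimes\delta_x,$$
since $\sum_i O_{ix}\,O_{iy} = (O^{\mathsf T}O)_{xy}$ and $O^{\mathsf T}O = \mathrm{id}$ as well.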
Then convolution and multiplication are related by the following:
$$\langle f_1\cdot f_2, f_3*f_4\rangle = \sum_{x_1} \sum_{x_2} \sum_{x_3} \sum_{x_4} \langle f_1, \delta_{x_1}*\delta_{x_2}\rangle\, \langle f_2, \delta_{x_3}*\delta_{x_4}\rangle\, \langle \delta_{x_1}\cdot\delta_{x_3}, f_3\rangle\, \langle \delta_{x_2}\cdot\delta_{x_4}, f_4\rangle $$
(Unwinding the definitions with the convention $(f*g)(y) = \sum_z f(z)\,g(z^{-1}y)$, the right-hand side collapses to $\sum_{x_1,x_2} f_1(x_1x_2)\,f_2(x_1x_2)\,f_3(x_1)\,f_4(x_2)$, which is exactly the left-hand side. Note that $f_3$ must pair against the first tensor factors and $f_4$ against the second; pairing them the other way computes $\langle f_1\cdot f_2, f_4*f_3\rangle$ instead, which differs in the nonabelian case.)
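If you don't want to trust my indices, here is a minimal numerical sanity check on the nonabelian group $S_3$, written in plain Python; all the helper names (`op`, `conv`, `mult`, `pair`, `delta`) are ad hoc for this sketch, not from any library:

```python
# Check the displayed identity on the nonabelian group S3.
from itertools import permutations
import random

G = list(permutations(range(3)))           # elements of S3 as tuples

def op(g, h):                              # group law: composition g o h
    return tuple(g[h[i]] for i in range(3))

def inv(g):                                # inverse permutation
    h = [0, 0, 0]
    for i, gi in enumerate(g):
        h[gi] = i
    return tuple(h)

def conv(f, g):                            # (f*g)(y) = sum_z f(z) g(z^{-1} y)
    return {y: sum(f[z] * g[op(inv(z), y)] for z in G) for y in G}

def mult(f, g):                            # pointwise product
    return {y: f[y] * g[y] for y in G}

def pair(f, g):                            # <f, g> = sum_x f(x) g(x)
    return sum(f[x] * g[x] for x in G)

def delta(x):                              # the basis function delta_x
    return {y: float(y == x) for y in G}

random.seed(0)
f1, f2, f3, f4 = ({y: random.random() for y in G} for _ in range(4))

lhs = pair(mult(f1, f2), conv(f3, f4))
rhs = sum(pair(f1, conv(delta(x1), delta(x2)))
          * pair(f2, conv(delta(x3), delta(x4)))
          * pair(mult(delta(x1), delta(x3)), f3)
          * pair(mult(delta(x2), delta(x4)), f4)
          for x1 in G for x2 in G for x3 in G for x4 in G)
print(abs(lhs - rhs) < 1e-9)               # True
```

Running this on $S_3$ rather than an abelian group genuinely exercises the noncommutativity.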
The long answer
Let $X$ be a set (or more generally a "space"). Write $C(X)$ for the ring of functions on $X$, and $\mathbf k[X]$ for the collection of linear combinations of points in $X$. (I'll write $\mathbf k$ for the ground field; everything I say will work over any field, but you can take it to be the reals or complexes if you want. The meaning of the word "space" probably depends on your ground field.)
What types of operations are these? Well, $C()$ is a contravariant functor, and $\mathbf k[]$ is a covariant one, both from the category of SETS (or SPACES) to the category VECT. Let's start with $\mathbf k[]$ because it's covariant. It's actually better than a functor: it's a monoidal functor, meaning that it takes cartesian products to tensor products: $\mathbf k[X\times Y] = \mathbf k[X]\otimes\mathbf k[Y]$. Actually, so is $C()$, although if you work with non-finite sets you have to complete the tensor product. In fact, these two operations are intimately related: for any space $X$, $C(X)$ is naturally (i.e. functorially in $X$) the dual space to $\mathbf k[X]$, so that $C() = \mathbf k[]^*$. Thus, we can basically completely understand $C()$ by understanding $\mathbf k[]$, or vice versa.
Since SETS (or SPACES) is cartesian, every object is a coalgebra in a unique way. I'll spell this out. An algebra in a monoidal category is an object $X$ with a "multiplication" map $X \times X \to X$ satisfying various conditions. The word is chosen so that in VECT, my notion of algebra and yours match. In SET, an algebra is a monoid. Anyway, a coalgebra is whatever you get by turning all the arrows around. For intuition, think about VECT, where a coalgebra is whatever the natural structure on the dual vector space to an algebra is. (Write the multiplication map as a big matrix from the tensor square of your algebra to your algebra, and think about its transpose.)
The canonical coalgebra structure on a set $X$, by the way, is given by the diagonal map $\Delta : X \to X \times X$, where $\Delta(x) = (x,x)$.
Since $\mathbf k[]$ is a monoidal functor, it takes algebras to algebras and coalgebras to coalgebras. Thus for any set $X$, the vector space $\mathbf k[X]$ inherits a coalgebra structure. Thus, dually, $C(X)$ inherits an algebra structure (you can say this directly: a monoidal contravariant functor turns coalgebras into algebras). In fact, this is precisely the canonical algebra structure you're calling "." on the ring of functions.
Well, let's say now that $X$ is an algebra in SETS, i.e. a monoid (e.g. a group). Then $\mathbf k[X]$ inherits an algebra structure, and equally $C(X)$ has a coalgebra structure. But actually it's a bit better than this. Since any set is a coalgebra in a unique way, the algebra and coalgebra structures on $X$ get along. I'll write $*$ for the multiplication in $X$. Then when I say "get along" what I mean is:
$$\Delta(x) * \Delta(y) = \Delta(x*y)$$
where on the left-hand side I mean the component-wise multiplication in $X \times X$.
Well, $\mathbf k[]$ is a functor, so it preserves this equation, except that the coalgebra structure on $\mathbf k[X]$ is not trivial the way $\Delta$ is in SETS. Anything that is both a coalgebra and an algebra and that satisfies an equation like the one above is a bialgebra. You can check that the equation is well-behaved under dualizing, so that $C(X)$ is also a bialgebra if $X$ is an algebra.
Ok, so how does all this connect with your question? What's going on is that for sufficiently good spaces, e.g. finite sets, there is a canonical identification between the vector spaces $\mathbf k[X]$ and $C(X)$. This identification breaks various functoriality properties, though. But anyway, if $G$ is a finite group, then we can consider $\mathbf k[G]$ and $C(G)$ to be the same vector space $R$, and pretend that it just has two separate ring structures on it.
But doing this obscures the bialgebra property. If I'm only allowed to reference the two multiplications, and not their dual maps, then to write the bialgebra property requires explicitly referring to the canonical pairing (what I called $\int = \langle,\rangle$ before) and its inverse. Then the bialgebra property becomes the long equation I wrote in the previous part.
Final remarks
I should also mention that a group has not just a multiplication but also an identity and inverses. These give another equation. In the basis from the first section, the unit in $R$ for $\cdot$ is the constant function $1 = \sum_{x\in G} \delta_x$, and the unit for $*$ is $\delta_e$, where $e$ is the identity in $G$. These satisfy the equation:
$$\delta_e \otimes 1 = \sum_{x_1} \sum_{x_2} (\delta_{x_1} * \delta_{x_2}) \otimes (\delta_{x_1} \cdot \delta_{x_2^{-1}})$$
where ${x_2^{-1}}$ is the inverse element to $x_2$. You should be able to recognize the inverse to the canonical pairing in there. Again, the equation is simpler in better notation, e.g. indices or birdtracks, and does not depend on a choice of basis. A bialgebra satisfying an equation like the one above is a Hopf algebra.
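Unwinding the right-hand side makes the equation transparent: the factor $\delta_{x_1}\cdot\delta_{x_2^{-1}}$ vanishes unless $x_2 = x_1^{-1}$, in which case $\delta_{x_1}*\delta_{x_2} = \delta_{x_1x_1^{-1}} = \delta_e$, so the whole sum collapses to
$$\sum_{x_1\in G} \delta_e\otimes\delta_{x_1} = \delta_e\otimes 1.$$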
Another thing I should mention is that there are similar stories at least for compact groups, but you have to think harder about what "the inverse to the canonical pairing" is. (On a compact group, there is a canonical pairing of functions, given by Haar measure.) In fact, I think a story like this can be told for other spaces, where you change what you mean by $\mathbf k[]$ and $C()$, in the first case expanding the notion of linear combination and in the second case restricting the type of function. Then you should put the word "quasi" in front of everything, because the coalgebra structure, the inverse to the pairing, the units, etc. all require completions of your vector spaces.
And there may be special equations for abelian groups. In abelian land, the Fourier/Pontryagin transform does the following: it recognizes the (now commutative) ring $\mathbf k[G]$ as a ring of functions on some other space: $\mathbf k[G] = C(G^*)$.
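For $G = \mathbb{Z}/n$, this identification is implemented by the discrete Fourier transform, which carries the convolution product on $\mathbf k[G]$ to the pointwise product on $C(G^*)$. Here is a quick numerical illustration of that fact (a sketch using numpy's FFT; the variable names are mine):

```python
# On G = Z/8, the DFT turns cyclic convolution into pointwise
# multiplication: in coordinates, this is k[G] = C(G^*).
import numpy as np

n = 8
rng = np.random.default_rng(0)
f, g = rng.random(n), rng.random(n)

conv = np.array([sum(f[k] * g[(m - k) % n] for k in range(n))
                 for m in range(n)])       # cyclic convolution on Z/n
lhs = np.fft.fft(conv)                     # Fourier transform of f * g
rhs = np.fft.fft(f) * np.fft.fft(g)        # pointwise product of transforms
print(np.allclose(lhs, rhs))               # True
```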
But the overall moral is that convolution and multiplication are really going on in different vector spaces; it's just that there is a canonical pairing, so that you can't tell the spaces apart. And if you insist on conflating the two spaces, then you should allow the canonical pairing and its inverse as basic algebraic operations.
All right, I think I can finish the proof now. I will prove that there are no nontrivial identities on $\mathbb{R}^d$ for any $d$. This proof makes heavy use of the first part of Terry Tao's post (reducing to multilinear identities), but I'll use a different argument to finish it, since I guess I'm just more familiar and comfortable with real vector spaces than with finite groups. It should be possible to complete Terry's line of argument to get a proof for sufficiently large finite groups, which my proof won't cover. Moreover, as Theo pointed out in a comment to his answer, deforming the domain nonlinearly screws up convolution while leaving the other operations intact, and it should be easy to use that to show no identities can hold. In any case, this is a community wiki post, so anybody can make additions or simplifications.
First, by Terry Tao's observations, it suffices to consider multilinear identities of the form $$c_1F_1(f_1,\ldots,f_n) + \cdots + c_kF_k(f_1,\ldots,f_n) = 0$$ where each $F_i$ is a "multilinear monomial," i.e., a composition of multiplication and convolution in which each of $f_1,\ldots,f_n$ appears exactly once. (The original question didn't allow scalar multiplication, but it doesn't introduce any difficulty.) To summarize the argument: by applying the distributive laws as much as necessary and using an easy scaling argument, it suffices to consider identities that are homogeneous in each argument, i.e., sums of monomials in which each argument appears some fixed number of times. To reduce this further to the multilinear case, suppose we have some putative identity of the form $F(f_1,\ldots,f_m) = 0$ that is homogeneous of degree $n_i$ in $f_i$ for all $i$. For the moment, consider $f_2,\ldots,f_m$ to be fixed, so we have a homogeneous degree-$n_1$ functional $T(f_1)$ of $f_1$. The polarization identity states that if we define a new functional $S$ by $$S(g_1,\ldots,g_{n_1}) = \frac{1}{n_1!}\sum_{E\subseteq \{1,\ldots,n_1\}} (-1)^{n_1-|E|}\, T\Big(\sum_{j\in E} g_j\Big),$$ then $S$ is a (symmetric) multilinear function of $g_1,\ldots,g_{n_1}$ and $S(f_1,\ldots,f_1) = T(f_1)$. Thus, the identity $F(f_1,\ldots,f_m)=0$ is equivalent to the identity $G(g_1,\ldots,g_{n_1},f_2,\ldots,f_m) = 0$, where $G$ is obtained from $F$ by the polarization construction applied to the first argument. Repeating the construction for $f_2,\ldots,f_m$, we obtain an equivalent multilinear identity $H(g_1,\ldots,g_n)=0$ (where $n=n_1+\cdots+n_m$).
Let's fix a nomenclature for monomials: let $C(f_1,\ldots,f_n)=f_1*\cdots*f_n$ and $M(f_1,\ldots,f_n)=f_1\cdot \cdots \cdot f_n$. A monomial is a C-expression if convolution is the top-level operation or an M-expression if multiplication is the top-level operation. $f_1,\ldots,f_n$ are atomic expressions and are considered both M-expressions and C-expressions. We consider two monomials to be identical if they can be obtained from one another by applying the associative and commutative laws for multiplication and convolution. With this equivalence relation, each equivalence class of monomials can be written uniquely in the form $C(A_1,\ldots,A_n)$ or $M(B_1,\ldots,B_n)$ (up to permuting the $A$s or the $B$s), where the $A$s are M-expressions and the $B$s are C-expressions. At this point, we have made maximal use of the algebraic identities for the convolution algebra and the multiplication algebra, so now we have to prove that there are no identities whatsoever of the form $$c_1F_1(f_1,\ldots,f_n) + \cdots + c_kF_k(f_1,\ldots,f_n) = 0$$ where the $c_i$ are nonzero scalars and the $F_i$ are distinct multilinear monomials.
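For example, the monomial $(f_1*f_2)\cdot f_3\cdot(f_4*f_5)$ has canonical form $M\big(C(f_1,f_2),\, f_3,\, C(f_4,f_5)\big)$: an M-expression whose arguments are C-expressions (the atomic $f_3$ counts as one).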
For all $a>0$, let $\phi_a:\mathbb{R}^d\to \mathbb{R}$ be the gaussian function $\phi_a(x)=e^{-a\|x\|^2}$. We'll prove that if the $F_i$ are distinct and the $c_i$ are nonzero, then $$c_1F_1(\phi_{a_1},\ldots,\phi_{a_n}) + \cdots + c_kF_k(\phi_{a_1},\ldots,\phi_{a_n})= 0$$ cannot hold for all $a_1,\ldots,a_n>0$. It's easy to see that $\phi_a\cdot\phi_b = \phi_{a+b}$ and $\phi_a*\phi_b = (\pi/(a+b))^{d/2}\,\phi_{(a^{-1}+b^{-1})^{-1}}$. Therefore, if we define $S(a_1,\ldots,a_n)=a_1+\cdots +a_n$ and $P(a_1,\ldots,a_n)=(a_1^{-1}+\cdots+a_n^{-1})^{-1}$, and $F$ is a multilinear monomial, then $F(\phi_{a_1},\ldots,\phi_{a_n}) = R_F(a_1,\ldots,a_n)^{d/2}\exp(-Q_F(a_1,\ldots,a_n)\|x\|^2)$, where $R_F$ is a rational function and $Q_F$ is a rational function composed of $S$ and $P$. In fact, if $F$ is written as a composition of $C$ and $M$, then $Q_F(a_1,\ldots,a_n)$ is obtained from $F(\phi_{a_1},\ldots,\phi_{a_n})$ simply by replacing all the $C$s by $P$s, the $M$s by $S$s, and $\phi_{a_i}$ by $a_i$ for all $i$. Therefore, it makes sense to define P- and S-expressions analogously to C- and M-expressions. A PS-expression in $a_1,\ldots,a_n$ is a composition of $P$ and $S$ in which each of $a_1,\ldots,a_n$ appears exactly once. Equivalence of PS-expressions is defined exactly as for C/M monomials; in particular, equivalence of PS-expressions is a priori a stronger condition than equality as rational functions.
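Both gaussian identities are easy to confirm numerically; here is a minimal check in $d=1$ (plain numpy; the grid and tolerance are ad hoc):

```python
# Check phi_a . phi_b = phi_{a+b} and
# phi_a * phi_b = (pi/(a+b))^{1/2} phi_{ab/(a+b)} in dimension d = 1.
import numpy as np

a, b = 0.7, 1.9
phi = lambda t, x: np.exp(-t * x**2)

y = np.linspace(-10.0, 10.0, 4001)         # quadrature grid
dy = y[1] - y[0]

assert np.allclose(phi(a, y) * phi(b, y), phi(a + b, y))

for x0 in (0.0, 0.5, 1.3):
    quad = np.sum(phi(a, y) * phi(b, x0 - y)) * dy   # (phi_a * phi_b)(x0)
    closed = np.sqrt(np.pi / (a + b)) * phi(a * b / (a + b), x0)
    assert abs(quad - closed) < 1e-6
print("ok")
```

Note that $ab/(a+b) = (a^{-1}+b^{-1})^{-1}$, which is exactly the operation $P$ defined above.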
The main lemma we need is that it actually isn't a stronger condition: if $F$ and $G$ are distinct multilinear monomials in $n$ arguments, then $Q_F$ and $Q_G$ are distinct rational functions. In other words, distinct PS-expressions define distinct rational functions. (Note that this is false if the adjective "multilinear" is dropped.) To prove this, first note that although $Q_F$ and $Q_G$ are initially defined as functions $(0,\infty)^n\to (0,\infty)$, they extend continuously to functions $[0,\infty)^n\to [0,\infty)$. If $D=\{i_1,\ldots,i_k\}$ is a subset of $\{1,\ldots,n\}$ and $Q$ is a PS-expression in $n$ variables, then $D$ is called a prime implicant of $Q$ if $Q(a_1,\ldots,a_n) = 0$ whenever $a_{i_1},\ldots,a_{i_k}$ are all set to zero (regardless of the values of the remaining variables), but no proper subset of $D$ has this property. Let $I(Q)$ be the set of prime implicants of $Q$. It's easy to show that $I(P(Q_1,\ldots,Q_m))$ is the disjoint union of $I(Q_1),\ldots,I(Q_m)$, and $I(S(Q_1,\ldots,Q_m))$ is the set of all unions $D_1 \cup \cdots \cup D_m$, where $D_i\in I(Q_i)$. (It's important here that none of the variables $a_1,\ldots,a_n$ appears in more than one $Q_i$.) Define the implicant graph of $Q$ as the undirected graph with vertices $1,\ldots,n$ and an edge between $i$ and $j$ if some prime implicant of $Q$ contains both $i$ and $j$. It's easy to see that the implicant graph of an S-expression is connected, and if $Q_1,\ldots,Q_m$ are S-expressions, then the connected components of the implicant graph of $P(Q_1,\ldots,Q_m)$ are the implicant graphs of $Q_1,\ldots,Q_m$. This immediately implies that a P-expression cannot define the same function as an S-expression, so it suffices to show that distinct S-expressions induce distinct rational functions, and likewise for distinct P-expressions. Actually, it suffices to treat P-expressions, since $P$ and $S$ are exchanged by the involution $\sigma(a)=a^{-1}$: $\sigma(P(a,b))=S(\sigma(a),\sigma(b))$. That distinct P-expressions induce distinct functions now follows by induction on the number of variables, since the implicant sets of the S-expressions $Q_i$ are uniquely determined by the implicant set of $P(Q_1,\ldots,Q_m)$ by considering connectivity as above.
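As a small worked example: $Q = S(P(a_1,a_2),\,a_3)$ has $I(Q) = \{\{1,3\},\{2,3\}\}$, so its implicant graph (edges $\{1,3\}$ and $\{2,3\}$) is connected, as it must be for an S-expression; by contrast, $P(S(a_1,a_2),\,a_3)$ has implicant set $\{\{1,2\},\{3\}\}$, whose graph has the two components $\{1,2\}$ and $\{3\}$.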
The rest of the proof is easy: if the $F_i$ are distinct multilinear monomials, then the $Q_{F_i}$ are distinct rational functions. This implies that for some $a_1,\ldots,a_n$, the $Q_{F_i}(a_1,\ldots,a_n)$ are all distinct positive numbers, since distinct rational functions can't agree on a set of positive Lebesgue measure. To get a contradiction, suppose the $c_i$ are all nonzero and the identity $\sum_i c_i F_i(f_1,\ldots,f_n)=0$ holds. Then $$\sum_i c_i F_i(\phi_{a_1},\ldots,\phi_{a_n}) = \sum_i c_i R_{F_i}(a_1,\ldots,a_n)^{d/2} \exp(-Q_{F_i}(a_1,\ldots,a_n)\|x\|^2) = 0$$ for all $x$. Without loss of generality, the $Q_{F_i}(a_1,\ldots,a_n)$ are increasing as a function of $i$. But then for $\|x\|$ large enough, the first term dominates all the others (it has the slowest-decaying exponential, and $R_{F_1} > 0$), so the sum can't be zero unless $c_1=0$: a contradiction. This completes the proof.