What is the purpose of free variables in first order logic?
Consider the informal argument 'Everyone loves pizza. Anyone who loves pizza loves ice-cream. So everyone loves ice-cream.' Why is that valid?
Roughly: Pick someone, whoever you like. Then s/he loves pizza. And so s/he loves ice-cream. But s/he was arbitrarily chosen. So everyone loves ice-cream.
This informal argument can be spelt out with plodding laboriousness, with explicit commentary, like this:
1. Everyone loves pizza. (That's given)
2. Anyone who loves pizza loves ice-cream. (That's given too)
3. Take some arbitrary person, call her Alice. Then Alice loves pizza. (From 1)
4. If Alice loves pizza, she loves ice-cream. (From 2)
5. Alice loves ice-cream. (From 3, 4, by modus ponens)
6. But Alice was an arbitrary representative person, so what applies to her applies to anyone: so everyone loves ice-cream.
Now consider the formal analogue of this proof (using $F$ for 'loves pizza', etc.):
1. $\forall xFx\quad\quad\quad\quad$ Premiss
2. $\forall x(Fx \to Gx)\quad$ Premiss
3. $Fa\quad\quad\quad\quad\quad$ From 1, Univ. Instantiation
4. $(Fa \to Ga)\quad\quad$ From 2, Univ. Instantiation
5. $Ga\quad\quad\quad\quad\quad$ From 3, 4 by MP
6. $\forall xGx\quad\quad\quad\quad$ From 5, Univ. Generalization
(UG on $a$ is legitimate as $a$ appears in no assumption on which (5) depends, so can be thought of as indicating an arbitrary representative member of the domain.)
So note that we need here some symbol playing the role of $a$: not a true constant with a fixed interpretation, not a bound variable, but (as common jargon has it) a parameter which stands in, in some sense, for an arbitrary element of the domain.
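To make this concrete, here is a minimal sketch of the same derivation in Lean 4 (the names `Person`, `F` and `G` are mine, chosen for illustration): the `intro` step introduces exactly such a parameter, and closing the goal it opened corresponds to Universal Generalization.

```lean
-- A minimal sketch in Lean 4; Person, F, G are illustrative names.
-- F x : "x loves pizza",  G x : "x loves ice-cream".
variable {Person : Type} (F G : Person → Prop)

example (h1 : ∀ x, F x) (h2 : ∀ x, F x → G x) : ∀ x, G x := by
  intro a                         -- a is the parameter: an arbitrary person
  have hFa  : F a       := h1 a   -- Universal Instantiation of premiss 1
  have hImp : F a → G a := h2 a   -- Universal Instantiation of premiss 2
  exact hImp hFa                  -- Modus Ponens; since the goal was opened
                                  -- by intro, this completes the UG step
```

Note that the letter chosen for the parameter is immaterial here: `intro x` would work just as well, which is precisely the typographical choice discussed next.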
Now, in some syntaxes, variables $x$, $y$ etc. only ever appear bound, as part of quantified sentences. And parameters are typographically quite distinct, $a$ and $b$, etc. This is the usage in Gentzen, for example.
But in another tradition (probably more common, though not for that reason to be preferred), we are typographically economical and recycle the same letters as both true variables and as parameters. In other words, we allow the same letters to appear both as "bound" variables and as "free" variables. Then the formal proof will look like this:
1. $\forall xFx\quad\quad\quad\quad$ Premiss
2. $\forall x(Fx \to Gx)\quad$ Premiss
3. $Fx\quad\quad\quad\quad\quad$ From 1, Univ. Instantiation
4. $(Fx \to Gx)\quad\quad$ From 2, Univ. Instantiation
5. $Gx\quad\quad\quad\quad\quad$ From 3, 4 by MP
6. $\forall xGx\quad\quad\quad\quad$ From 5, Univ. Generalization
This is a superficial difference, however: the role of the free (unbound) variable is that of a parameter. And as we've seen, even in informal arguments we use expressions like "s/he" or even "Alice" as parameters. Looked at in that light, there should be no mystery about why we need parameters in a formal logic too. And in one syntax, unbound variables play the role of parameters.
Categorical logic suggests a very natural way to think of free variables: they're just another kind of element.
There is a notion of a "generalized element" of an object $X$ in a category, which is simply any morphism whose codomain is $X$. Ordinary elements of sets can be viewed as generalized elements, by identifying an element $a \in X$ with the function
$$ f : \{ \varnothing \} \to X : \varnothing \mapsto a $$
The identity map on $X$, interpreted as a generalized element, turns out to be an excellent way to capture the notion of an "indeterminate variable" or a "generic element".
When we have a collection of free variables $x_1, \ldots, x_n$ of our domain of discourse, and we make an interpretation into the set $U$, we interpret them as the generalized elements corresponding to the projection maps $U^n \to U$.
We can correspondingly interpret the relation $R$ as being a particular map
$$ R : U^2 \to \{ \text{true}, \text{false} \} $$
and $R(x,y)$ is the corresponding generalized element of $\{ \text{true}, \text{false} \}$.
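As a small illustration of this reading, here is a sketch in Lean 4, using Lean's `Prop` in place of $\{ \text{true}, \text{false} \}$ (the names `xVar`, `yVar` and `Rxy` are mine):

```lean
-- A sketch: free variables over U as the projections out of U × U.
variable {U : Type}

-- x and y read as generalized elements of U at stage U × U
def xVar : U × U → U := Prod.fst
def yVar : U × U → U := Prod.snd

-- Given a binary relation R on U, R(x, y) is the corresponding
-- generalized element of the truth-value object, i.e. a map U × U → Prop.
def Rxy (R : U → U → Prop) : U × U → Prop :=
  fun p => R (xVar p) (yVar p)
```

Precomposing `Rxy R` with a pair of ordinary elements $(a, b)$ recovers the usual truth value $R(a, b)$.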
"What the difference between a free variable and the usage of $∃x$ ? I mean, isn't $∃x$ the same as some random variable?"
So the issue is:
"What are free variables actually useful for ?"
I think that it may be usefulto start from:
"What are quantifiers useful for ?"
Consider an example from elementary algebra:
$(x + y)^2 = x^2 + 2xy + y^2$.
Usually, we do not use quantifiers to state this "law", but it is implicitly universally quantified; i.e. the "intended" meaning of the law is:
for every two numbers $x$ and $y$, the above identity holds; i.e.
$\forall x \ \forall y \ [(x + y)^2 = x^2 + 2xy + y^2]$.
Thus, we may think that, in usual mathematical practice, we may dispense with the universal quantifier.
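For instance, the implicitly quantified law can be written with its quantifiers made explicit and checked mechanically; a sketch in Lean 4, assuming Mathlib is available for the `ring` tactic:

```lean
import Mathlib.Tactic.Ring

-- The "law" with its universal quantifiers written out explicitly,
-- stated here over the integers.
example : ∀ x y : Int, (x + y)^2 = x^2 + 2*x*y + y^2 := by
  intro x y
  ring
```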
But we can consider a different algebraic example:
$x^2-2x +1=0$.
In this case we are not asserting that the equation holds for all values of $x$.
When we ask for the solutions (if any) of equations, we are asking if the existential quantification of the equation holds, i.e. if
$\exists x \ (x^2-2x +1=0)$.
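A corresponding sketch in Lean 4 (again assuming Mathlib), where the existential claim is proved by exhibiting the witness $x = 1$:

```lean
import Mathlib.Tactic.Ring

-- Asserting solvability: exhibit the witness x = 1 and check the equation.
example : ∃ x : Int, x^2 - 2*x + 1 = 0 := ⟨1, by ring⟩
```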
The conclusion is that free variables in "usual" mathematical contexts can often be treated as implicitly universally quantified, but not always: an equation to be solved is implicitly existentially quantified. In other words, it is not possible to express all the "interesting" facts we want to express simply by reading every free variable as universally quantified upfront.
About:
"$\exists x$ is the same as some random variable",
the existential quantifier is needed in order to express the fact that, in a certain "domain of discourse", there are some objects (at least one) which satisfy a certain property.
For example, we have that in the domain $\mathbb N$ of natural numbers, there is an object (a number) which satisfies the equation $x = 0$: namely, the number $0$.
Thus, we are licensed to state the following true fact about $\mathbb N$ :
$\exists x \ (x = 0)$.
Of course, bound variables can be replaced without changing the meaning of the formula; thus we can write: $\exists y \ (y = 0)$, and the two formulae express the same fact about $\mathbb N$.
But we have to take care of "interactions" between quantifiers; we cannot start from $\exists x \ (x = 0)$, "existentially instantiate" it to derive e.g. $(y = 0)$, and then (incorrectly) apply UG to conclude $\forall y \ (y = 0)$, which is clearly false.
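These points, namely the true existential, the harmless renaming of the bound variable, and the falsity of the illegitimately generalized conclusion, can all be checked mechanically; a sketch in Lean 4:

```lean
-- The true existential fact about ℕ, with 0 as the witness.
example : ∃ x : Nat, x = 0 := ⟨0, rfl⟩

-- Renaming the bound variable expresses the same fact.
example : ∃ y : Nat, y = 0 := ⟨0, rfl⟩

-- The conclusion of the illegitimate EI-then-UG step is refutable:
-- y = 1 is a counterexample.
example : ¬ (∀ y : Nat, y = 0) := fun h => absurd (h 1) (by decide)
```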
These details are formalized by the rules of the proof system, and different "styles" of proof system can differ on the details, but all must capture our "intuitive" understanding of which types of inference are "correct".
Peter Smith's answer above gives a beautiful explanation of the role of free variables in first-order logic proofs.