What is the point of model theory?
Model theory is an excellent tool for talking about the semantics of formal languages, particularly first-order logic with equality and with constants, functions, and non-logical predicates, by building models out of sets. You can do other, fancier things with it, like considering models of set theory itself, but I don't understand those applications and can't comment on them.
One interesting fact from model theory is the existence of nonstandard models of arithmetic. I think this is a good motivating example because it shows that axiomatizations of seemingly simple things like arithmetic do not pin down a semantics as much as we would like.
I can try to answer this from the perspective of someone who found out about model theory through the proof theory and programming language theory route. I don't know the subject well; I'm just a beginner. As a meta note, I think Noah Schweber is right and you should phrase your question more politely.
Basically, a structure consists of a domain of discourse together with an interpretation for each symbol of a signature. The interpretations are all drawn from our universe of sets in our underlying set theory. (A model is a structure in which a given theory is true; more on that below.)
A symbol is an abstract component of a signature. It can be a constant symbol (which behaves like a nullary function), a function symbol, or a predicate symbol. Function and predicate symbols each come with an arity. Normally there's a single domain of discourse, but you can generalize this to multiple sorts if you want to.
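To make this concrete, here is a minimal sketch in Python (the names `signature` and `structure` and this encoding are just illustrations I made up, not terminology from any library) of a signature and one structure for it:

```python
# A signature: symbol names mapped to arities, split by kind.
# A constant symbol is treated as a nullary (0-ary) function symbol.
signature = {
    "functions": {"f": 2, "e": 0},  # one binary function symbol, one constant
    "predicates": {"P": 1},         # one unary predicate symbol
}

# A structure for this signature: a domain of discourse plus an
# interpretation of every symbol, all built from ordinary sets.
structure = {
    "domain": {0, 1, 2},
    "functions": {
        "f": lambda x, y: (x + y) % 3,  # interpret f as addition mod 3
        "e": lambda: 0,                 # interpret the constant e as 0
    },
    "predicates": {
        "P": lambda x: x == 0,          # interpret P as "equals zero"
    },
}
```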
So, symbols in signatures can be used in two ways. First, they can appear in your formal language. For instance, if $f$ is a binary function symbol, you can express properties of $f$ in your language:
$$ \forall x \mathop. \forall y \mathop. f(x, y) = f(y, x) $$
Second, the function symbol $f$ is supplied an interpretation in any given structure.
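Continuing the toy sketch above, we can check by brute force whether this sentence is true in a given finite structure; the interpretation that $f$ receives determines the truth value:

```python
from itertools import product

def commutativity_holds(struct):
    """Evaluate  forall x. forall y. f(x, y) = f(y, x)
    in a finite structure by checking all pairs from the domain."""
    dom = struct["domain"]
    f = struct["functions"]["f"]
    return all(f(x, y) == f(y, x) for x, y in product(dom, repeat=2))

print(commutativity_holds(structure))  # True: addition mod 3 commutes

# Reinterpreting f as truncated subtraction falsifies the same sentence:
structure2 = {"domain": {0, 1, 2},
              "functions": {"f": lambda x, y: max(x - y, 0)}}
print(commutativity_holds(structure2))  # False
```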
A set of sentences using non-logical symbols taken from a particular signature defines a theory. A structure in which all the sentences of the theory are true is called a model of the theory.
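In the toy encoding above, a theory is just a collection of sentence-checks, and being a model means passing all of them (again, a sketch under the assumptions above, not a general-purpose model checker):

```python
# A theory as a list of sentence-checking functions; a structure is a
# model of the theory when every sentence of the theory is true in it.
theory = [commutativity_holds]  # e.g. the theory of commutative magmas

def is_model(struct, theory):
    return all(sentence(struct) for sentence in theory)

print(is_model(structure, theory))   # True: addition mod 3 commutes
print(is_model(structure2, theory))  # False: truncated subtraction does not
```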
But structures are also made of sets, which means their domains have cardinalities.
This means you can talk about the spectrum of a theory: for each cardinality, the number of models of that cardinality, counted up to isomorphism.
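In the standard notation, the spectrum is the function

$$ I(T, \kappa) = \text{the number of models of } T \text{ of cardinality } \kappa \text{, up to isomorphism}. $$

For example, Cantor's back-and-forth argument shows that the theory of dense linear orders without endpoints has $I(T, \aleph_0) = 1$: every countable model is isomorphic to $(\mathbb{Q}, <)$.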
The two results that Noah Schweber mentioned in a comment constrain the shape of a theory's spectrum at infinite cardinalities. (Again, I'm counting models up to isomorphism here, not "raw" models.)
The Löwenheim–Skolem theorem says that if your theory (in a countable language) has an infinite model, then it has at least one model of every infinite cardinality; for an uncountable language, this holds for every infinite cardinality at least the size of the language.
Morley's categoricity theorem says that if a countable theory has exactly one model (up to isomorphism) of some uncountable cardinality $\kappa$, then it has exactly one model of every uncountable cardinality.
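A standard example illustrating both theorems: the theory $\mathrm{ACF}_0$ of algebraically closed fields of characteristic $0$ satisfies

$$ I(\mathrm{ACF}_0, \kappa) = 1 \text{ for every uncountable } \kappa, \qquad I(\mathrm{ACF}_0, \aleph_0) = \aleph_0, $$

since an uncountable algebraically closed field is determined up to isomorphism by its transcendence degree over $\mathbb{Q}$, while the countable models are classified by transcendence degrees $0, 1, 2, \ldots, \aleph_0$.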
If you wish to understand your own question, you need to learn some history of how mathematics became the mathematics of today.
Some accounts of the priority debate between Newton and Leibniz suggest that British mathematics pursued a logical calculus in the spirit of Newton's achievements. Meanwhile, Continental mathematics developed the calculus along Leibnizian methods. One may understand the work of Augustus de Morgan with regard to symbolic algebras in this context. Arguably, explicit accounts of formalism begin with his categories "peculiar symbols," "rules of operation," and "meanings." He refers to algebras with meanings as "significant algebras." He points out that an individual who discovers a symbolic algebra with no instruction of intended meanings might invent meanings which are no less significant than those pragmatically conveyed from teacher to student.
It is important to understand that the significance of Skolem's criticism of Zermelo falls under this approach to studying mathematics. Logicism as introduced by Frege rejected this formalist view, and while Russell's logicism differed from Frege's, it rejected the formalist view as well.
Chang and Keisler describe model theory as "logic + universal algebra." Of course, the symbolic algebras studied by de Morgan fall under universal algebra today. But as logic developed in the late nineteenth century, rules of inference became axiomatized; that is, logic now had "rules of operation" in the sense described by de Morgan. However, these rules of operation act upon formulas. For formalists, Skolem's criticism of Zermelo had been a first step toward understanding the "definite properties" of mathematical statements in the form of first-order formulas.
Your questions about proof theory speak to the influence of David Hilbert. His interest in formalism seems to have been motivated by the desire to ensure that mathematics could not be responsible for contradictions that might arise in applications of mathematics. A secondary motivation had been to sever mathematics from its association with spatiotemporal intuition. Hilbert did not protest Kant's reduction to a priori intuition nearly as much as others in his day; he simply maintained that Kant attributed too much to a priori intuition. His book "Foundations of Geometry" had been significant for eradicating temporal verbs from geometric reasoning. And, because the nineteenth century produced a plurality of geometries, he demonstrated that "peculiar symbols" and "rules of operation" could, indeed, support different "meanings."
When Hilbert turned his attention to mathematical logic some time later, Russell had already published several works, and Hilbert had been aware of them. But he also retained his concerns about consistency. His first attempts to ground mathematical induction attracted criticism from Poincaré over circularity. His response to this had been a limited invocation of a priori intuition: in Hilbert's case, the immediate sensible impression made by symbols on a page. You could not have proof theory without this. Formalists maintain that Hilbert's metamathematical induction is weaker than the mathematical induction in the object language. Metamathematical enumeration consists of alternating strokes and plus signs with only strokes as initial and terminal symbols. Formulas are concatenations of the "peculiar symbols." Proofs are the concatenations of formulas according to the "rules of operation" given by the logical calculus.
Of course, there is no reason for platonists to respect the subtleties Hilbert needed to circumvent Poincaré's objections. And, there is no reason to actually distinguish formalism from platonism with regard to Goedel's work. Hilbert needed inductive structure for the purpose of verification. His metamathematics specifically assumes an inseparable relationship between arithmetic and logic. Goedel's approach to recursion is clearly platonistic. And Hilbert's presuppositions are vulnerable to the arithmetization of syntax.
In so far as model theory is understood to arise from universal algebra, "meanings" in semantics ought to relate to one another in much the same way that algebraic realizations relate to one another through homomorphisms and isomorphisms. In an appendix to his book on mathematical logic, Kreisel divides foundational studies into semantical foundations and combinatorial foundations. His usage may be compared with the study of groups as is done in combinatorial group theory. In combinatorial group theory one studies presentations (de Morgan's "rules of operation"). In combinatorial foundations one begins with signatures (de Morgan's "peculiar symbols").
This is important for understanding Skolem's objections to Zermelo. Above all, Zermelo's set theory had failed to be categorical. The signature of Zermelo's theory admitted more than just a single interpretation (de Morgan's "meanings"). This is certainly a problem for a theory claiming to be a foundation for mathematics.
There are two particularly important features of first-order logic relating to infinities. One is the compactness theorem. The other is the Löwenheim–Skolem theorem. First-order logic is the only logic closed under conjunction, unary negation, and existential quantification which satisfies both of these theorems (this is Lindström's theorem). And, while one might not require a complete logic, the completeness of first-order logic is an artifact of Hilbert's concern for the study of consistency.
Not every logic is complete. Your willingness to set aside semantics in favor of proof theory is only possible by virtue of a history in which Hilbert emphasized the need to consider the possibility that mathematics might contain hidden inconsistencies. That is, it is only possible because first-order logic is complete.
Today we have many logics which are axiomatized, and we understand the term "axiomatic" in a very general way. This had not been true of Hilbert. He used the term in contradistinction with "genetic." He recognized that logicism as practiced by Cantor and Dedekind was inherently constructive. Because constructive mathematics could be interpreted in terms of temporal succession, he wanted to divorce mathematics from constructive approaches.
This relates to platonism in the following sense. Aristotle proceeded from syntactic analysis on the basis of use cases. He distinguished between demonstrative argumentation and dialectical argumentation. Demonstrative argumentation relates to pedagogy, definition, and "first principles." To a large extent, the "exactitude" of ordinary mathematics arises from the idiosyncratic use of definition in mathematics. But demonstrative argumentation is described in terms of "ordered knowledge claims." It is genetic. Hilbert's view of mathematics breaks this. In the modern first-order paradigm, this divergence from Aristotle's account lies with Beth's definability theorem.
Dialectical argumentation is subdivided into other use cases. It covers legal argumentation (forensic use), political argumentation (rhetorical use), and counterfactual argumentation (analytical use). When people ask whether or not truth in mathematics can be distinguished from simple belief, they are asking if analytical use can really be distinguished from rhetorical use. One place they differ lies with the relationship to definition. Analytically, one cannot prove a definition, and any counterexample destroys a definition. Unfortunately, Hilbert's approach obfuscates this distinction as well.
What remains is a logic with both existential import and the necessary truth of reflexive equality statements. And, this is precisely what is needed for a platonist to study mathematical objects as if they were objects of experience. These two features admit interpretation of the law of identity as a metaphysical presupposition. One can weaken this by distinguishing metaphysics and ontology. Ontologically, the law of identity can be understood as arising from the formal condition that two occurrences of a denoting symbol denote uniformly.
To understand the significance of Goedel's platonism, look at the SEP entry on "Recursive Functions" at plato.stanford.edu.
In Section 1.2 you will find discussion of Skolem's paper on primitive recursive arithmetic. In the body of that section, item (ii) of the discussion of Skolem's assumptions explains how Skolem assumed that provably equal descriptive functions could be substituted for one another. Let me emphasize the expression "provably equal." This speaks to the very description theory that the Hilbert school worked to eliminate from mathematics.
I have worked out non-propositional inference rules by which Skolem's use of description theory is realized. It cannot be done using the usual first-order paradigm, because it must manage existential import in a manner similar to negative free logic. In particular, numbers are "ordered" according to existential priority as described by Aristotle in his book "Categories." Aristotle describes this in the same section where he explains how demonstrative argumentation is characterized by the ordering of knowledge statements according to what is prior and what is posterior.
If you scroll down to Goedel's contribution to the study of recursive functions, you can see that his formulation had been immediately effective for his questioning of the Hilbert program. The SEP entry goes on to explain that primitive recursive arithmetic had to be extracted from the Goedelian approach. Apparently this clarification had been left to Church and Rosser.
For what it is worth, Goedel's platonism is well known by his own admission. But, I think that comparing the effectiveness of Goedel's development of recursion to Skolem's careful criticism of how quantifiers were being understood in classical logic exemplifies its significance.
With regard to your question about "temporal verbs," you can pick up any translation of Euclid's "Elements." He commonly speaks of "constructing" elements of diagrams as his proofs proceed. Any variant of the word "produce" would qualify. Book I, Proposition 16 reads:
"In any triangle, if one of the sides be produced, the exterior angle is greater than either of the interior and opposite angles."
Good luck with your studies. For what it is worth, Eric Schechter's book on the mathematics of propositions ("Classical and Nonclassical Logics") is an excellent resource for understanding the application of mathematics to the study of logic independent of foundational programs. You will not, for example, have your study of logic undermined by the ulterior motive of teaching you Goedel's incompleteness theorems before you learn about a plurality of logics. As Schechter points out, ordinary mathematical practice can apply different logics in different circumstances. The desire to exclude irrelevant assumptions speaks to relevance logic rather than classical logic. And the need for a relation in recursion theory that distinguishes "defined expressions" from "undefined expressions" is clearly related to the principle of indiscernibility of non-existents from negative free logic.