What exactly is a judgement?

I highly recommend reading Martin-Löf's paper referenced by Ulrik Buchholtz in the comments to your question. Apart from that, here are a couple of points that might help, some of which were already made by Andreas Blass in his answer.

A judgement is an act of knowing, or of asserting, a piece of knowledge about a mathematical object. For instance, a non-traditional single-sorted first-order logic could have the following kinds of judgements:

  1. $t\ \mathsf{term}$, intended meaning "$t$ is a well-formed term"
  2. $P\ \mathsf{prop}$, intended meaning "$P$ is a well-formed formula"
  3. $\Gamma\ \mathsf{hyps}$, intended meaning "$\Gamma$ is a well-formed list of hypotheses"
  4. $\Gamma \vdash P\ \mathsf{holds}$, intended meaning "formula $P$ is derivable under hypotheses $\Gamma$".

You do not normally see all of these because for first-order logic the first three are very simple and they get relegated to an informal explanation of syntax. But notice that there is a difference between the syntactic object "$P \land Q$" and the assertion "$P \land Q$ is a well-formed formula" (a judgment). If you doubt that there is a difference, consider the syntactic object "$P \land (Q \lor R$". We can speak meaningfully about it ("It's missing a closing parenthesis", "It contains two connectives."), but we do not assert "$P \land (Q \lor R$ is a well-formed formula".
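To drive the point home, here is a tiny Haskell sketch (the names are mine, chosen for illustration) that computes exactly those two observations about the syntactic object, even though the object is not a wff:

```haskell
-- A syntactic object we can inspect, even though it is not a wff.
input :: String
input = "P ∧ (Q ∨ R"

-- "It contains two connectives."
countConnectives :: String -> Int
countConnectives = length . filter (`elem` "∧∨")

-- "It's missing a closing parenthesis."
parensBalanced :: String -> Bool
parensBalanced = go (0 :: Int)
  where
    go n []       = n == 0
    go n ('(':cs) = go (n + 1) cs
    go n (')':cs) = n > 0 && go (n - 1) cs
    go n (_:cs)   = go n cs

main :: IO ()
main = do
  print (countConnectives input)  -- 2
  print (parensBalanced input)    -- False
```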

Let me emphasize the difference between the sequent $$\Gamma \vdash P$$ and the judgement $$\Gamma \vdash P\ \mathsf{holds}.$$ The sequent is a syntactic object. The judgement is an assertion about the syntactic object. A lot of texts use the same notation for both, and that's where some of the confusion may be coming from. (Also note that many texts write "$\mathsf{true}$" instead of "$\mathsf{holds}$", but I want to emphasize here that I am not talking about semantic truth, but rather about derivability.)
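To make the distinction concrete, here is a small Haskell sketch (all names are my own): a sequent is a piece of data we can form freely, while the judgement that it holds is witnessed by a separate object, a derivation tree whose conclusion we can check.

```haskell
data Formula = Atom String | And Formula Formula
  deriving (Eq, Show)

-- The sequent Γ ⊢ P: a syntactic object, constructible whether or not
-- it is derivable.
data Sequent = Sequent [Formula] Formula
  deriving (Eq, Show)

-- Evidence for the judgement "Γ ⊢ P holds": a derivation tree, each
-- node labelled with the sequent it concludes.  (Only two rules here,
-- to keep the sketch short.)
data Derivation
  = Hyp Sequent                            -- P is among the hypotheses Γ
  | AndIntro Derivation Derivation Sequent -- the ∧-introduction rule
  deriving Show

conclusion :: Derivation -> Sequent
conclusion (Hyp s)          = s
conclusion (AndIntro _ _ s) = s

-- Check that a derivation really follows the rules.
check :: Derivation -> Bool
check (Hyp (Sequent g p)) = p `elem` g
check (AndIntro d e (Sequent g r)) =
  check d && check e &&
  case (conclusion d, conclusion e) of
    (Sequent g1 p, Sequent g2 q) ->
      g1 == g && g2 == g && r == And p q
```

A value like `Sequent [] (Atom "P")` exists perfectly well even though no derivation concludes it; that gap is exactly the difference between the syntactic object and the judgement.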

Rules of inference explain how we can build evidence for assertions. The "syntactic" assertions, such as "$P$ is a well-formed formula" or "variable $x$ does not occur in $P$", can be explained in terms of inference rules, but you won't see them in logic textbooks, because they prefer to explain syntax in an informal way, e.g., "If $P$ is a wff and $Q$ is a wff then $P \land Q$ is a wff." If we want to, we can write down the corresponding formal rule: $$\frac{P\ \mathsf{prop} \qquad Q\ \mathsf{prop}}{(P \land Q)\ \mathsf{prop}}.$$ This says: "If $P$ is a well-formed formula and $Q$ is a well-formed formula, then $P \land Q$ is a well-formed formula." It is different from the rule $$\frac{\Gamma \vdash P\ \mathsf{holds} \qquad \Gamma \vdash Q\ \mathsf{holds}}{\Gamma \vdash P \land Q\ \mathsf{holds}}$$ which says: "If $P$ is derived from hypotheses $\Gamma$ and $Q$ is derived from hypotheses $\Gamma$, then $P \land Q$ can also be derived from hypotheses $\Gamma$."
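Continuing the sketch (again, every name is my own), the two rules become two different operations: one manufactures evidence of well-formedness, the other evidence of derivability.

```haskell
data Formula = Atom String | And Formula Formula
  deriving (Eq, Show)

-- Evidence for "P prop", carrying the formula it certifies.
newtype IsProp = IsProp Formula deriving Show

-- Evidence for "Γ ⊢ P holds" (kept opaque here; the previous sketch
-- shows a fuller version with checkable derivation trees).
data Holds = Holds [Formula] Formula deriving Show

-- Formation rule:    P prop    Q prop
--                    ----------------
--                     (P ∧ Q) prop
andProp :: IsProp -> IsProp -> IsProp
andProp (IsProp p) (IsProp q) = IsProp (And p q)

-- Derivability rule:  Γ ⊢ P holds    Γ ⊢ Q holds
--                     --------------------------
--                         Γ ⊢ (P ∧ Q) holds
-- The two premises must share the same Γ, hence the Maybe.
andIntro :: Holds -> Holds -> Maybe Holds
andIntro (Holds g p) (Holds g' q)
  | g == g'   = Just (Holds g (And p q))
  | otherwise = Nothing
```

In a more serious implementation one would hide the `Holds` constructor behind a module boundary, so that the only way to manufacture such evidence is through the rules.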

To see why it might make sense to have formal rules for establishing that something is a wff, consider the rule: $$\frac{ }{\Gamma \vdash t = t\ \mathsf{holds}}$$ It does not say that $t$ must be a well-formed term. It would be more precise to say: $$\frac{\Gamma\ \mathsf{hyps} \qquad t\ \mathsf{term}}{\Gamma \vdash t = t\ \mathsf{holds}}$$ Now the rule is explicit: reflexivity holds only under well-formed hypotheses, and only for well-formed terms. Of course, we need to give rules for $t\ \mathsf{term}$ and $\Gamma\ \mathsf{hyps}$. They would be something like $$ \frac{ }{()\ \mathsf{hyps}} \qquad\qquad \frac{P\ \mathsf{prop} \qquad \Gamma\ \mathsf{hyps}}{(P, \Gamma)\ \mathsf{hyps}} $$ and (assuming that the language has variables $x_k$, a constant $1$ and a binary operation $+$): $$ \frac{k \in \mathbb{N}}{x_k\ \mathsf{term}} \qquad\qquad \frac{ }{1 \ \mathsf{term}} \qquad\qquad \frac{t\ \mathsf{term}\qquad u\ \mathsf{term}}{(t + u)\ \mathsf{term}} $$ I am fudging details about the meta-level natural numbers that are used for indexing variable names. You can devise your own solution (for instance, use a unary representation of numbers and give inference rules for that representation).
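Here is one way such rules come out as a checker over raw syntax, using the unary representation of numbers just mentioned (a Haskell sketch; all names are mine, and I give only the $\mathsf{term}$ rules, since $\mathsf{hyps}$ follows the same pattern):

```haskell
-- Raw syntax: an uninterpreted tree of symbols.  Unlike a dedicated
-- Term datatype, Raw can also represent ill-formed objects, so the
-- judgement "t term" has real work to do.
data Raw = Sym String | App String [Raw]

-- Unary representation of the meta-level natural numbers used to
-- index variables, with its own two "rules".
isNat :: Raw -> Bool
isNat (Sym "zero")     = True
isNat (App "succ" [n]) = isNat n
isNat _                = False

-- The three rules for "t term":
--   k ∈ ℕ ⇒ x_k term      1 term      t term, u term ⇒ (t + u) term
isTerm :: Raw -> Bool
isTerm (App "var"  [k])    = isNat k
isTerm (Sym "one")         = True
isTerm (App "plus" [t, u]) = isTerm t && isTerm u
isTerm _                   = False

-- isTerm (App "plus" [Sym "one", App "var" [Sym "zero"]]) == True   (1 + x₀)
-- isTerm (App "var" [Sym "one"])                          == False  (index is not a numeral)
```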

It is perhaps silly to follow such rigorous formal discipline for first-order logic, but there are formal systems where the discipline is necessary. Dependent type theory is one example; formal systems describing programming languages are another. These can grow quite complex, and since they also get implemented in practice, it is quite helpful to have all the details written down formally.


I think the problem here is that different logical systems can formalize different sorts of information. For example, in traditional deductive systems, the notion of "well-formed formula" is not formalized; it's part of the meta-language. But in other systems, notably Martin-Löf type theory, that notion (or at least a very close relative of it) is included in the formal system by judgements of the form "$A$ prop".

Similar things happen with other notions. For example, the notion of substitution, i.e., "the result of substituting term $t$ for free variable $x$ in formula $\varphi$" is usually available only in the meta-language, but there are systems in which it is also formalized in the object language.
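For a sense of what the meta-language version looks like, here is a naive Haskell sketch of substitution as an ordinary function on syntax trees (the names are mine; I deliberately ignore variable capture, which a real implementation must handle by renaming bound variables):

```haskell
data Term    = Var String | Const Integer | Plus Term Term
  deriving Show
data Formula = Eq Term Term | Imp Formula Formula | Forall String Formula
  deriving Show

-- φ[t/x] at the meta level: substitute t for the free occurrences of x.
substTerm :: Term -> String -> Term -> Term
substTerm t x (Var y) | y == x    = t
                      | otherwise = Var y
substTerm _ _ (Const n)  = Const n
substTerm t x (Plus a b) = Plus (substTerm t x a) (substTerm t x b)

subst :: Term -> String -> Formula -> Formula
subst t x (Eq a b)  = Eq (substTerm t x a) (substTerm t x b)
subst t x (Imp p q) = Imp (subst t x p) (subst t x q)
subst t x (Forall y p)
  | y == x    = Forall y p             -- x is bound here: nothing to do
  | otherwise = Forall y (subst t x p) -- naive: may capture variables of t
```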

In my (admittedly limited) experience with such things, the word "judgement" is usually used for the object-language formalization of assertions that, in many other systems, would be part of the meta-language. Typical examples would be "$A$ is a proposition," "$T$ is a type," and "$X$ is defined to be $D$."