What is the intuition behind the "par" operator in linear logic?
⅋ also had me baffled for a long time. The intuition I've arrived at is this: You have both an A and a B, but you can't use them together.
Examples of this (considered as entities in a computer program) include:
- A⊸A (= $A⅋A^⊥$) is a function value; you can't get rid of it by connecting its input to its output.
- $A⅋A^⊥$ is a communication channel (future): you can put a value into one end and retrieve it from the other. It is forbidden, however, to do both from the same thread (risk of deadlock, leading to ⊥). A small code sketch of this reading follows below.
(The computer program analogy may not make sense for everybody, of course - but this has been my approach to the topic, and my basis for intuitions.)
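To make the channel reading concrete, here is a small Haskell sketch (my own illustration; the names and the choice of MVar are not part of any linear-logic formalism). The MVar plays the role of the future: filling it and emptying it correspond to the two ends of $A⅋A^⊥$, and a single thread that tries to serve both ends in the wrong order simply blocks forever, which is the "deadlock, leading to ⊥" above.

```haskell
import Control.Concurrent (forkIO)
import Control.Concurrent.MVar (newEmptyMVar, putMVar, takeMVar)

main :: IO ()
main = do
  -- The channel/future: one end accepts a value, the other delivers it.
  chan <- newEmptyMVar

  -- Fine: the two ends are used from *different* threads.
  _ <- forkIO (putMVar chan "hello")
  msg <- takeMVar chan
  putStrLn msg

  -- Not fine: serving both ends from the *same* thread in the wrong order,
  -- e.g.
  --   bad <- newEmptyMVar
  --   x   <- takeMVar bad   -- blocks forever: the matching putMVar below
  --   putMVar bad "oops"    --   is never reached (the analogy's ⊥)
  pure ()
```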
You can also read the inference rules bottom-up, from conclusion to premises and back. The rule for ⅋ introduction (on the left of the turnstile),
$$ \frac{ Δ_1,A ⊢ Γ_1 \quad Δ_2,B ⊢ Γ_2 }{ Δ_1,Δ_2, A⅋B \ ⊢ \ Γ_1,Γ_2 } $$
can thus be read as "given a value of type A⅋B, you can divide your remaining resources into two, transform them into new ones, and combine the results." And A⅋B cannot be split in any other way: A and B must become separate and cannot take part in the same transformations.
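For a concrete instance, take Δ1 and Δ2 empty and let the premises be the identity axioms A ⊢ A and B ⊢ B:

$$ \frac{ A ⊢ A \quad B ⊢ B }{ A⅋B \ ⊢ \ A, B } $$

A single A⅋B on the left yields an A and a B on the right, but each comes out of its own premise: no sub-derivation ever gets to hold both at once.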
Please see the Wikipedia article on linear logic: the symbol ⅋ ("par") is used to denote multiplicative disjunction, whereas ⊗ is used to denote multiplicative conjunction. They are duals of each other. There is also some discussion of this connective there.
The "par" operation is also sometimes denoted by the symbol $\sqcup$, which the paper (pdf) "Linear Logic Displayed" describes as "join-like."
The logical rules for ⊗ and ⅋ are as follows:
- If Γ, A, B, Δ ⊢ Θ, then Γ, A⊗B, Δ ⊢ Θ; conversely, if Γ ⊢ Δ, A and Λ ⊢ B, Θ, then Γ, Λ ⊢ Δ, A⊗B, Θ.
- Dually, if Γ ⊢ Δ, A, B, Θ, then Γ ⊢ Δ, A⅋B, Θ; conversely, if A, Γ ⊢ Δ and Θ, B ⊢ Λ, then Θ, A⅋B, Γ ⊢ Δ, Λ.
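As a quick check of the duality, instantiating the ⊗ rule above with the identity axioms A ⊢ A and B ⊢ B (taking Δ and Θ empty) gives

$$ \frac{ A ⊢ A \quad B ⊢ B }{ A, B \ ⊢ \ A⊗B } $$

so a tensor on the right needs both resources supplied together, whereas the ⅋ rule sends A and B into separate premises (as in the instance A⅋B ⊢ A, B shown earlier).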
A multiplicative connective distributes over an additive one when one is a conjunction and the other is a disjunction:
- A⊗(B⊕C) ≡ (A⊗B)⊕(A⊗C) (and on the other side): multiplicative conjunction distributes over additive disjunction;
- A⅋(B&C) ≡ (A⅋B)&(A⅋C) (and on the other side): multiplicative disjunction distributes over additive conjunction.
Also:
- A⊗0 ≡ 0 (and on the other side);
- A⅋⊤ ≡ ⊤ (and on the other side).
Linear implication $A\multimap B$ corresponds to the internal hom, which can be defined as $(A⊗B^⊥)^⊥$. There is a de Morgan dual of the tensor called ‘par’, with $A⅋B = (A^⊥⊗B^⊥)^⊥$. Tensor and par are the ‘multiplicative’ connectives, which roughly speaking represent the parallel availability of resources.
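Unfolding these definitions shows that linear implication is itself a par: using the de Morgan law $(X⊗Y)^⊥ = X^⊥⅋Y^⊥$ and the involutivity of $(-)^⊥$,

$$ A \multimap B \;=\; (A ⊗ B^⊥)^⊥ \;=\; A^⊥ ⅋ B^{⊥⊥} \;=\; A^⊥ ⅋ B, $$

which ties back to the reading of $A⅋A^⊥$ as a function value above.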
The ‘additive’ connectives & and ⊕, which correspond in another way to traditional conjunction and disjunction, are modeled as usual by products and coproducts.
For a nice exploration of linear logic, consider the following game semantics.
Game semantics for linear logic was first proposed by Andreas Blass (Blass 1992). The semantics here may or may not be the same as the one Blass proposed.
We can interpret any proposition in linear logic as a game between two players: we and they. The overall rules are perfectly symmetric between us and them, although no individual game is. At any given moment in a game, exactly one of these four situations obtains: it is our turn, it is their turn, we have won, or they have won; the last two states continue forever afterwards (and the game is over). If it is our turn, then they are winning; if it is their turn, then we are winning. So there are two ways to win: because the game is over (and a winner has been decided), or because it is forever the other player's turn (either because they have no move or because every move results in its still being their turn).
This is a little complicated, but it's important in order to be able to distinguish the four constants:
In ⊤, it is their turn, but they have no moves; the game never ends, but we win.
Dually, in 0, it is our turn, but we have no moves; the game never ends, but they win.
In contrast, in 1, the game ends immediately, and we have won.
Dually, in ⊥, the game ends immediately, and they have won.
The binary operators show how to combine two games into a larger game:
In A&B, it is their turn, and they must choose to play either A or B. Once they make their choice, play continues in the chosen game, with ending and winning conditions as in that game.
Dually, in A⊕B, it is our turn, and we must choose to play either A or B. Once we make our choice, play continues in the chosen game, with ending and winning conditions as in that game.
In A⊗B, play continues with both games in parallel. If it is our turn in either game, then it is our turn overall; if it is their turn in both games, then it is their turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If we have won both games, then we have won overall; if they have won either game, then they have won overall.
Dually, in A⅋B, play continues with both games in parallel. If it is their turn in either game, then it is their turn overall; if it is our turn in both games, then it is our turn overall. If either game ends, then play continues in the other game; if both games end, then the overall game ends. If they have won both games, then they have won overall; if we have won either game, then we have won overall.
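Here is a rough transcription of these games into Haskell (my own sketch; the type and function names are hypothetical, and one detail left open by the description above is assumed: the player to move may move in any component game where it is currently their turn).

```haskell
data Player = Us | Them deriving (Eq, Show)

-- At any moment a game is either waiting for a move by one player (each move
-- leads to a new game) or finished, with a winner.
data Game = Turn Player [Game] | Won Player

other :: Player -> Player
other Us   = Them
other Them = Us

-- Negation: swap the roles of the two players everywhere.
dual :: Game -> Game
dual (Won p)     = Won (other p)
dual (Turn p gs) = Turn (other p) (map dual gs)

-- The four constants, as described above.
one, bot, top, zero :: Game
one  = Won Us        -- 1: ends immediately, we win
bot  = Won Them      -- ⊥: ends immediately, they win
top  = Turn Them []  -- ⊤: forever their turn (no moves), so we win
zero = Turn Us   []  -- 0: forever our turn (no moves), so they win

-- Additives: the indicated player chooses which component to play.
with, plus :: Game -> Game -> Game
with a b = Turn Them [a, b]  -- A & B: they choose
plus a b = Turn Us   [a, b]  -- A ⊕ B: we choose

-- Replace every eventual result of a game by a fixed winner, keeping the turn
-- structure: used when one component of a multiplicative has already been
-- decided in a way that settles who wins if the whole game ever ends.
forfeit :: Player -> Game -> Game
forfeit p (Won _)     = Won p
forfeit p (Turn q gs) = Turn q (map (forfeit p) gs)

-- Tensor: both games in parallel; it is our turn overall if it is our turn in
-- either component; a finished component hands play to the other; we must win
-- both components to win a game that ends.
tensor :: Game -> Game -> Game
tensor (Won Us) g     = g               -- we won one; the other decides
tensor g (Won Us)     = g
tensor (Won Them) g   = forfeit Them g  -- they won one; if the game ever
tensor g (Won Them)   = forfeit Them g  --   ends at all, they win overall
tensor a@(Turn pa as) b@(Turn pb bs) = Turn mover moves
  where
    mover = if pa == Us || pb == Us then Us else Them
    moves = [tensor a' b | pa == mover, a' <- as]
         ++ [tensor a b' | pb == mover, b' <- bs]

-- Par is the de Morgan dual of tensor, exactly as in the logic.
par :: Game -> Game -> Game
par a b = dual (tensor (dual a) (dual b))
```

With these definitions one can check, for instance, that par zero one evaluates to zero and par bot one to one, matching the analysis of 0⅋1 and ⊥⅋1 given further below; the "forever the other player's turn" way of winning is not stored anywhere but read off the Turn structure, as in the prose above.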
So we can classify things as follows:
In a conjunction, they choose what game to play; in a disjunction, we have control. Whoever has control must win at least one game to win overall.
In an addition, one game must be played; in a multiplication, all games must be played.
To further clarify the difference between ⊤ and 1 (the additive and multiplicative versions of truth, both of which we win), consider A⅋⊤ and A⅋1. In A⅋⊤, it is always their move (since it is their move in ⊤, hence their move in at least one game), so we win just as we win ⊤. (In fact, A⅋⊤≡⊤.) However, in A⅋1, the game 1 ends immediately, so play continues as in A. We have won 1, so we only have to end the game to win overall, but there is no guarantee that this will happen. Indeed, in 0⅋1, the game never ends and it is always our turn, so they win. (In ⊥⅋1, both games end immediately, and we win. In A⊗1, we must win both games to win overall, so this reduces to A; indeed, A⊗1≡A.)
Negation is easy: to play $A^⊥$, simply swap roles and play A.
A game is valid if we have a strategy to win (whether by putting the game in a state where we have won or by guaranteeing that it is forever their turn). The soundness and completeness of this interpretation is the theorem that A is a valid game if and only if ⊢A is a valid sequent. (Recall that all questions of validity of sequents can be reduced to the validity of single propositions.)
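Continuing the same sketch (and assuming the hypothetical Game type defined above), the existence of a winning strategy can at least be computed for games whose move trees are finite; the second way of winning, "forever their turn", shows up as the vacuous case where they are stuck with no moves.

```haskell
-- Do we have a winning strategy? This only terminates on finite game trees;
-- games with genuinely infinite plays would need a coinductive treatment.
weWin :: Game -> Bool
weWin (Won Us)       = True
weWin (Won Them)     = False
weWin (Turn Us gs)   = any weWin gs  -- some move of ours must keep us winning
weWin (Turn Them gs) = all weWin gs  -- vacuously true when they have no moves
```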
When I was doing research in interaction nets and linear logic, my personal intuition for the connectives was via a traditional Hintikka game semantics (I think different from Blass's).
To me $A⅋B$ can be interpreted as meaning "Given a refutation of $A$, I can prove $B$ and given a refutation of $B$ I can prove $A$".
This has some nice features: first, it brings out the multiplicative nature of the connective (the "and" in the middle) and second it is clear why $A⅋A^⊥$ is true. Clearly if I am given a refutation of $A$ I can refute $A$. There is nothing non-constructive about it. Third it does not require use of parallel execution (though of course that can be an advantage as well).
In my interpretation tensor $A⊗B$ means "I can prove $A$ and I can prove $B$".
In the above "prove" means "win as verifier" and if there is a context it includes the assumption that I use the context linearly. E.g. if there is an $A$ in the context I will use it exactly once.
I don't recall anyone ever explaining the connectives that way precisely, but it may be equivalent to some other formulation.
EDIT: Henning is right; I think the better way to put it is "Given a refutation of $A$, I can prove $B$ or given a refutation of $B$ I can prove $A$".
Par is an assertion that both options are possible and makes no commitment to either, but the surrounding proof context may select one or the other.