Is "tautology" a syntactical notion?
It is not so much a question of which definition as of what your perspective is. In other words, it depends on what you mean by "syntactical" and "semantical". :-)
From the perspective of model theory, it is convenient to consider "tautology" to be a syntactical concept, because it's a matter of the shape (so to speak) of a formula, and not of how the formula's meaning relates to a model at all. So it's a concept that is not particularly interesting from a model theorist's point of view -- he will consider it a background concept that comes from the concept of a formula rather than from the models he's really concerned with, and all of that counts as "syntax" for him.
On the other hand, propositional logic has its own distinction between syntactic and semantic concepts. Here "semantic" is used for things that concern truth values and the evaluation of formulas to truth values, whereas "syntax" is about picking formulas apart and putting them together in new configurations, as in symbolic proofs. In that world, "tautology" is firmly established as a "semantic" concept. (Or so I thought -- but see also Noah Schweber's answer, which shows that he considers the word "tautology" to belong to the syntactic side even when considering only propositional logic. He and I agree about what is syntactic and what is semantic in that context, but not about the preferred definition of "tautology".)
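To make that semantic reading concrete, here is a small sketch (my own illustration, not from any particular textbook; the nested-tuple encoding of formulas and the function names are invented for the example): a formula is a tautology exactly when it evaluates to true under every assignment of truth values to its sentence letters.

```python
# Illustration: "tautology" read semantically, i.e. as a property of how a
# formula evaluates, not of how it can be manipulated in proofs.
from itertools import product

# Formulas as nested tuples: ("p",), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g)

def letters(phi):
    """Collect the sentence letters occurring in phi."""
    if len(phi) == 1:
        return {phi[0]}
    return set().union(*(letters(sub) for sub in phi[1:]))

def evaluate(phi, assignment):
    """Truth value of phi under a (total) assignment: letter -> bool."""
    op = phi[0]
    if len(phi) == 1:
        return assignment[op]
    if op == "not":
        return not evaluate(phi[1], assignment)
    if op == "and":
        return evaluate(phi[1], assignment) and evaluate(phi[2], assignment)
    if op == "or":
        return evaluate(phi[1], assignment) or evaluate(phi[2], assignment)
    if op == "implies":
        return (not evaluate(phi[1], assignment)) or evaluate(phi[2], assignment)
    raise ValueError(f"unknown connective {op!r}")

def is_tautology(phi):
    """Semantic check: true under every assignment to phi's letters."""
    ls = sorted(letters(phi))
    return all(evaluate(phi, dict(zip(ls, vals)))
               for vals in product([True, False], repeat=len(ls)))

# p -> (q -> p) is a tautology; p -> q is not.
p, q = ("p",), ("q",)
print(is_tautology(("implies", p, ("implies", q, p))))  # True
print(is_tautology(("implies", p, q)))                  # False
```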
Moral: "syntax" and "semantics" are not crisp technical terms, but are fuzzy categories that you use to structure your theory building within each particular field.
Actual answer
I don't have Keisler's book at hand, so I'm going off the quoted passage only; I could therefore be misrepresenting what Keisler is actually saying. Also, note that my notion of "syntactic" is different from Henning Makholm's.
The relevant clause here is
> *all* the sentence symbols occurring in $\phi$
(emph. mine). Keisler isn't looking at actual models - by which I mean assignments of truth values to all the sentence letters - but merely "finite partial models." He's construing these as syntactical, rather than semantic, objects, and I think that's the source of the confusion here. Keisler is taking into account not just the language used to describe the thing in question but also the procedure used to analyze it, which leads him to place these finite variable assignments on the "syntax" side: they can be fully analyzed via truth tables, which are completely finitary, as opposed to the a-priori-infinitary complete assignments themselves.
However, I think that identifying "syntactic" and "finitary" is a big mistake. Recall the way we construct a truth table for a formula: we use exactly the same clauses as in the definition of truth of a sentence in an assignment! In fact, in my experience, the more one stares at this, the more natural the following alternate approach becomes: take as our semantics all partial assignments of truth values to sentence letters. This is actually something we can do easily - in particular, we modify the definition of satisfaction to say that a partial assignment $a$ makes a sentence $\phi$ true only if every sentence letter occurring in $\phi$ is in the domain of $a$. This results in some minor changes (e.g. we have $a\not\models\phi$ and $a\not\models\neg\phi$ whenever $\phi$ has a sentence letter not in the domain of $a$), but these ultimately aren't hard to deal with.
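Here is a small sketch of that modified satisfaction clause (my own illustration, using the same nested-tuple encoding of formulas as the earlier sketch; `satisfies` is a name invented for this example):

```python
# Satisfaction relative to a *partial* assignment (a dict defined on only some
# sentence letters), with the modified clause from the text: a |= phi requires
# every letter occurring in phi to lie in the domain of a.

def letters(phi):
    return {phi[0]} if len(phi) == 1 else set().union(*(letters(s) for s in phi[1:]))

def evaluate(phi, v):
    op = phi[0]
    if len(phi) == 1:
        return v[op]
    if op == "not":
        return not evaluate(phi[1], v)
    if op == "and":
        return evaluate(phi[1], v) and evaluate(phi[2], v)
    return evaluate(phi[1], v) or evaluate(phi[2], v)   # "or"

def satisfies(a, phi):
    """a |= phi in the partial-assignment semantics."""
    return letters(phi) <= a.keys() and evaluate(phi, a)

p, q = ("p",), ("q",)
a = {"p": True}                                      # partial: says nothing about q
print(satisfies(a, ("or", p, ("not", p))))           # True
print(satisfies(a, q), satisfies(a, ("not", q)))     # False False -- neither q nor not-q holds
```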
Whether or not that shift is something that appeals to you, it should be clear why I find Keisler's claim that finite truth value assignments are syntactic to be dubious at best. Finitary, sure, but that's a different thing (in my opinion at least).
Obnoxiously, even granting my point, there's still a terminological issue: I've seen "tautology" used to refer to both the semantic notion and the syntactic notion. Logic, unfortunately, is rife with this sort of inconsistency: we're often lulled into a false sense of security by results which tell us that these inconsistencies can't cause us problems in the particular situations we're interested in at the moment - in this case, that's the completeness/soundness theorems at work - but they're still annoying for students and can bite us down the road.
Digression, 1/2
I can't resist at this point defending the syntax/semantics distinction as something which can at least in principle be made precise and whose precisification is valuable and interesting. Granted, this is a point about which much ink (physical and digital) has been spilled, but I think it's still worth saying a bit about. And on that note, here's a "digression" which is longer than the entire rest of my answer!
There is no single definition of "logic" - indeed, I think that's part of the beauty of the subject - but the following is fairly decent (I'm being somewhat informal for simplicity):
An abstract logic $\mathcal{L}$ is a tuple $(Sent_\mathcal{L}, Mod_\mathcal{L},\leadsto_\mathcal{L},\models_\mathcal{L})$ where $Sent_\mathcal{L}$ is a set of things called "sentences," $Mod_\mathcal{L}$ is a set of things called "models," $\leadsto_\mathcal{L}$ is a relation between sets of sentences and sentences, and $\models_\mathcal{L}$ is a relation between models and sentences.
From now on I'll suppress the "$\mathcal{L}$"-subscripts.
The relation "$\leadsto$" gives our basic proof steps - the deduction relation "$\vdash$" (or more precisely, "$\vdash_\mathcal{L}$") is the transitive closure of "$\leadsto$," and tells us what sentences we can eventually prove from a given set of hypotheses. A "coarser" definition would have skipped $\leadsto$ in favor of $\vdash$ alone, but I think it's good to take this finer approach.
The set $Mod$ and the $\models$-relation provide our semantics, while $\leadsto$ (with its transitive closure $\vdash$) provides our syntax; the only commonality between the two$^1$ is that they both involve $Sent$. So we can make the syntax/semantics divide precise in this context by saying that something is syntactic if it only involves $Sent$ and $\leadsto$, and is semantic if it only involves $Sent$, $Mod$, and $\models$.
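As a toy illustration of this picture (entirely my own encoding: `Rule`, `derivable`, and the string "sentences" below are invented for the sketch, not standard notation), one can read $\leadsto$ as a finite set of one-step rules and $\vdash$ as what becomes derivable by iterating them:

```python
# Sentences are just strings; ~> is a finite set of one-step rules
# (premises ~> conclusion); |- is obtained by closing a hypothesis set
# under those one-step rules.
from typing import FrozenSet, NamedTuple, Set

class Rule(NamedTuple):                 # one instance of the ~> relation
    premises: FrozenSet[str]
    conclusion: str

def derivable(hypotheses: Set[str], rules: Set[Rule]) -> Set[str]:
    """Everything provable from `hypotheses` by iterating the one-step rules."""
    proved = set(hypotheses)
    changed = True
    while changed:
        changed = False
        for rule in rules:
            if rule.premises <= proved and rule.conclusion not in proved:
                proved.add(rule.conclusion)
                changed = True
    return proved

# A tiny modus-ponens-style system over a handful of named sentences.
rules = {
    Rule(frozenset({"p", "p -> q"}), "q"),
    Rule(frozenset({"q", "q -> r"}), "r"),
}
print(derivable({"p", "p -> q", "q -> r"}, rules))   # contains q and r as well as the hypotheses
```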
Digression, 2/2
Let me now turn things back towards Keisler's statement, at least a little bit.
One important thing to keep in mind is that the same "logic" may be presented as an abstract logic in multiple different ways, in the same way that a single natural-language algorithm might have many different specific implementations. For example, let's look at propositional logic:
- On the semantic side, there is the usual semantics where $Mod$ consists of all (total) assignments, but there is also the "partial assignment" semantics mentioned above. These have meaningful differences: for instance, "$\mathcal{M}\models\varphi$ or $\mathcal{M}\models\neg\varphi$" is true of the former but not the latter.
- On the syntactic side, there are many different proof systems we can use for propositional logic, which give rise to different $\leadsto$-notions.
However, these implementations can be equated in precise ways:
- If $a$ is a partial assignment, $b$ is a total assignment extending $a$, and $\varphi$ is a sentence using only letters in the domain of $a$, then $a\models \varphi$ in the partial assignment semantics iff $b\models\varphi$ in the total assignment semantics. (A brute-force check of this agreement is sketched just after this list.)
  - Note that this is exactly Keisler's observation! And this is why I disagree with his characterization. "Finitary," yes; "syntactic," no.
- All the various $\leadsto$-notions we consider have the same transitive closure.
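Continuing the toy sketches above (the `letters`, `evaluate`, and `satisfies` helpers are my own illustrations, and I am pretending the language has only the three letters $p$, $q$, $r$), the first equivalence can be brute-forced for a small example:

```python
from itertools import product

p, q, r = ("p",), ("q",), ("r",)
phi = ("or", ("and", p, ("not", q)), q)      # uses only the letters p and q

for pv, qv in product([True, False], repeat=2):
    a = {"p": pv, "q": qv}                   # partial assignment covering phi's letters
    for rv in (True, False):
        b = {**a, "r": rv}                   # total assignment (on p, q, r) extending a
        assert satisfies(a, phi) == evaluate(phi, b)
print("a |= phi (partial) iff b |= phi (total), for every extension b")
```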
Incidentally, the same situation holds with respect to first-order logic. One half of this is very well-known: we quickly see many different $\leadsto$-notions which all have the same transitive closure. More interestingly and much less commonly known, there are alternate semantics which are quite different but still "equivalent" in a precise sense to the standard one (these crop up in algebraic logic, especially around problems like "how many variables do we need to introduce to prove a sentence of a given type?" - see e.g. here).
Thinking about what these equivalences mean leads us to notions like embeddings of and "nice" maps between logics; that is, to the study of logics as algebraic objects, just like groups, rings, fields, etc. The syntax/semantics distinction (and other distinctions) helps us organize the different aspects of the logics (informally construed) that we're interested in, and one of the things this does is indicate algebraic aspects of their more abstract presentations which we can focus on (which is one reason I care about disagreeing with Keisler's presentation).
$^1$Yay footnotes.
There's a slight abuse here, since in natural language we'd consider sentences to be syntactic objects while in this case I want to consider them neither syntactic nor semantic. But this is fairly benign. That said, we can sharpen this distinction, and present a logic as consisting of a purely semantic part with no mention of sentences, a syntactic part as above, and the $\models$-relation as a third jointly-syntactic-and-semantic part. If you're interested, the "something satisfying" section of this old answer of mine says a bit about what the purely semantic part of such a thing might be like.