Syntax/semantics conflation leads to infinitary logic
Andrej gives a good general answer, but here is a more specific elaboration of what I think may have been in Moore’s mind here.
Work in the language of PA, or some similar language intended to be interpreted in the natural numbers. Consider the two formulas (in modern notation): $$ \exists x\ \varphi(x) \qquad \qquad \bigvee_{n \in \mathbb{N}} \varphi(\bar{n}) $$ where $\bar{n}$ is the numeral $S^n(0)$, as usual.
What’s the difference? Obvious answer: the first formula is an existential quantifier; the second is an (infinitary) disjunction. But that terminological answer is a bit unsatisfying, since (as you know) existential quantification can be analysed as a kind of disjunction, and vice versa. The real essential difference is that the former is quantifying over the domain of individuals under consideration, while the latter is quantifying over the actual natural numbers.
We can exhibit this difference concretely by looking at their interpretations in a nonstandard model of PA. However, as long as they are just interpreted in $\mathbb{N}$, there’s no difference — they’re completely equivalent, since the domain of individuals is the actual natural numbers. So to articulate the difference, we have to see the formulas as part of a syntax that can be interpreted in many different structures, not simply as a way of describing the single structure $\mathbb{N}$.
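For instance, one standard way to make the difference visible is to take $\varphi(x)$ to be the formula $y = x$, with a fresh free variable $y$, and close both expressions under $\forall y$: $$ \forall y\ \exists x\ (y = x) \qquad \qquad \forall y \bigvee_{n \in \mathbb{N}} (y = \bar{n}) $$ The first is trivially true in every structure; the second says precisely that every individual is the denotation of some numeral, which holds in $\mathbb{N}$ but fails in every nonstandard model of PA, since a nonstandard element satisfies none of the disjuncts.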
In summary, if syntax is not set up as independent from semantics, but only for a specific fixed structure, then one loses the distinction between existential quantifiers and infinitary disjunctions indexed by the domain. Similarly, one loses the distinction between having infinitely many constants in the language and infinitely many elements in the domain, and so on. In a modern setup, where the language is defined independently of its interpretation, these distinctions are clear; in the early setups they were not.
When one first articulates the distinction, therefore, one can analyse the ambiguous existential quantifications of the older language either as internally indexed (i.e. as existential quantifications in the modern sense) or as externally indexed (i.e. as infinitary disjunctions in the modern sense). With a modern eye, we do the former without thinking; but for Löwenheim, it was just as reasonable to understand them in the latter sense, and to be led thereby to infinitary logics.
It sounds to me like this question is asking us to divine the thinking of mathematicians in the early 20th century. Obviously I can do no such thing, but perhaps I can explain how the early views of logic make very good sense from the point of view of modern logic.
In classical treatments of first-order logic and model theory, the dichotomy between syntax and semantics is quite pronounced, and one easily gets the impression that it is necessary. Students are taught that finitary syntax must be the norm. However, this really is just a design choice. For instance, in categorical logic one customarily considers internal languages and logics of various kinds of categories, without insisting that finitary syntax is paramount or primary.
Let us give an example. The internal language of a particular topos $\mathcal{E}$ has as types all (names of) the objects, and as term formers all the (names of) morphisms of the topos, of which there may be arbitrarily many. If there is a formal distinction between an individuum and its name, it is certainly not dwelled on or considered essential. This point of view is close to that of Löwenheim, who "did not explicitly distinguish the names of individuals from the individuals themselves". Because the distinction was not there, it could not have led to infinitary logic.
Furthermore, we take as the axioms all equations of the form $x : A \vdash g(f(x)) = h(x)$ whenever $h = g \circ f$ in $\mathcal{E}$. (I shall address infinitary operations shortly.) The usual finitary syntax is special because of its universal mathematical property: it is the internal language of the initial topos (where of course "initial" has to be appropriately interpreted).
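Schematically (only a sketch, with the rest of the type theory of a topos suppressed), the term formers and axioms just described look like $$ \frac{x : A \vdash t : B \qquad f \colon B \to C \ \text{in } \mathcal{E}}{x : A \vdash f(t) : C} \qquad\qquad x : A \vdash g(f(x)) = h(x) \quad \text{whenever } h = g \circ f \ \text{in } \mathcal{E}, $$ so the topos $\mathcal{E}$ itself supplies the "alphabet" of the language, however large that may be.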
My point is that it looks as if the mindsets of early 20th century logicians were closer to that of categorical logic than to first-order logic and model theory. Or to put it another way, the conflation of syntax and semantics was not a mistake. Using the word "conflation" to describe what they did betrays a very syntactically minded view of logic, rooted in the philosophy of (human!?) language, which later came to dominate logic for several decades. It is easy to argue in the opposite direction and present the dichotomy as a flaw: the preoccupation with pure syntax by logicians of the mid 20th century demonstrates their inability to think abstractly, and can be likened to the historic period of mathematical analysis during which "function" was equated with "expression".
The question invites primarily opinion-based answers, which is why I voted to close it.
Let me address the question of infinitary languages. Here again we first need to realize that "finitary vs. infinitary" is not a political or a philosophical question, or if it is, it should be wrested from politicians and philosophers and made into a proper mathematical question. The language should be precisely as infinitary as the situation demands. (To my geometrically minded self a language without a situation is meaningless. In fact, a language will create a situation for itself if none is given in advance.) Thus, the internal language of elementary toposes (and logical morphisms) is not infinitary, because not all such toposes have sufficient infinitary structure, but the internal language of Grothendieck toposes (and geometric morphisms) is infinitary, because Grothendieck toposes are cocomplete.
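For instance (just to indicate what "infinitary" means here), a geometric sequent may involve a disjunction indexed by an arbitrary external set $I$, $$ \varphi \ \vdash_{\vec{x}}\ \bigvee_{i \in I} \psi_i, $$ matching the arbitrary set-indexed colimits that a Grothendieck topos has and an elementary topos need not.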
Under the "conflation" of syntax and semantics the semantics naturally suggests the infinitary operations. Contrary to what the question seems to presuppose, one does not even start with a finitary language, and there is no step under which we are "lead" to an infinitary language. I find it odd that one would explain the early 20th century logic as if there was some sort of passage from finitary to infinitary logic. With a century of hindsight (but perhaps not with only half a century of hindsight) it seems more natural that the finite/infinitary distinction appears afterwards, when one has already had some success in formulating a logical language appropriate to the semantic domain under consideration. (Of course, a semantic domain incorporates not only an extension of individuals but also the appropriate structure carried by the extension.)
One last remark: early logicians, particularly those who studied complete Boolean algebras, were naturally inclined to interpret $\forall$ and $\exists$ as very large conjunctions and disjunctions. In fact, many mathematicians today still naively think of quantifiers as infinitary versions of connectives, and it takes some training in logic to see that the difference between $\exists$ and $\bigvee$ is a bit like the difference between uniform and pointwise continuity. Quantifiers are infinite suprema and infima only in certain situations, whereas in general they are adjoints, as was discovered by F. W. Lawvere.
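To spell out the adjoint description in one line: for a variable $x$ to be quantified away, write $\psi(y)$ also for the same predicate regarded in the larger context $x, y$; then $$ \exists x.\,\varphi(x,y) \vdash \psi(y) \ \iff\ \varphi(x,y) \vdash \psi(y) \qquad\qquad \psi(y) \vdash \forall x.\,\varphi(x,y) \ \iff\ \psi(y) \vdash \varphi(x,y), $$ i.e. $\exists$ is left adjoint and $\forall$ is right adjoint to this weakening, and only in special situations (such as classical set-based semantics, where predicates form complete Boolean algebras) are these adjoints computed as pointwise infinite suprema and infima.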