Does "Let" imply existence?
Whatever you prove after "Let $x$ be [blah]" is a theorem under the assumption that $x$ is a blah. Notice that it's not a theorem under the assumption that there exists a blah, but that $x$ is such a blah.
In particular, it is a result with a free variable, but it doesn't say anything about whether such a blah exists. "Let $x$ be a real number with square $-1$. [blah] Therefore $0=1$" (I'll let you fill in the gaps of this proof!) is a perfectly valid proof, and it is not operating under the assumption that there exists a real number with square $-1$. It is operating under the assumption that $x$ is a real number with square $-1$.
In particular, if you already know that $0=1$ is false, then you can deduce the following result (with no free variable): there is no real number with square $-1$.
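If you like to see this checked formally, here is a minimal Lean 4 sketch of both stages (assuming Mathlib for the real numbers and the `linarith` tactic; the name `no_real_sq_neg_one` is just a label I chose). The hypothesis `hx` plays the role of "Let $x$ be a real number with square $-1$"; no existence claim appears anywhere.

```lean
import Mathlib

-- "Let x be a real number with square -1. [...] Therefore 0 = 1."
-- Everything lives under the hypotheses (x : ℝ) and (hx : x ^ 2 = -1);
-- nothing about existence is asserted.
example (x : ℝ) (hx : x ^ 2 = -1) : (0 : ℝ) = 1 := by
  exfalso                 -- 0 = 1 is false, so derive a contradiction instead
  linarith [sq_nonneg x]  -- 0 ≤ x ^ 2 together with hx is already contradictory

-- Discharging the hypothetical reasoning gives a result with no free variable.
theorem no_real_sq_neg_one : ¬ ∃ x : ℝ, x ^ 2 = -1 := by
  rintro ⟨x, hx⟩
  linarith [sq_nonneg x]
```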
Let me now state things more precisely. In most formal deduction systems, you have variables $x, y, z, x_0, \dots$. In a formula (not a proved formula, just a syntactically correct one), some variables can be bound (for instance, $x$ appearing within the scope of a quantifier such as $\forall x$) and some can be free.
To deal with this precisely in the deduction system, one very convenient tool is that of contexts. Contexts are better known from settings such as type theory, but they make complete sense and are very relevant in classical logic as well. A context $\Gamma$ is a declaration of free variables together with assumptions on these variables (in type theory you can, to some extent, package these into just stating the type of the variable, but in classical logic they come in the form of formulas). An example of a context $\Gamma$ is "variables: $x$; assumptions: $x$ is a real number, $x > 0$" (one variable and two formulas).
Along a deduction, the context can change, which can make your deduction system very flexible and much closer to actual mathematical practice than other deduction systems without contexts. One example of such a change of context is the $\exists$-introduction rule.
This says, very vaguely (there are precise hypotheses that I will not spell out), that if, in the context $\Gamma \cup \{x, \psi(x)\}$, where $\Gamma$ does not talk about $x$, you can deduce $\phi$, which does not talk about $x$ either, then in the context $\Gamma$ you can deduce $(\exists x, \psi(x))\implies \phi$ (in natural-deduction terminology this is usually presented as $\exists$-elimination, combined with $\implies$-introduction). Informally, what this is saying is: "if, just by assuming that $x$ satisfies $\psi(x)$ and nothing else about $x$, I can deduce that $\phi$ is true, then the mere existence of such an $x$ implies $\phi$". It's obvious when you say it like that, but in a formal deduction system you have to write it down.
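As a sanity check, here is how that rule can be rendered in Lean 4 (plain core Lean, no libraries; the names `ψ`, `φ`, `h` are just placeholders). The premise `h` plays the role of the deduction of $\phi$ in the extended context, and the conclusion is exactly $(\exists x, \psi(x)) \implies \phi$.

```lean
-- If φ follows from ψ x alone, with no further knowledge of x,
-- then the mere existence of such an x implies φ.
example {α : Type} {ψ : α → Prop} {φ : Prop}
    (h : ∀ x, ψ x → φ) : (∃ x, ψ x) → φ :=
  fun ⟨x, hx⟩ => h x hx
```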
In any case, what's the point of all of this?
Well, the sentence "Let $x$ be a [blah]" should be seen as a context declaration: you're declaring that you're working in the context $\Gamma$ = "variables: $x$; assumptions: such and such on $x$". This does not, in any way, assume anything about existence. When you conclude your proof, say by proving some $\psi$ which doesn't mention $x$, you reach the state "$\Gamma \vdash \psi$" (read: "$\psi$ can be proved in the context $\Gamma$").
This does not allow you to conclude that $\psi$ is correct, of course, because you're working within a context. Now, by the $\exists$-intro rule that I mentioned earlier, this does give you $\vdash (\exists x, \text{blah})\implies \psi$ in the empty context (or under some axioms with no free variables, which define a global context that you don't spell out at all times).
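Continuing the little Lean sketch from before (again assuming Mathlib), the discharged statement looks like this, with no free $x$ left anywhere:

```lean
import Mathlib

-- ⊢ (∃ x, x ^ 2 = -1) → 0 = 1, proved in the empty context.
example : (∃ x : ℝ, x ^ 2 = -1) → (0 : ℝ) = 1 := by
  rintro ⟨x, hx⟩
  exfalso
  linarith [sq_nonneg x]
```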
Theorems are results that are proved in the empty context. Now, if you want to deduce $\psi$ from this theorem, you have to make a statement about the existence of an $x$ such that blah; this is how implications work: if I want to prove $B$ but only know $A\implies B$, I had better say something about $A$!
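In the same Lean notation, here is what "saying something about $A$" amounts to when $A$ is an existence statement (the predicate `blah` and all other names below are placeholders):

```lean
-- Modus ponens with A = "∃ a, blah a": to use the implication,
-- we must actually produce a witness w together with a proof of blah w.
example {α : Type} {blah : α → Prop} {ψ : Prop}
    (himp : (∃ a, blah a) → ψ) (w : α) (hw : blah w) : ψ :=
  himp ⟨w, hw⟩  -- ⟨w, hw⟩ is the genuine ∃-introduction step
```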
So, in conclusion, you are completely free to use "let" even when things don't exist, or when you don't know whether they exist, as long as you draw the right conclusion from what you've actually proved, and don't say something like "Let $x$ be such that blah; therefore $\psi$. Hence $\psi$ holds outright" without knowing whether such an $x$ exists.
Note that it would be absurd to place this restriction on "let": math is not decidable, so there would be sentences for which it is not even decidable whether you are allowed to write them down!
Technically it would probably be better to say "suppose that $x$ is" rather than "let $x$ be" when we don't know for a fact that such an $x$ exists, or when we intend to prove by contradiction that it does not: we can always suppose something hypothetically. In practice, however, it is common, I would even say quite normal, to use "let" in both contexts, and I have little if any sympathy for your professor, who seems to me to have been riding an idiosyncratic hobbyhorse.
"Let" implicitly relies on an existential proposition. In formal logic, there is a rule of existence elimination. It says that if you have a proposition $\exists x\, P(x)$, where $P$ is a predicate, then you can open a subproof with $P(c)$ as a premise by introducing a new symbol $c$. If within that subproof you can derive a proposition $Q$ that does not mention $c$, then $Q$ can be moved outside the subproof and becomes a derived proposition in the main proof. So "let ..." can be seen as the combination of an existential proposition and the introduction of a subproof with a new symbol.
I strongly recommend reading the book "Language, Proof and Logic" by Jon Barwise and John Etchemendy and getting familiar with the Fitch proof style introduced in it. Honestly, most of the difficulties in learning rigorous math such as real analysis or linear algebra lie in a lack of understanding of formal logic. Universities are lacking in foundational logic education for new undergraduates and expect them to understand complicated proofs without knowing first-order logic thoroughly. It is very difficult for students to understand the so-called $\epsilon$-$\delta$ language and the implicit use of logic rules in English. The book provides you with a fundamental understanding of first-order logic and its application to set theory. Even if it is unrealistic to write everyday proofs in Fitch style, understanding it helps you figure out the correspondence between everyday language and formal language.