Does the symbol "$+$" denote an operation in the notation "$a+ib$" for a complex number? If it does, which operation does "$+$" denote?
There is indeed a very annoying abuse of notation here. The short version is that the "$+$" in "$a+bi$" - in the context of defining the complex numbers - is being used as a purely formal symbol; that said, after having made sense of the complex numbers it can be conflated with complex addition.
An actually formal way to construct $\mathbb{C}$ from $\mathbb{R}$ is the following:
A complex number is an ordered pair $(a,b)$ with $a,b\in\mathbb{R}$.
We define complex addition and complex multiplication by $$(a,b)+_\mathbb{C}(c,d)=(a+c,b+d)$$ and $$(a,b)\times_\mathbb{C}(c,d)=(a\times c-b\times d, a\times d+b\times c)$$ respectively. Note that we're using the symbols "$+$," "$-$," and "$\times$" here in the context of real numbers - we're assuming those have already been defined (we're building $\mathbb{C}$ from $\mathbb{R}$).
We then introduce some shorthand: for real numbers $a$ and $b$, the expression "$a+bi$" is used to denote $(a,b)$, "$a$" is shorthand for $(a,0)$, and "$bi$" is shorthand for $(0,b)$. We then note that "$a+bi=a+bi$," in the sense that $$a+bi=(a,b)=(a,0)+_\mathbb{C}(0,b)=a+_\mathbb{C}bi$$ (cringing a bit as we do so).
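If it helps to see this construction executed, here is a minimal sketch in Scala; the names `Complex`, `addC`, and `mulC` are my own inventions for this illustration, not standard library API:

```scala
// A complex number is just an ordered pair of reals; the two
// operations are defined in terms of the real "+", "-", and "*".
case class Complex(a: Double, b: Double) // the ordered pair (a, b)

def addC(x: Complex, y: Complex): Complex = // (a,b) +_C (c,d) = (a+c, b+d)
  Complex(x.a + y.a, x.b + y.b)

def mulC(x: Complex, y: Complex): Complex = // (a,b) x_C (c,d) = (ac-bd, ad+bc)
  Complex(x.a * y.a - x.b * y.b, x.a * y.b + x.b * y.a)

// The shorthand "a + bi" names the pair (a, b), and indeed
// (a, 0) +_C (0, b) recovers exactly that pair:
val a = 3.0
val b = 4.0
assert(addC(Complex(a, 0), Complex(0, b)) == Complex(a, b))
```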
Basically, what's happening in the usual construction of the complex numbers is that we're overloading the symbol "$+$" horribly; this can in fact be untangled, but you're absolutely right to view it with skepticism (and it's bad practice in general to construct a new object so cavalierly).
This old answer of mine explains how properties of $\mathbb{C}$ can be rigorously proved from such a construction, and may help clarify things. Additionally, it's worth noting that this sort of notational mess isn't unique to the complex numbers - the same issue can crop up with the construction of even very simple field extensions (see this old answer of mine).
Some would say: we identify a real number $a$ with the complex number $(a,0)$. Then, using this identification, $$ (a,b) = (a,0)+(0,b) = (a,0)+(0,1)(b,0)= a+ib . $$ If we say it that way, then the "$+$" is complex addition. And (with this identification) every real number is also a complex number.
Maybe a teacher would (to start with) use a different notation for the real number $a$ and the complex number $a$. But after a while that different notation would be dropped, and the "identification" would be understood.
We have similar things at a more elementary level. A natural number is "identified" with an integer. An integer is "identified" with a rational number. A rational number is "identified" with a real number. Should we, in fact, keep different notations for all of these?
You're right that this poses an interesting problem. As with other things, there isn't "one right way" to deal with it, and it admits a number of interpretations with equal validity but different semantic content.
Some have been suggested here; I would like to suggest another - and that is type theory.
You see, I also have a fair bit of background in computer programming, and I remember hearing the claim that "computer programming tries, in the ideal, to be more like math". I thought there was some merit to this, and when I heard it I also started to wonder whether maths, likewise, might not benefit from being more like computer programming.
And one of the most useful concepts in computer programming is that of a "data type": everything in a computer is ultimately constructed out of strings of binary bits (at least at one level of abstraction), but we'd like to say that, in writing programs, some strings of bits are not interchangeable with other strings, because they are "meant" to represent different concepts. For example, a bitstring "01000001" could represent the decimal number 65 - an integer - or it could represent the letter 'A' (in one very common encoding system, at least). We obviously don't want to mix up text and numbers indiscriminately, so we assign these two things different "data types", at least within the programming language, even if the computer itself doesn't care at the base, or "implementation", level.
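As a concrete sketch of this point in Scala, assuming the usual ASCII/Unicode encoding in which 65 codes for 'A':

```scala
// One byte, two readings: 0x41 (binary 01000001) is the integer 65,
// and under ASCII it is also the letter 'A'. The bits alone don't decide.
val bits: Byte = 0x41
println(bits.toInt)  // 65 - read as a number
println(bits.toChar) // A  - read as text
```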
We encounter a very similar problem in maths, in how it is usually built. In a common "low-level" form of doing maths, most objects are represented "at the bottom" by sets - e.g. the number "$2$", as a natural number, is "implemented" with
$$2 := \{\ \{\}, \{ \{\} \}\ \}$$
basically just some sets nested inside other sets. But this leads to "weird" problems like the apparent validity of saying
$$\{\} \in 2$$
which is something you would, and indeed should, at first dismiss as nonsense, even though this formalism would recognize the above as valid. As you can see, this is no different from the computer situation where the bitstring could represent either a fragment of text (the letter 'A') or a number (65) - only here, we're dealing with sets, not bitstrings.
And that's the job of type theories: basically, they are ways to introduce a notion of "data types" like this into mathematics - though, unfortunately, it seems they aren't often used. In that way, we can declare something like
$$\{\} \in 2$$
to be illegal (that is, its result is undefined), even if we have "implemented" $2$ as a set, because we can record that $2$ and $\{\}$ belong to different types: we may call them, say, $\mathbf{nat}$ and $\mathbf{Set}$, and we would write
$$2 : \mathbf{nat}$$
to mean "2 has type 'nat', i.e. natural number", and
$$\{\} : \mathbf{Set}$$
to mean "$\{\}$ has type 'Set', i.e. a set". And then, trying to take
$$\{\} \in 2$$
fails because $\in$ cannot accept a non-$\mathbf{Set}$ object as its right-hand argument, even if our type theory would let us "implement" the type $\mathbf{nat}$ as a select subset of sets drawn from $\mathbf{Set}$: type theories roll the extra type information into the evaluation of expressions, and they would say the above expression must fail.
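Here is a loose sketch of that tagging in Scala; `Nat` and `MSet` are hypothetical types of my own, not anything standard, but they show how the tags make the ill-typed question unaskable:

```scala
// Once "2" carries the tag Nat and "{}" carries the tag MSet, asking
// whether {} is an element of 2 simply fails to type-check - even
// though Nat could perfectly well be *implemented* with sets under the hood.
final case class MSet(elems: List[MSet]) // crude sets-of-sets
final case class Nat(n: Int)             // naturals, tagged separately

def elem(x: MSet, s: MSet): Boolean = // the membership test demands an MSet on the right
  s.elems.contains(x)

val empty = MSet(Nil)
val two   = Nat(2)

// elem(empty, two) // rejected at compile time: Nat is not an MSet
```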
In the case at hand, what we have is that the operation $+$, here, takes two complex numbers - type $\mathbf{complex}$. But we have $a : \mathbf{real}$ and $b : \mathbf{real}$. And this crops up in computer programming, too: we may have a function that is defined to accept only arguments of, say, type "float" (a floating-point approximation of the real numbers), but many programming languages will let you call or invoke that function with integers as arguments, because of what is called a type coercion: the integers get implicitly "promoted" to floats, and then are passed as usual. Such type-coercion rules are used when things of one type have a "reasonable" equivalent in another but, as the differing types indicate, cannot just be naively interchanged.
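As a tiny sketch of this in Scala, where `half` is a made-up function and Int-to-Double widening is the built-in coercion:

```scala
// A function declared on Double happily accepts an Int argument:
// the Int is implicitly widened ("coerced") to a Double first.
def half(x: Double): Double = x / 2.0
val h = half(3) // the Int 3 is promoted to 3.0, then passed as usual
```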
And so we would do something similar in typed maths: there could be a type-coercion rule, or "implicit type conversion", between the reals and the complexes:
$$(\mathbf{complex})\ a := (a, 0)_\mathbb{C}$$
where we have subscripted to denote that the ordered pair represents a complex number, and hence itself has type $\mathbf{complex}$. Then when you do
$$a + ib$$
what is going on is that both "$a$" and "$b$" are first type-coerced to the complex numbers $(a, 0)_\mathbb{C}$ and $(b, 0)_\mathbb{C}$ by the given rule; then, per the rules of operator precedence (PEMDAS, etc.), the complex multiplication $ib = (0, 1)_\mathbb{C} \cdot (b, 0)_\mathbb{C}$ is carried out, and finally the complex addition $a + ib = (a, 0)_\mathbb{C} + (0, b)_\mathbb{C}$ is carried out, with the expression evaluating to $(a, b)_\mathbb{C}$.
Hence, from this perspective, $+$ is indeed complex addition, but there is an additional 'translation' going on involving the reals $a$ and $b$.
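As a sketch of how such a coercion might actually look, Scala 3 supports user-defined implicit conversions via `scala.Conversion`; the `Complex` type and the conversion rule below are my own illustrations, not library code:

```scala
import scala.language.implicitConversions

// Complex numbers as pairs, with overloaded + and *.
case class Complex(re: Double, im: Double):
  def +(that: Complex): Complex = Complex(re + that.re, im + that.im)
  def *(that: Complex): Complex =
    Complex(re * that.re - im * that.im, re * that.im + im * that.re)

// The coercion rule "(complex) a := (a, 0)" from above.
given Conversion[Double, Complex] = d => Complex(d, 0)

val i = Complex(0, 1)
val a = 3.0
val b = 4.0

// a and b are silently promoted to (a, 0) and (b, 0); then the complex
// product i * b and the complex sum are carried out, yielding (a, b).
val z: Complex = a + i * b // Complex(3.0, 4.0)
```

(Delete the `given Conversion` line and `a + i * b` no longer compiles - which is precisely the next point.)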
If the type coercion rule did not exist, then
$$a + ib$$
would be an invalid expression (because of mismatched types), and we would have to use the full
$$(a, 0)_\mathbb{C} + (0, 1)_\mathbb{C} \cdot (b, 0)_\mathbb{C}$$
to do the equivalent. Or we would just write $(a, b)_\mathbb{C}$.
Unfortunately, type theories remain a minority approach: although of interest as objects of study in themselves, they are not typically used foundationally, even though there's a good case, I think, that they can be more intuitive and capture rather readily some important aspects of mathematical usage that otherwise have to be dismissed as mere "sloppiness". Indeed, given the rise of the computer, harmonizing mathematics with computer programming seems only natural in the modern age.