Can a Hadamard gate be realized with perfect precision?
All gates are imperfect. In the QC business, error rates are usually divided into those for single-qubit gates, such as the Hadamard, and those for two-qubit gates. Often the errors from all the single-qubit gates can be made small and comparable to one another, while two-qubit gates are typically more troublesome.
Any complete description of an experimental system will include an enumeration of these errors. For example, in the recent Google demonstration of quantum supremacy (1), the relevant numbers appear in their Fig. 2.
So the error of a Hadamard gate in their system is on the order of 0.16%.
As seen in this example, single-qubit errors are often among the smallest error sources, and they are also relatively uncorrelated; both properties make them good candidates for error correction. So they get talked about less because they generally matter the least, both for current devices and for anticipated future ones.
As an interesting side note, the error of a one-qubit operation due to over-rotation or under-rotation can also, in principle, be made arbitrarily small by the use of carefully designed composite pulses (2).
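To make this concrete, here is a minimal numerical sketch using one well-known composite sequence, the BB1 pulse of Wimperis (chosen here as an illustration; it is one example of the family of sequences referred to in (2)). The model assumes a systematic pulse-length error, i.e. every rotation angle is scaled by the same unknown factor (1 + ε):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)

def R(theta, phi):
    """Rotation by theta about the equatorial Bloch axis (cos phi, sin phi, 0)."""
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * axis

def infidelity(U, V):
    """1 - |Tr(U† V)|/2: gate infidelity, insensitive to global phase."""
    return 1 - abs(np.trace(U.conj().T @ V)) / 2

theta = np.pi / 2   # target rotation angle
eps = 0.02          # 2% systematic over-rotation of every pulse
target = R(theta, 0)

# Naive single pulse: the error appears at first order in eps.
naive = R((1 + eps) * theta, 0)

# Symmetric BB1 sequence: at eps = 0 the three inserted pulses compose to the
# identity; with the error present they cancel the leading error terms.
phi = np.arccos(-theta / (4 * np.pi))
def bb1(eps):
    pulses = [R((1 + eps) * theta / 2, 0),
              R((1 + eps) * np.pi, phi),
              R((1 + eps) * 2 * np.pi, 3 * phi),
              R((1 + eps) * np.pi, phi),
              R((1 + eps) * theta / 2, 0)]
    U = I2
    for P in pulses:   # time order: later pulses multiply on the left
        U = P @ U
    return U

print(infidelity(target, naive))     # ~1e-4
print(infidelity(target, bb1(eps)))  # orders of magnitude smaller
```

The residual error of the composite sequence scales as a higher power of ε than the naive pulse, which is the sense in which the error can be made arbitrarily small as ε shrinks.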
I will write a brief introduction to quantum error correction (QEC), as it relates to your question.
First, the operations I, X, Y, Z form a complete set, so any transformation of a single qubit can be written in terms of them. It is therefore sufficient to correct errors of these forms in order to correct general errors. For a proof, see the early papers by Shor or myself, or a textbook such as Nielsen and Chuang.
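The completeness claim is easy to check directly: any 2×2 operator $E$ decomposes as $E = c_I I + c_X X + c_Y Y + c_Z Z$ with $c_P = \mathrm{Tr}(P E)/2$. A small numpy sketch (an illustration of the algebra, not taken from the papers cited):

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
PAULIS = [("I", I2), ("X", X), ("Y", Y), ("Z", Z)]

def pauli_decompose(E):
    """Coefficients c_P = Tr(P E)/2 in E = sum_P c_P P."""
    return {name: np.trace(P @ E) / 2 for name, P in PAULIS}

# An arbitrary single-qubit error operator, e.g. a small rotation about x:
eps = 0.1
E = np.cos(eps / 2) * I2 - 1j * np.sin(eps / 2) * X

c = pauli_decompose(E)
recon = sum(c[name] * P for name, P in PAULIS)
print(np.allclose(recon, E))  # True: E is exactly a combination of I, X, Y, Z
```

Correcting a continuum of possible errors thus reduces to correcting the discrete set X, Y, Z (Y being equivalent to X and Z combined).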
Next, a physical system might wander out of the Hilbert space used for computing. In this case it can be corrected by detecting whether or not it has so wandered, returning it if it has, and then applying QEC.
A faulty logic gate is equivalent to a perfect gate followed by an error or errors.
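This equivalence is just algebra: if $U$ is the intended gate and $\tilde U$ the one actually applied, then $\tilde U = E\,U$ with $E = \tilde U U^\dagger$, i.e. the faulty gate equals the perfect gate followed by the error operator $E$. A quick check on a Hadamard, using an illustrative (not hardware-specific) error model:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # ideal Hadamard
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# A slightly faulty Hadamard: the ideal gate followed by a small unwanted
# z-rotation (a toy model chosen for illustration).
eps = 0.01
faulty = (np.cos(eps / 2) * np.eye(2) - 1j * np.sin(eps / 2) * Z) @ H

E = faulty @ H.conj().T            # the error operator
print(np.allclose(E @ H, faulty))  # True: faulty = perfect gate, then error
```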
To correct such errors, there are two approaches. First, there are a number of tricks, many of them developed for nuclear magnetic resonance experiments, such that a sequence of well-chosen operations can cancel out common systematic errors. After employing such tricks, what remains are random errors, and QEC is well-designed to cope with those. It works in some respects similarly to error correction in classical information theory, but there are some subtle points regarding correcting in two different bases simultaneously, and in extracting diagnostic information about noise without also extracting information about the logical state. I won't describe all that here but refer you to the (wide) literature.
Finally, QEC itself requires logic gates, and these will be faulty. This brings us to the set of ideas called fault-tolerance. They rely on the use of error-correcting networks which are carefully designed to avoid routes whereby errors can propagate and multiply. They also make use of some interesting features of quantum physics, whereby errors of different types (X, Z) propagate differently, so that the overall correction can go in two stages, neither of which overwhelms the other with noise.
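The different propagation of X and Z errors can already be seen on a single CNOT. Conjugating Pauli errors through the gate gives the standard identities: an X on the control copies onto the target, a Z on the control stays put, and a Z on the target copies back onto the control. A numerical check:

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])  # control = first qubit, target = second

def propagate(P):
    """Error P before the CNOT is equivalent to CNOT P CNOT after it."""
    return CNOT @ P @ CNOT  # CNOT is its own inverse

print(np.allclose(propagate(np.kron(X, I2)), np.kron(X, X)))   # X copies forward
print(np.allclose(propagate(np.kron(Z, I2)), np.kron(Z, I2)))  # Z does not
print(np.allclose(propagate(np.kron(I2, Z)), np.kron(Z, Z)))   # Z copies backward
```

Fault-tolerant circuit designs exploit this asymmetry, arranging the network so that neither the X-correcting stage nor the Z-correcting stage floods the other with propagated errors.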
You are right that large quantum algorithms require high precision of the operations, but it is not as bad as you might guess, and this is important. We only require precision at the level $1/Q$ where $Q$ is the number of steps in the algorithm. This $Q$ could be large, say $10^9$, but this is (obviously) a lot different from $10^{100}$. Memory errors meanwhile need to be at the level $1/(NQ)$ where $N$ is the number of logical qubits. QEC achieves this degree of precision. A $t$-error-correcting protocol reduces the failure probability from $O(p)$ to $O(p^{t+1})$ where $p$ is some basic noise level. If we take $p=0.01$ and $t=5$ then we get $\sim 10^{-12}$, which is good enough for many powerful algorithms.
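The arithmetic in that last step can be spelled out (constant factors are ignored here, as in the scaling estimate above):

```python
p = 0.01      # basic physical error rate per operation
t = 5         # number of errors the code can correct
Q = 10**9     # steps in the algorithm

logical_error = p ** (t + 1)   # O(p^{t+1}) scaling, constants dropped
print(logical_error)           # ~1e-12
print(logical_error < 1 / Q)   # True: below the 1/Q precision target
```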
Some tutorials and other introductory material can be found at:
https://users.physics.ox.ac.uk/~Steane/qec/QECtute.html