General question about analog and digital signals
Basically, from an electrical viewpoint, every "digital" signal is, as you say, only an approximation of a square wave. In particular, it will have finite rise and fall times.
At high speeds, it can be difficult to ensure the signal looks as nice as the theory assumes. To check that it will still be detected correctly as digital (i.e. that the receiver doesn't get utterly confused by a horribly shaped signal), the so-called eye diagram (aka eye pattern) is used to measure its characteristics over a number of samples.
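As a rough sketch of how such a diagram is built (a simulated NRZ signal with made-up oversampling, filtering and noise, not any standard's measurement procedure): slice the received waveform into pieces a couple of bit-periods long and overlay them.

```python
import numpy as np

# --- Illustrative parameters (assumed for this sketch, not from any standard) ---
samples_per_bit = 50      # oversampling factor
n_bits = 200              # number of random bits to simulate
rng = np.random.default_rng(0)

# Random NRZ bit stream, crudely low-pass filtered and with added noise,
# to mimic finite rise/fall times and a non-ideal channel.
bits = rng.integers(0, 2, n_bits)
waveform = np.repeat(bits, samples_per_bit).astype(float)
kernel = np.ones(samples_per_bit // 4) / (samples_per_bit // 4)
waveform = np.convolve(waveform, kernel, mode="same")   # crude low-pass
waveform += rng.normal(0.0, 0.05, waveform.size)        # additive noise

# Build the "eye": cut the waveform into two-bit-wide slices and stack them.
slice_len = 2 * samples_per_bit
n_slices = waveform.size // slice_len
eye = waveform[: n_slices * slice_len].reshape(n_slices, slice_len)

# Each row of `eye` is one trace; plotting all rows over the same time axis
# gives the familiar eye pattern. Here we just report how "open" the eye is
# at the mid-bit sampling instant of the first bit in each slice.
centre = eye[:, samples_per_bit // 2]
highs = centre[centre > 0.5]
lows = centre[centre <= 0.5]
print("eye opening at the sampling instant:", highs.min() - lows.max())
```

The wider that opening (and the cleaner the overlaid traces look), the more margin the receiver has to sample the signal correctly; the standards mentioned below essentially put limits on how closed the eye is allowed to get.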
Many standards (e.g. USB and what not) define some acceptable characteristics for this diagram.
Note that an eye pattern/diagram isn't restricted to just two [voltage] levels. It's also applicable when you have any number of discrete output levels. For example, Gigabit Ethernet over twisted pairs (1000BASE-T) uses not two but five different voltage levels.
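For a multi-level signal, the receiver's decision is essentially "which allowed level is this sample closest to?". A minimal sketch, using normalized illustrative levels rather than the actual 1000BASE-T voltages:

```python
# Illustrative nearest-level "slicer" for a 5-level (PAM-5) signal.
# The normalized levels are assumptions for the example, not the exact
# voltages specified for 1000BASE-T.
LEVELS = [-2.0, -1.0, 0.0, 1.0, 2.0]

def slice_pam5(sample: float) -> float:
    """Return the nominal level closest to the received sample."""
    return min(LEVELS, key=lambda level: abs(sample - level))

# Noisy received samples -> decided symbols
received = [1.9, -0.2, -1.1, 0.6, -2.3]
print([slice_pam5(v) for v in received])   # [2.0, 0.0, -1.0, 1.0, -2.0]
```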
Is it just our interpretation whenever the voltage passes some threshold or falls below it? Meaning, when the voltage is above some arbitrarily chosen threshold we consider it "high", but otherwise we consider it "low"?
Basically, yes, that's how it works: the voltage thresholds for what counts as "1" and what counts as "0" are defined by some standard.
Digital signals are binary. They have only two states - on or off, high or low, up or down - whatever you want to call them. As you have deduced, there is some threshold above which the value is deemed to be high and another threshold below which the value is deemed to be low. Digital is very simple to implement with transistors by turning them either fully on or fully off.
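A minimal sketch of that two-threshold decision (the 0.8 V / 2.0 V figures are typical TTL-style input thresholds, used here only for illustration; the real limits come from the logic family's or interface standard's datasheet):

```python
# Sketch of the two-threshold decision described above.
# V_IL and V_IH are illustrative TTL-style input thresholds.
V_IL = 0.8   # at or below this, the input is read as "low"
V_IH = 2.0   # at or above this, the input is read as "high"

def read_logic_level(voltage: float) -> str:
    if voltage <= V_IL:
        return "low"
    if voltage >= V_IH:
        return "high"
    return "undefined"   # between the thresholds the behaviour isn't guaranteed

for v in (0.2, 1.4, 3.1):
    print(f"{v:.1f} V -> {read_logic_level(v)}")
```

Note the gap between the two thresholds: a real input in that region may be read either way, which is exactly why the standards specify both limits.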
Analog signals are analogous to the quantity they are measuring. For example, a weighing scale might output a voltage proportional to the load, say 0 to 10 V for a 0 to 200 kg load. Another example is the signal from a microphone, which varies with the sound pressure acting on the microphone diaphragm. In this case the frequency varies with the pitch of the sound and the amplitude varies with the loudness.
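Converting such a proportional voltage back into the quantity it represents is just a linear mapping; a small sketch using the 0 to 10 V / 0 to 200 kg figures from the example above:

```python
# Linear mapping for the weighing-scale example: 0..10 V corresponds to 0..200 kg.
V_FULL_SCALE = 10.0      # volts at maximum load
LOAD_FULL_SCALE = 200.0  # kilograms at maximum load

def voltage_to_load_kg(voltage: float) -> float:
    """Convert the scale's output voltage to the load in kilograms."""
    return voltage * (LOAD_FULL_SCALE / V_FULL_SCALE)

print(voltage_to_load_kg(3.7))   # 74.0 kg for a 3.7 V reading
```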