What inspires people to define linear maps?
In calculus, the derivative is defined by $$ f'(x_0) = \lim_{x \to x_0} \frac{f(x) - f(x_0)}{x - x_0}. $$ Intuitively, if the input changes by a small amount $\Delta x$, and the corresponding change in the output is $\Delta f$, then the change in the output is related to the change in the input by the equation $$ \tag{1} \Delta f \approx f'(x_0) \Delta x. $$
We would like to generalize the idea of the derivative to functions $f:\mathbb R^n \to \mathbb R^m$. In this case, $\Delta x \in \mathbb R^n$ and $\Delta f \in \mathbb R^m$. What type of thing should $f'(x_0)$ be? $$ \underbrace{\Delta f}_{m \times 1} \approx \underbrace{f'(x_0)}_{\text{?}} \underbrace{\Delta x}_{n \times 1} $$ The answer is that $f'(x_0)$ should be a linear transformation from $\mathbb R^n$ to $\mathbb R^m$. Each component of the output should be a linear combination of the components of the input. That is the simplest or most obvious way to generalize the idea of multiplying by a scalar (in equation (1)) to this new setting, where the input and output are both vectors.
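A small numerical sketch of this idea (the function `f` and the step `dx` below are my own illustrative choices, not from the text): the derivative of $f:\mathbb R^2 \to \mathbb R^2$ at $x_0$ is a $2 \times 2$ matrix, estimated here by finite differences, and multiplying it by $\Delta x$ predicts $\Delta f$.

```python
# Sketch: for f: R^2 -> R^2, the "derivative" at x0 is an m x n matrix
# (the Jacobian). We estimate it by finite differences and check that
# df ~ f'(x0) dx, as in the generalized version of equation (1).
def f(v):
    x, y = v
    return [x * x + y, x * y]  # a simple nonlinear map R^2 -> R^2

def jacobian(f, x0, h=1e-6):
    """Approximate the m x n Jacobian matrix of f at x0 by finite differences."""
    f0 = f(x0)
    m, n = len(f0), len(x0)
    J = [[0.0] * n for _ in range(m)]
    for j in range(n):
        xp = list(x0)
        xp[j] += h
        fp = f(xp)
        for i in range(m):
            J[i][j] = (fp[i] - f0[i]) / h
    return J

x0 = [1.0, 2.0]
J = jacobian(f, x0)   # the exact Jacobian at (1, 2) is [[2, 1], [2, 1]]
dx = [0.01, 0.02]
# Linear prediction of the output change: df ~ J dx
df_pred = [sum(J[i][j] * dx[j] for j in range(2)) for i in range(2)]
# Actual output change: f(x0 + dx) - f(x0)
df_true = [f([x0[k] + dx[k] for k in range(2)])[i] - f(x0)[i] for i in range(2)]
```

For this small step, the predicted change and the true change agree to about four decimal places, which is exactly the sense in which the matrix $f'(x_0)$ plays the role of the scalar derivative.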
In my opinion, this is the clearest way to discover the idea of linear transformations. This is why we care about them. (At least, it is a major reason why we care about them.)
The fundamental strategy of calculus is to approximate a complicated nonlinear function $f$ by a linear function: $$ f(x) \approx \underbrace{f(x_0) + f'(x_0)(x - x_0)}_{L(x)}. $$
When we replace $f$ with its local linear approximation $L$, calculations are greatly simplified, and the approximation is often good enough to be useful. Most of calculus can be derived easily using this fundamental strategy.
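A one-variable sketch of this fundamental strategy (the function $f(x) = x^2$ and the point $x_0 = 3$ are my own illustrative choices): near $x_0$ the tangent-line approximation $L$ is excellent, and it degrades as we move away.

```python
# Replace f by its local linear approximation L(x) = f(x0) + f'(x0)(x - x0).
def f(x):
    return x * x

def fprime(x):
    return 2 * x  # exact derivative of x^2

x0 = 3.0

def L(x):
    """Local linear approximation of f at x0."""
    return f(x0) + fprime(x0) * (x - x0)

# For f(x) = x^2 the error f(x) - L(x) is exactly (x - x0)^2:
err_near = abs(f(3.1) - L(3.1))   # (3.1 - 3)^2 = 0.01
err_far = abs(f(5.0) - L(5.0))    # (5 - 3)^2 = 4.0
```

The error shrinks quadratically as $x \to x_0$, which is why the approximation is "often good enough to be useful" close to the point of interest.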
With this viewpoint, we see that calculus and linear algebra are connected at the most basic level.
Linearity is defined by additivity and scalar homogeneity: a map $T$ is linear if $T(x + y) = T(x) + T(y)$ and $T(cx) = cT(x)$ for all vectors $x, y$ and scalars $c$. I can't explain why linear maps have these properties, other than to say they are built into the definition.
The separate question of why we care about these maps is a good one. We define linearity because it's a simple and common property for relationships to have. "Map" is indeed another word for "function": we have a relationship $T$ between two vector spaces $X$ and $Y$, mapping a vector $x$ to another vector $y$. If we add another vector $\Delta x$ to $x$, this results in a change in the output, according to $T$. When $T$ is linear, the effect of adding $\Delta x$ to our independent variable $x$ is very predictable: it always adds $T(\Delta x)$. That is, the same change applied to our variable $x$, no matter what $x$ is beforehand, results in the same change in $y$: specifically, adding $T(\Delta x)$.
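This predictability can be checked directly (the $2 \times 2$ matrix $A$ and the inputs below are arbitrary choices of mine for illustration): adding the same $\Delta x$ to very different inputs $x$ always changes the output by the same amount, $T(\Delta x)$.

```python
# Sketch: T is multiplication by an arbitrary 2x2 matrix A, hence linear.
A = [[1.0, 2.0],
     [3.0, 4.0]]

def T(v):
    return [sum(A[i][j] * v[j] for j in range(2)) for i in range(2)]

dx = [0.5, -1.0]
shift = T(dx)  # the output change caused by adding dx, for every input x

for x in ([0.0, 0.0], [1.0, 1.0], [-3.0, 7.0]):
    after = T([x[0] + dx[0], x[1] + dx[1]])   # T(x + dx)
    before = T(x)                              # T(x)
    change = [after[i] - before[i] for i in range(2)]
    # change equals shift = T(dx) for each of these (and all other) inputs x
```

This is just additivity, $T(x + \Delta x) = T(x) + T(\Delta x)$, seen from the "how does the output respond to a change in the input" point of view.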
In a practical sense, this is a useful property to have, as it makes interpolation and extrapolation trivially easy. If you work $30$ hours in a week and take home $\$600$ in that week, then the linear relationship will tell you very quickly that you would take home $\$800$ for a $40$-hour week, or $\$400$ for a $20$-hour week.
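The wage example as code: because the relationship is linear, a single data point determines the whole relationship, and any other week is a one-line computation.

```python
# Linear relationship: pay = rate * hours, with a fixed hourly rate.
rate = 600 / 30        # $20 per hour, inferred from the 30-hour week
pay_40 = rate * 40     # $800 for a 40-hour week
pay_20 = rate * 20     # $400 for a 20-hour week
```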
Compare this to something a bit more complex. Say you're building a rocket to fly into space, and you're wondering how much fuel to put in the tank. The relationship is not linear, since doubling the fuel will not double the distance; each additional tonne of fuel adds weight, and more fuel is needed to propel this extra weight into space. So, to go twice as far into space, you'll need a fair bit more than double the fuel. This is an example of a non-linear relationship, and it makes things a little more complicated.
Often, when a non-linear relationship rears its head, people will attack it with some form of linearisation. In fact, calculus is primarily about linearisation; derivatives are, in essence, about approximating a (possibly non-linear) function by a linear function.
In a theoretical sense, linear maps lend themselves to some useful theory, which hopefully you're learning about. Linear maps between finite-dimensional spaces can be represented by matrices, and almost anywhere you see a matrix, there's some intuition about linear maps behind it. Linear operators (square matrices, in the finite-dimensional setting) are of great interest to people studying dynamical systems, and they form the basis for areas of study like $C^*$-algebras.
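The matrix representation can be sketched concretely (the map `T` below is an arbitrary linear map of my choosing): a linear map is completely determined by where it sends the standard basis vectors, and those images are exactly the columns of its matrix.

```python
# Sketch: recover the matrix of a linear map T: R^2 -> R^3 from the
# images of the standard basis vectors e1, e2.
def T(v):
    x, y = v
    return [x + y, 2 * x, 3 * y]  # an illustrative linear map

n, m = 2, 3
basis = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
columns = [T(e) for e in basis]   # T(e1) and T(e2)
# Assemble the m x n matrix whose j-th column is T(e_j):
M = [[columns[j][i] for j in range(n)] for i in range(m)]

def apply(M, v):
    """Matrix-vector product M v."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# apply(M, v) agrees with T(v) for every v, by linearity.
```

This works precisely because of linearity: any $v = v_1 e_1 + v_2 e_2$ satisfies $T(v) = v_1 T(e_1) + v_2 T(e_2)$, which is the matrix-vector product.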
Why are they called "linear" maps? Well, in essence, they preserve lines. Actually, there is a slightly larger class of maps called "affine" maps that also preserve lines, but these are merely linear maps with a constant vector added to them. Linear maps are the maps that preserve lines and the origin.
I hope that answers your question.