The definitions of linear combination, span, independence, and basis carry over unchanged to infinite-dimensional vector spaces: a linear combination is still a *finite* sum, even when the basis is infinite.
As an example, take the space $V$ of all sequences $(a_n)_{n\in\mathbb N}$ of real numbers such that $a_n=0$ for all sufficiently large $n$. A basis of $V$ is the set $\{e_1,e_2,e_3,\ldots\}$, where $e_k$ is the sequence whose $k$th term is $1$ and whose other terms are all $0$. This set is a basis of $V$ because if $(a_n)_{n\in\mathbb N}\in V$, then $a_n=0$ for $n>N$ for some $N\in\mathbb N$, and$$(a_n)_{n\in\mathbb N}=a_1e_1+a_2e_2+\cdots+a_Ne_N.$$(Linear independence is immediate: such a finite combination is the zero sequence only if every coefficient is $0$.) So, even though $\dim V=\infty$, every element of $V$ is a linear combination of finitely many elements of the set $\{e_1,e_2,e_3,\ldots\}$.
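A quick sketch of this in code may help; the representation below (a plain Python list of the leading terms, with everything afterwards understood to be $0$) and the helper names are my own, not standard:

```python
# Minimal sketch: an element of V is stored as the list of its first N terms;
# all terms beyond the list are 0.

def e(k, length):
    """The basis sequence e_k (1 in position k, 0 elsewhere), truncated to `length` terms."""
    return [1.0 if i == k else 0.0 for i in range(1, length + 1)]

def linear_combination(coeffs):
    """Rebuild (a_1, ..., a_N, 0, 0, ...) as the finite sum a_1 e_1 + ... + a_N e_N."""
    n = len(coeffs)
    result = [0.0] * n
    for k, a in enumerate(coeffs, start=1):
        result = [r + a * b for r, b in zip(result, e(k, n))]
    return result

v = [2.0, 0.0, -3.0, 5.0]          # the sequence (2, 0, -3, 5, 0, 0, ...)
assert linear_combination(v) == v  # v = 2e_1 - 3e_3 + 5e_4: a finite sum
```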
All of those definitions still make sense for infinite-dimensional spaces (spaces with an infinite basis). But they are not the most useful ones in the infinite-dimensional spaces that mathematicians and physicists care about most.
Those spaces usually have enough structure to make sense of infinite sums. Here's one classic example.
Let $H$ be the set of all sequences $(a_n)$ of real (or complex) numbers such that the sum $\sum_n |a_n|^2$ converges. $H$ is closed under scalar multiplication, which happens term by term, and it is closed under vector addition because $|a_n+b_n|^2\le 2|a_n|^2+2|b_n|^2$. You can then define the distance between any two vectors by analogy with the Euclidean distance:
$$ |v-w| = \sqrt{\sum_{n = 1}^\infty |v_n - w_n|^2} $$
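To see the definition in action, here is a small numerical sketch; the particular sequences $v_n = 1/n$ and $w_n = 1/n^2$ are my own choice of square-summable examples. Truncating the series at $N$ terms approximates the true distance, and the approximation stabilizes as $N$ grows because the tails are summable:

```python
import math

def l2_distance(v, w, N=100_000):
    # Truncated version of |v - w| = sqrt(sum (v_n - w_n)^2).
    return math.sqrt(sum((v(n) - w(n)) ** 2 for n in range(1, N + 1)))

v = lambda n: 1 / n       # square-summable: sum 1/n^2 converges
w = lambda n: 1 / n**2    # square-summable: sum 1/n^4 converges

print(l2_distance(v, w))  # stabilizes near 0.5685 as N grows
```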
With that definition you can make sense of some infinite sums of vectors, and use those infinite sums to define independence, span, and basis. The set of vectors $\{e_i\}$, where $e_i$ has a $1$ in place $i$ and is $0$ elsewhere, is a basis in this sense: every $v=(v_n)\in H$ is the limit of the partial sums of the series $\sum_n v_n e_n$.
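One way to make that claim concrete: if the $\{e_i\}$ are to span $H$ via infinite sums, the distance between $v$ and its $N$th partial sum $\sum_{n\le N} v_n e_n$ must shrink to $0$. The sketch below checks this for the (again, arbitrarily chosen) vector $v_n = 1/n$, whose tail norm is roughly $1/\sqrt N$:

```python
import math

def tail_norm(N, cutoff=1_000_000):
    # |v - (v_1 e_1 + ... + v_N e_N)| = sqrt(sum_{n > N} v_n^2) for v_n = 1/n,
    # approximated by cutting the infinite tail off at `cutoff`.
    return math.sqrt(sum(1 / n**2 for n in range(N + 1, cutoff)))

for N in (10, 100, 1000):
    print(N, tail_norm(N))  # ~0.309, ~0.0997, ~0.0316: shrinking toward 0
```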
If you replace the sums in that example by integrals, you can build even more interesting and useful vector spaces. The study of Fourier series can be thought of as understanding why the set of functions $\{\sin nx, \cos nx\}$ (together with the constant function) forms a basis, in this infinite-sum sense, for the space of (nice enough) periodic functions.
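As a toy illustration of that last point (the square wave and all the numerical choices below are mine, purely for demonstration), here is a computation that expands a discontinuous periodic function in the $\{\sin nx, \cos nx\}$ basis; the partial sums converge to it in the mean-square sense, the integral analogue of the distance above:

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 20001)
f = np.sign(np.sin(x))            # square wave; odd, so the cosine terms vanish

def sine_coefficient(n):
    # b_n = (1/pi) * integral of f(x) sin(nx) over [-pi, pi], via the trapezoid rule
    return np.trapz(f * np.sin(n * x), x) / np.pi

partial = sum(sine_coefficient(n) * np.sin(n * x) for n in range(1, 26))

print(np.max(np.abs(partial - f)))   # stays large near the jumps (Gibbs phenomenon)
print(np.mean((partial - f) ** 2))   # but the mean-square error is small (~0.015)
```

The two printed numbers show exactly why the infinite-sum notion of basis is the right one here: the partial sums never converge pointwise at the discontinuities, yet the mean-square distance to $f$ goes to $0$ as more terms are added.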