Prove that if the random variables $X_i$ are independent, then the random variables $f_i(X_i)$ are independent.
For $i\in I$ let $\sigma\left(X_{i}\right)\subseteq\mathscr{F}$ denote the $\sigma$-algebra generated by random variable $X_{i}:\Omega\to\mathbb{R}$.
In fact $\sigma\left(X_{i}\right)=X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\left\{ X_{i}^{-1}\left(B\right)\mid B\in\mathscr{B}\left(\mathbb{R}\right)\right\} $.
The collection $(X_i)_{i\in I}$ of random variables is independent iff:
For every finite $J\subseteq I$ and every collection $\left\{ A_{i}\mid i\in J\right\} $ satisfying $\forall i\in J\left[A_{i}\in\sigma\left(X_{i}\right)\right]$ we have:
$$P\left(\bigcap_{i\in J}A_{i}\right)=\prod_{i\in J}P\left(A_{i}\right)\tag{1}$$
Now suppose $f_{i}:\mathbb{R}\to Y_{i}$ for $i\in I$, where $\left(Y_{i},\mathcal{A}_{i}\right)$ denotes a measurable space and every $f_{i}$ is Borel-measurable in the sense that $f_{i}^{-1}\left(\mathcal{A}_{i}\right)\subseteq\mathscr{B}\left(\mathbb{R}\right)$. To check independence of the $f_i(X_i)$ we must look at the $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
But evidently: $$\sigma\left(f_{i}\left(X_{i}\right)\right)=\left(f_{i}\circ X_{i}\right)^{-1}\left(\mathcal{A}_{i}\right)=X_{i}^{-1}\left(f_{i}^{-1}\left(\mathcal{A}_{i}\right)\right)\subseteq X_{i}^{-1}\left(\mathscr{B}\left(\mathbb{R}\right)\right)=\sigma\left(X_{i}\right)$$ So if $\left(1\right)$ is satisfied for the $\sigma\left(X_{i}\right)$, then automatically it is satisfied for the smaller $\sigma$-algebras $\sigma\left(f_{i}\left(X_{i}\right)\right)$.
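This is of course a proof, not something that needs numerical verification, but the conclusion can be illustrated with a quick Monte Carlo sketch. Here $X$, $Y$, the measurable functions $f(x)=x^2$, $g(y)=\sin y$, and the events $A$, $B$ are all arbitrary choices for the demonstration, not part of the argument above:

```python
# Illustration (not a proof): if X and Y are independent, events taken from
# sigma(f(X)) and sigma(g(Y)) should satisfy the product rule (1),
# up to Monte Carlo error.
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
X = rng.normal(size=n)   # X and Y independent by construction
Y = rng.normal(size=n)

fX = X**2                # f(x) = x^2 is Borel-measurable
gY = np.sin(Y)           # g(y) = sin(y) is Borel-measurable

# A = {f(X) <= 1} lies in sigma(f(X)); B = {g(Y) <= 0} lies in sigma(g(Y))
A = fX <= 1.0
B = gY <= 0.0

p_joint = np.mean(A & B)
p_prod = np.mean(A) * np.mean(B)
print(p_joint, p_prod)   # the two estimates agree up to sampling noise
```

Since $A=\{X^2\le 1\}=X^{-1}([-1,1])$, this event indeed also lies in $\sigma(X)$, exactly as the inclusion above predicts.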
2)
The concept of independence of random variables has consequences for PDFs and the calculation of moments, but its definition stands completely apart from them. From, e.g., a factorization of a joint PDF one can deduce independence, but facts like that must not be promoted to the status of a "definition of independence". In such situations we can at most say that we have a sufficient (not necessary) condition for independence. If we ask "what is needed for the $f_i(X_i)$ to be independent?" then we must focus on the definition of independence (not on sufficient conditions). Doing so, we find that measurability of the $f_i$ is enough whenever the $X_i$ are already independent.
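As a standard illustration of such a sufficient condition (my example, not part of the argument above): if $X$ and $Y$ have the joint density
$$f_{X,Y}(x,y)=e^{-x-y}\,\mathbf{1}_{\{x>0,\,y>0\}}=\underbrace{e^{-x}\mathbf{1}_{\{x>0\}}}_{f_X(x)}\cdot\underbrace{e^{-y}\mathbf{1}_{\{y>0\}}}_{f_Y(y)},$$
then the factorization shows that $X$ and $Y$ are independent (each being $\mathrm{Exp}(1)$). But the definition via $(1)$ applies equally to random variables that have no density at all, which is why the factorization criterion cannot serve as the definition.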
BCLC edit (let drhab edit this part further): There is no 'measurable' in elementary probability, so we just say 'suitable' or 'well-behaved': we hope that whatever functions students of elementary probability encounter are suitable. Some textbooks may use conditions weaker than measurability as that book's definition of independence.
Edit: Functions that are not measurable (or not suitable, if you like) are very rare in ordinary contexts. The axiom of choice is needed to prove that such functions exist. In that sense you could say that constructible functions (those requiring no arbitrary choice function) are suitable.