Is the Invariant Subspace Problem interesting?
The invariant subspace problem was solved in the negative for Banach spaces by Per Enflo, and counterexamples on many classical spaces were constructed by Charles Read. The problem remains open for reflexive Banach spaces. On the other hand, S. Argyros and R. Haydon recently constructed a Banach space $X$ such that $X^*$ is isomorphic to $\ell_1$ and every bounded linear operator on $X$ is a scalar multiple of the identity plus a compact operator; hence the invariant subspace problem has a positive solution on $X$.
The invariant subspace problem has spurred quite a lot of interesting mathematics. Usually when a positive result is proved, much more comes out of the proof, such as a functional calculus for the operators. See, e.g., recent papers by my colleague C. Pearcy and his collaborators.
In cases where the ISP has a positive solution for a class of operators, there may be a structure theory for those operators. There is, for example, J. Ringrose's classical structure theorem for compact operators on a Banach space. This is a beautiful and useful theorem which, BTW, I am currently using with T. Figiel and A. Szankowski to relate the Lidskii trace formula to the theorem of J. Erdos in Banach spaces.
Why is the twin prime conjecture interesting?
Most of the structure theorems for complex matrices can be expressed solely in terms of invariant subspaces. For example, the statement that every $n \times n$ complex matrix is unitarily equivalent to an upper triangular matrix (from which the spectral theorem for normal matrices easily follows) is equivalent to the existence of a chain of invariant subspaces having one of each possible dimension from $0$ to $n$. A matrix is similar to a single Jordan block if and only if its lattice of invariant subspaces is a chain; this allows the Jordan form to be expressed in terms of invariant subspaces.

If you pass to infinite-dimensional Hilbert spaces, the sub-Hilbert spaces are the closed linear subspaces, and the natural analogue of a matrix is a bounded linear operator. If you want to extend the finite-dimensional structure theory to the infinite-dimensional situation, the first natural question to ask is whether every operator has a nontrivial (closed, linear) invariant subspace. This problem was popularized by Paul Halmos in the 1970s and, while the solution itself may not be important, attempts at solutions have generated a vast amount of important mathematics. For example, the concept of quasidiagonality for C*-algebras, which is very important to that subject, was defined by Halmos as a reducing version of quasitriangularity (a property distilled from several theorems about the invariant subspace problem).
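To make the finite-dimensional claim concrete, here is a minimal numerical sketch using the Schur decomposition (the random matrix, the use of SciPy, and the tolerance are my illustrative choices, not part of the answer): the spans of the first $i$ Schur vectors form exactly a chain of invariant subspaces, one of each dimension from $0$ to $n$.

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(0)
n = 5
# An arbitrary complex n x n matrix, just for illustration.
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Schur decomposition: A = Z T Z*, with Z unitary and T upper triangular.
T, Z = schur(A, output="complex")
assert np.allclose(A, Z @ T @ Z.conj().T)

# E_i = span of the first i columns of Z has dimension i, and each E_i
# is invariant under A: the component of A E_i orthogonal to E_i vanishes.
for i in range(1, n + 1):
    Ei = Z[:, :i]                  # orthonormal basis of E_i
    Pi = Ei @ Ei.conj().T          # orthogonal projection onto E_i
    residual = (np.eye(n) - Pi) @ A @ Ei
    assert np.linalg.norm(residual) < 1e-10, f"E_{i} not invariant"
print("Verified the invariant chain E_1 ⊂ E_2 ⊂ ... ⊂ E_n.")
```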
If the invariant subspace problem has a positive answer then every bounded operator $A \in B(H)$ can be put in upper triangular form, in the sense that there is a maximal chain $(E_\lambda)$ of closed subspaces of $H$ such that every $E_\lambda$ is invariant for $A$.
In $\mathbb{C}^n$, a maximal chain of subspaces looks like $$\{0\} = E_0 \subset E_1 \subset \cdots \subset E_n = \mathbb{C}^n,$$ where $E_i$ has dimension $i$, and any operator for which all the $E_i$ are invariant is literally upper triangular with respect to an orthonormal basis whose first $i$ elements span $E_i$, for each $i$. The infinite-dimensional version is a natural generalization and seems to say rather a lot about the structure of $A$.
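Here is a minimal numpy sketch of this finite-dimensional correspondence (the random matrices and the tolerance are arbitrary illustrative choices): an operator leaves every $E_i = \operatorname{span}(e_1, \ldots, e_i)$ invariant exactly when its matrix is upper triangular, and conjugating by a unitary transports this to any other maximal chain.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4

def preserves_standard_chain(M, tol=1e-10):
    """True iff M E_i ⊆ E_i for all i, with E_i = span(e_1, ..., e_i).
    M e_i lies in E_i for every i exactly when M[j, i] = 0 for j > i,
    i.e. when M is upper triangular."""
    return np.linalg.norm(np.tril(M, k=-1)) < tol

# An upper triangular matrix preserves the standard chain...
A = np.triu(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
assert preserves_standard_chain(A)

# ...and a matrix with a nonzero below-diagonal entry does not.
B = A.copy()
B[2, 0] = 1.0
assert not preserves_standard_chain(B)

# For any unitary Q, the subspaces F_i spanned by the first i columns of Q
# form a maximal chain, and Q A Q* leaves each F_i invariant: it becomes
# upper triangular once rewritten in the orthonormal basis given by Q.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
C = Q @ A @ Q.conj().T
assert preserves_standard_chain(Q.conj().T @ C @ Q)
```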
No doubt this result would be considered "known" by experts, but I could not find it explicitly stated anywhere, so I included it near the end of this paper. Also, Matt Kennedy answered this question with a reference to a result that easily implies it.