On Hamkins' answer to a problem by Michael Hardy
UPDATE:
Jörg Brendle, Joel David Hamkins, and I have now written a paper entitled "The subseries number" (link) in which we analyze some new cardinal invariants of the continuum related to this question.
The main topic of the paper (and the source of its title) is the cardinal originally defined by Joel in his answer here, the subseries number $\newcommand\ss{\bf ß}\ss$. In hindsight, it seems that $\ss$ is somehow more fundamental than the cardinal $\mathfrak{ii}$ Rahman asks about in his post. In fact, it turns out that Rahman's cardinal $\mathfrak{ii}$ is just the minimum of Joel's cardinal and the rearrangement number:
Theorem: $\mathfrak{ii} = \min \{\ss,\mathfrak{rr}\}$.
The proof of this equality can be found in the last section of our paper. (Interestingly, it breaks into two cases, according to whether $\mathfrak{ii} < \mathfrak{b}$ or not.) It seems to me that this theorem, paired with the thorough analysis of $\ss$ in our paper and of $\mathfrak{rr}$ in the earlier paper on the rearrangement number, is the strongest answer to Rahman's question that one could hope for.
Let me mention one more result from our new paper that is particularly relevant to Rahman's original post:
Theorem: In the Laver model, $\ss < \mathfrak{rr}$.
Combining this with the other theorem mentioned above, we get the consistency of $\mathfrak{ii} < \mathfrak{rr}$, answering part of the original question.
ORIGINAL POST:
Great question! My answer turned out to be pretty long, but I'll summarize the key ideas here at the top.
We can show that $\mathfrak{ii}$ is always uncountable, and we can find a natural condition under which $\mathfrak{ii} = \mathfrak{rr}$. Specifically, we can show
1. $\min\{\mathfrak{b},\mathfrak{s}\} \leq \mathfrak{ii}$.
2. $\mathfrak{rr} = \max\{\mathfrak{b},\mathfrak{ii}\}$.
Taken together, these assertions imply that $\mathfrak{ii} \neq \mathfrak{rr}$ can be true only in models where $\mathfrak{s} < \mathfrak{b}$. In other words, if you want to prove $\mathfrak{ii} < \mathfrak{rr}$ relatively consistent, then you should look at a model where $\mathfrak{s} < \mathfrak{b}$ (this happens in the Hechler model and the Laver model, for example), and then try to prove that $\mathfrak{ii} < \mathfrak{b} = \mathfrak{rr}$ in that model.
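Spelled out, the derivation behind that last claim is short: if $\mathfrak{s} \geq \mathfrak{b}$, then assertions 1 and 2 (together with the known bound $\mathfrak{ii} \leq \mathfrak{rr}$) give
$$\mathfrak{b} = \min\{\mathfrak{b},\mathfrak{s}\} \leq \mathfrak{ii} \leq \mathfrak{rr} = \max\{\mathfrak{b},\mathfrak{ii}\} = \mathfrak{ii},$$
so that $\mathfrak{ii} = \mathfrak{rr}$. Hence $\mathfrak{ii} \neq \mathfrak{rr}$ is possible only when $\mathfrak{s} < \mathfrak{b}$.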
Proof that $\min\{\mathfrak{b},\mathfrak{s}\} \leq \mathfrak{ii}$:
[Recall that $\mathfrak{s}$, the "splitting number", is the smallest cardinality of a splitting family, where $\mathcal{S} \subseteq [\omega]^\omega$ is a splitting family iff, for any infinite $A \subseteq \omega$, there is some $B \in \mathcal{S}$ with $A \cap B$ and $A \setminus B$ both infinite.]
Suppose $\kappa < \min\{\mathfrak{b},\mathfrak{s}\}$, and let $F$ be a family of injective functions $\omega \rightarrow \omega$ with $|F| \leq \kappa$. We want to cook up a conditionally convergent series $\sum_n a_n$ such that, for every $f \in F$, $\sum_n a_{f(n)}$ is still convergent.
For each $f \in F$, consider $f(\omega)$, the image of $f$. Since $|F| \leq \kappa < \mathfrak{s}$, the family $\{f(\omega) : f \in F\}$ is not a splitting family, so there is an infinite $A \subseteq \omega$ that is not "split" by any of the $f(\omega)$: that is, for all $f \in F$, either $A \cap f(\omega)$ is finite, or $A \setminus f(\omega)$ is finite.
Let $F_0$ be the set of all $f \in F$ such that $A \cap f(\omega)$ is finite, and let $F_1$ be the set of all $f \in F$ such that $A \setminus f(\omega)$ is finite. We will put $a_n = 0$ for all $n \notin A$. This allows us to safely ignore the functions in $F_0$: $\sum_n a_{f(n)}$ will always converge for $f \in F_0$ because only finitely many of the terms of this sum will be nonzero. Our goal is to figure out a way to write a conditionally convergent series, with nonzero terms only on $A$, such that the series remains convergent after applying functions in $F_1$.
I claim that we already know how to do this, more or less, because it's just another variation of Joel's padding with zeros argument. First, I'll point out what we already know from my answer to Mike Hardy's question:
If $P$ is a family of permutations of $\omega$ and $|P| < \mathfrak{b}$, then there is an infinite $B \subseteq \omega$ such that any $p \in P$ only rearranges finitely many terms of $B$, and otherwise $p$ preserves the order of the terms in $B$.
But if you look at my proof there, you'll see that we never used the fact that the members of $P$ are permutations. We only used the fact that they're injections (indeed, "almost injections", i.e. functions for which the preimage of each point is finite, would be enough). Therefore, by the same proof, there is an infinite $B \subseteq A$ such that any $f \in F_1$ only rearranges finitely many terms of $B$, and otherwise $f$ preserves the order of the terms in $B$.
To obtain our series $\sum_n a_n$, we simply write the alternating harmonic series on the members of $B$. That is, let $a_n = 0$ for every $n \notin B$, and otherwise, if $n$ is the $k^{th}$ element of $B$ (counting from $1$), let $a_n = \frac{(-1)^k}{k}$. If $f \in F_0$, then $\sum_n a_{f(n)}$ converges because it has only finitely many nonzero terms. If $f \in F_1$, then $\sum_n a_{f(n)}$ converges because it contains all but finitely many of the nonzero terms of our original series, it lists them in the same order except possibly for finitely many, and all of its other terms are zero. QED
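If it helps to see this construction concretely, here is a minimal numerical sketch. The specific set $B$ (the multiples of $4$) and injection $f(n) = 2n$ are toy choices of mine, made only for illustration: the image of $f$ contains $B$ and $f$ lists $B$ in increasing order, roughly as the functions in $F_1$ do (modulo finitely many terms) once the lemma has been applied.

```python
# A minimal numerical sketch of the "padding with zeros" construction above.
# The set B (multiples of 4) and the injection f(n) = 2n are illustrative toy
# choices: f's image (the even numbers) contains B, and f lists B in
# increasing order.

import math

B = [4 * k for k in range(1, 100_000)]            # a long initial segment of B
position_in_B = {n: k for k, n in enumerate(B, start=1)}

def a(n):
    """a_n = 0 for n outside B; a_n = (-1)^k / k if n is the k-th element of B."""
    k = position_in_B.get(n)
    return 0.0 if k is None else (-1) ** k / k

f = lambda n: 2 * n                               # an injection omega -> omega

original   = sum(a(n) for n in range(400_000))    # partial sum of sum_n a_n
rearranged = sum(a(f(n)) for n in range(200_000)) # partial sum of sum_n a_{f(n)}

print(original, rearranged, -math.log(2))         # all three are close to -ln 2
```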
Proof that $\mathfrak{rr} = \max\{\mathfrak{b},\mathfrak{ii}\}$:
The bounds $\mathfrak{b} \leq \mathfrak{rr}$ and $\mathfrak{ii} \leq \mathfrak{rr}$ are already known, and mentioned in the question. We need to show that $\mathfrak{rr} \leq \max\{\mathfrak{b},\mathfrak{ii}\}$. To accomplish this, suppose $F$ is a family of injections $\omega \rightarrow \omega$ such that $|F| = \mathfrak{ii}$, and for any conditionally convergent series $\sum_n a_n$, $\sum_n a_{f(n)}$ diverges for some $f \in F$. We will find a family $P$ of permutations of $\omega$ such that $|P| = |F| \cdot \mathfrak{b}$, and for any conditionally convergent series $\sum_n a_n$, $\sum_n a_{p(n)}$ diverges for some $p \in P$.
Let $B$ be an unbounded family of functions $\omega \rightarrow \omega$ with $|B| = \mathfrak{b}$. For simplicity, assume that every member of $B$ is strictly increasing, and that every member of $F$ and of $B$ has co-infinite image. (The assumptions about $B$ are without loss of generality, since we can modify $B$ to make it fit this requirement. The assumption on $F$ is less obviously benign, but it is easy to modify the definition of $p_f^g$ below to work just as well when $f$ has co-finite image. We are making the assumption only to avoid dealing with cases in what is already a somewhat tedious definition.) For each $f \in F$ and $g \in B$, define $p_f^g$ so that
If $n$ is the $k^{th}$ element of the image of $g$, then $p_f^g(n)$ is the $k^{th}$ element of $\omega \setminus f(\omega)$.
If $n$ is not in the image of $g$, then $$p_f^g(n) = f(n - |\{m < n : m \in \mathrm{Image}(g)\}|).$$
This definition is a bit involved, but the idea behind it is simple. We think of $p_f^g$ as a way of listing the terms of our series (in a different order). How do we list them? We begin by writing out all the terms in the image of $f$, in the order specified by $f$. The problem is that this does not give us a permutation of the series: lots of things have been left out, namely every term indexed by a member of $\omega \setminus f(\omega)$. We want to add these to our list, and our plan is to let them appear on the list "slowly": we want these extra terms to appear so sparsely in our rearranged series that they won't affect its convergence properties. Exactly how sparsely they appear is determined by $g$: we stick the extra terms in at the positions in the image of $g$, so the first extra term goes in after the first $g(0)$ terms of our preliminary list, the next one after the first $g(1)$ terms, the next after the first $g(2)$ terms, and so on.
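Here is a small computational sketch of this definition, purely for illustration. The particular $f$ and $g$ are toy choices of mine satisfying the standing assumptions ($f$ is an injection with co-infinite image, $g$ is strictly increasing with co-infinite image); the code just evaluates $p_f^g$ on an initial segment of $\omega$.

```python
# A toy computation of p_f^g on an initial segment of omega, to make the
# two-clause definition above concrete. The particular f and g are
# illustrative choices only.

LIMIT = 200                                    # enough of f and g for n < 30

f = lambda n: 2 * n                            # image of f = the even numbers
g = lambda n: 3 * n + 2                        # image of g = {2, 5, 8, ...}

f_image = {f(n) for n in range(LIMIT)}
g_image = {g(n) for n in range(LIMIT)}
extra = [m for m in range(2 * LIMIT) if m not in f_image]   # omega \ f(omega)

def p(n):
    if n in g_image:
        # Clause 1: n is the k-th element of the image of g, so p(n) is the
        # k-th element of omega \ f(omega).
        k = sum(1 for m in g_image if m < n)
        return extra[k]
    # Clause 2: n is not in the image of g, so p(n) = f(j), where j is the
    # number of positions below n lying outside the image of g.
    j = sum(1 for m in range(n) if m not in g_image)
    return f(j)

values = [p(n) for n in range(30)]
print(values)                    # f(0), f(1), ... in order, with the "extra"
                                 # indices 1, 3, 5, ... inserted at the
                                 # positions 2, 5, 8, ... (the image of g)
print(len(set(values)) == 30)    # p is injective on this initial segment
```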
Let $P = \{p_f^g : f \in F, g \in B\}$. Clearly $|P| \leq |F| \cdot |B| = \max\{\mathfrak{ii},\mathfrak{b}\}$. We will show that for any conditionally convergent series $\sum_n a_n$, $\sum_n a_{p(n)}$ diverges for some $p \in P$.
Suppose $\sum_n a_n$ is a conditionally convergent series, and fix $f \in F$ such that $\sum_n a_{f(n)}$ diverges. The extended real numbers $$u = \limsup_{n \in \omega} \sum_{m \leq n}a_{f(m)}$$ $$\ell = \liminf_{n \in \omega} \sum_{m \leq n}a_{f(m)}$$ are different (this is what we mean by divergence). Let $d = \frac{u-\ell}{2}$ if $u$ and $\ell$ are both real, or $d = 1$ if either $u = \infty$ or $\ell = -\infty$. Define a function $osc: \omega \rightarrow \omega$ so that $osc(n)$ tells us when the partial sums have completed their $n^{th}$ oscillation of size $d$. Specifically, set $osc(0) = 0$, and define $osc(n)$ to be the least number $N$ such that $$\limsup_{osc(n-1) < m \leq N}\sum_{i \leq m}a_{f(i)} - \liminf_{osc(n-1) < m \leq N}\sum_{i \leq m}a_{f(i)} \geq d.$$
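To make the function $osc$ concrete, here is a small numerical sketch (not part of the proof). It uses one specific illustrative choice: the alternating harmonic series, rearranged so that its partial sums oscillate between roughly $0$ and $1$ (so that $u = 1$, $\ell = 0$, and $d = 1/2$), and it computes the first few values of $osc$.

```python
# Illustrative only: compute the markers osc(1), osc(2), ... for one specific
# divergent rearrangement of the alternating harmonic series. The series, the
# rearrangement, and the value d = 1/2 are choices made for this example; they
# are not the general objects in the proof.

def term(n):
    """n-th term of the alternating harmonic series 1 - 1/2 + 1/3 - ..."""
    return (-1) ** n / (n + 1)

def oscillating_rearrangement(num_terms):
    """Indices f(0), f(1), ...: take positive terms until the partial sum
    exceeds 1, then negative terms until it drops below 0, and repeat.
    The resulting partial sums oscillate between roughly 0 and 1."""
    indices, pos, neg, s, taking_pos = [], 0, 1, 0.0, True
    for _ in range(num_terms):
        n = pos if taking_pos else neg
        if taking_pos:
            pos += 2
        else:
            neg += 2
        indices.append(n)
        s += term(n)
        if taking_pos and s > 1.0:
            taking_pos = False
        elif not taking_pos and s < 0.0:
            taking_pos = True
    return indices

def osc_markers(partial_sums, d, how_many):
    """osc(n) is the least N such that max - min >= d over the partial sums
    in the window (osc(n-1), N], with the convention osc(0) = 0."""
    markers, prev = [], 0
    for _ in range(how_many):
        m = prev + 1
        lo = hi = partial_sums[m]
        while hi - lo < d:
            m += 1
            lo, hi = min(lo, partial_sums[m]), max(hi, partial_sums[m])
        markers.append(m)
        prev = m
    return markers

f = oscillating_rearrangement(100_000)
partial_sums, s = [], 0.0
for n in f:
    s += term(n)
    partial_sums.append(s)

# Each successive oscillation of size d takes longer to complete, but it always
# does complete, which is exactly why the rearranged series diverges.
print(osc_markers(partial_sums, d=0.5, how_many=8))
```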
Because $B$ is an unbounded family, there is some $g \in B$ such that $g(n) > osc(2n)$ infinitely often. This implies that there are infinitely many $n$ such that there is no $m$ with $osc(n) < g(m) \leq osc(n+1)$: if $g(n) > osc(2n)$, then, since $g$ is strictly increasing, at most $n$ values of $g$ lie below $osc(2n)$, so at least $n$ of the $2n$ intervals $(osc(i), osc(i+1)]$ with $i < 2n$ contain no value of $g$.
I claim that $\sum_n a_{p_f^g(n)}$ diverges. Fix $N$. There is some $n \geq N$ large enough that $|a_{p_f^g(m)}|$ is much smaller than $d$ for all $m \geq n$, and such that there is no $m$ with $osc(n) < g(m) \leq osc(n+1)$. This means that, between $osc(n)$ and $osc(n+1)$, the terms $a_{p_f^g(m)}$ look exactly like the terms $a_{f(m)}$, up to a fixed shift of the index $m$. By our definition of $osc$, the sum of these terms accomplishes an oscillation of size $d$ or more in that interval. Thus, for arbitrarily large values of $N$, the partial sums of $\sum_n a_{p_f^g(n)}$ accomplish an oscillation of size $d$ or more after the $N^{th}$ term. Thus $\sum_n a_{p_f^g(n)}$ diverges. QED
HAMKINS' ANSWER:
Let me show something about the subseries number, which I suggested in the comments and which I find very natural to consider, in light of the following:
Fact. A series $\sum_n a_n$ is absolutely convergent just in case the subseries $\sum_{n\in A} a_n$ converges for every $A\subset\mathbb{N}$.
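For instance, the alternating harmonic series $\sum_n\frac{(-1)^{n+1}}{n}$ converges, but its subseries over the even indices,
$$\sum_{n\ \text{even}} \frac{(-1)^{n+1}}{n} = -\frac12 - \frac14 - \frac16 - \cdots,$$
diverges; this is how the Fact detects that the original series is not absolutely convergent.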
So let us define the subseries number, denoted $\newcommand\ss{\bf ß}\ss$ (German sharp s), to be the size of the smallest family $\mathcal{A}$ of sets of natural numbers, such that if a series $\sum_n a_n$ has the property that $\sum_{n\in A}a_n$ converges for all $A\in\mathcal{A}$, then $\sum_n a_n$ is absolutely convergent.
Clearly, $\mathfrak{ii}\leq\ss$, since summing over a set $A$ is the same as summing along the increasing enumeration of $A$, and so considering subseries amounts to insisting on increasing injections in the context of the OP.
The splitting number $\newcommand\s{\frak{s}}\s$, in contrast, is the smallest size of a splitting family: a family $\mathcal{S}$ of infinite sets of natural numbers such that for every infinite $B\subset\mathbb{N}$ there is some $A\in\mathcal{S}$ for which $B\cap A$ and $B-A$ are both infinite; in this case we say that $A$ splits $B$.
Meanwhile, ${\bf\text{non}}(\mathcal{M})$ is the size of the smallest non-meager set; it will be convenient here to work in the space of all subsets of $\mathbb{N}$.
Theorem. $\s\leq\ss\leq{\bf\text{non}}(\mathcal{M})$.
Proof. For the first inequality, I shall show that every subseries family $\mathcal{A}$, witnessing the defining property of $\ss$, is also a splitting family. To see this (just as Will had argued), consider any infinite set $B\subset\mathbb{N}$. Let $\sum_n a_n$ be any conditionally convergent series whose nonzero terms $a_n$ arise solely for $n\in B$. For example, $\sum_n a_n$ could be the alternating harmonic series, padded with $0$'s so that the nonzero terms arise only on indices in $B$. Thus, $\sum_n a_n$ converges, but only conditionally. So there must be some $A\in\mathcal{A}$ such that $\sum_{n\in A} a_n$ diverges. Such a set $A$ must have $B\cap A$ infinite, in order to have infinitely many nonzero terms, and it must have $B-A$ infinite, since otherwise $\sum_{n\in A}a_n$ would differ from the convergent series $\sum_n a_n$ by only finitely many terms and so would converge. So $A$ splits $B$, and therefore $\mathcal{A}$ is a splitting family. So $\s\leq|\mathcal{A}|$, as desired.
For the second inequality, I shall show that every nonmeager family of subsets of $\mathbb{N}$ has the subseries property. Suppose that $\mathcal{A}$ is such a nonmeager family, and we have a conditionally convergent series $\sum_n a_n$. Since the positive terms of this series sum to $\infty$ and the negative terms sum to $-\infty$, any finite decision about which indices to include in $A$ can be extended, using only later indices, so as to cause an additional huge oscillation in the partial sums of $\sum_{n\in A}a_n$; from this, it follows that for every conditionally convergent series, the set of $A\subset\mathbb{N}$ for which $\sum_{n\in A}a_n$ does not converge is comeager. Since $\mathcal{A}$ is non-meager, it must intersect this comeager set, and so there must be some $A\in\mathcal{A}$ for which $\sum_{n\in A}a_n$ does not converge. So $\mathcal{A}$ has the subseries property, as desired. QED