Numerical integration using interval arithmetic, nowadays
I develop Arb, an arbitrary-precision interval library with special functions support. There is some code included for integration of complex analytic functions using Taylor series (documentation), which in principle should scale nicely to high precision. An example program is available here.
The last test case in the example program is $\tfrac{1}{\pi} \int_0^{\pi} \cos(10 t - (20+10i) \sin(t)) dt$ ($= J_{10}(20+10i)$). You can change $\cos$ to $\operatorname{erf}$ (_acb_poly_cos_series to _acb_hypgeom_erf_series) and check that it works. In fact, the new integrand peaks with a magnitude around $10^{38}$ while the result has magnitude around $10^{15}$, so this is a nice test case for handling severe cancellation. With a target accuracy of 20 digits and a working precision of 60 digits, the program gives
(-1.55...e+24 + 5.83...e+23j) +/- (1.27e+103, 1.27e+103j)
where the resulting midpoint is inaccurate but correctly enclosed, and with a target accuracy of 80 digits and a working precision of 240 digits, it gives
(868463919173069.624... + 85377491518329.173...j) +/- (1.9e-65, 2.37e-66j).
This takes 1.5 seconds.
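For orientation, here is a rough sketch of what driving this code looks like. This is not the example program itself: I've swapped in a trivial integrand ($\operatorname{erf}(t)$ on $[0,1]$), the function name f_erf and all parameter values (precision, accuracy goal, radii) are arbitrary choices of mine, and you should check the acb_calc documentation for the exact meaning of the radius and accuracy arguments. The callback mirrors the structure used in the example program: it fills out with a truncated Taylor expansion of the integrand at the point inp.

    #include "acb_calc.h"
    #include "acb_hypgeom.h"

    /* Integrand f(t) = erf(t), as a truncated Taylor series callback:
       out receives `order` Taylor coefficients of f at the point inp. */
    int
    f_erf(acb_ptr out, const acb_t inp, void * param, slong order, slong prec)
    {
        acb_ptr t = _acb_vec_init(order);

        /* series for the identity map: inp + x + O(x^2) */
        acb_set(t, inp);
        if (order > 1)
            acb_one(t + 1);

        /* compose with erf */
        _acb_hypgeom_erf_series(out, t, FLINT_MIN(2, order), order, prec);

        _acb_vec_clear(t, order);
        return 0;
    }

    int main(void)
    {
        acb_t res, a, b;
        arf_t inr, outr;
        slong prec = 200;    /* working precision in bits (~60 digits) */
        slong goal = 64;     /* accuracy goal in bits */

        acb_init(res); acb_init(a); acb_init(b);
        arf_init(inr); arf_init(outr);

        acb_zero(a);
        acb_one(b);          /* integrate erf(t) for t from 0 to 1 */

        /* radii isolating the path from singularities; erf is entire,
           so these just need to satisfy inner_radius < outer_radius */
        arf_set_d(inr, 0.125);
        arf_set_d(outr, 0.25);

        acb_calc_integrate_taylor(res, f_erf, NULL, a, b, inr, outr, goal, prec);

        acb_printd(res, 20);
        flint_printf("\n");

        acb_clear(res); acb_clear(a); acb_clear(b);
        arf_clear(inr); arf_clear(outr);
        flint_cleanup();
        return 0;
    }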
I would not necessarily recommend using this code for anything serious, for a few reasons. First, the interface is very low level and inconvenient to work with. Some kind of high-level wrapper would make sense. (Note that Arb is a Sage package, but no Sage wrapper for this particular code has been written yet.)
Further, the code needs to bound the integrand on a surrounding contour to get error bounds for the series truncations (via the Cauchy integral formula). This is done semi-automatically by evaluating the integrand on "thick" input intervals, subdividing if necessary. This can be slow and might fail to converge. For low-precision integration, computing the original integral directly by naive subdivision might be better.
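To illustrate what a "thick" evaluation means in practice, here is a small sketch (the specific midpoint and radii are arbitrary): the input ball covers a whole square in the complex plane, and the output is a single ball enclosing the range of $\operatorname{erf}$ over that square, which is generally much wider than a point evaluation at the same precision.

    #include "acb_hypgeom.h"

    int main(void)
    {
        acb_t x, y;
        slong prec = 64;

        acb_init(x); acb_init(y);

        /* "thick" input: midpoint 2 + i with radius 1/2 on both parts,
           i.e. a ball covering a whole square in the complex plane */
        acb_set_d_d(x, 2.0, 1.0);
        arb_add_error_2exp_si(acb_realref(x), -1);
        arb_add_error_2exp_si(acb_imagref(x), -1);

        /* one enclosure of erf over the entire input region */
        acb_hypgeom_erf(y, x, prec);

        acb_printn(y, 10, 0);
        flint_printf("\n");

        acb_clear(x); acb_clear(y);
        flint_cleanup();
        return 0;
    }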
The current version of the code also requires that the user input a radius that is known a priori to isolate the integration path from any singularities. Singularities on the integration path itself are obviously not supported at all.
A more robust way to do integration is with Taylor models, i.e. truncated Taylor series together with bounds on the approximation error that are propagated automatically upon function composition. This is more efficient, and works natively with real analytic functions, or even non-analytic functions like $|x|$. Taylor models could presumably also be extended to Puiseux-type series to support algebraic and logarithmic endpoint singularities. Unfortunately, I'm not aware of any work so far on implementing Taylor models for higher special functions. It's on my list of things to work on...
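For concreteness, here is a toy sketch of the Taylor model data structure and of how the remainder bound propagates under one operation (multiplication). Everything here is illustrative and of my own naming: it uses plain doubles and ignores rounding error, whereas a rigorous implementation would use interval or ball coefficients (or directed rounding) throughout.

    #include <math.h>

    #define DEG 3    /* truncation degree */

    /* Taylor model on |x - x0| <= r: a polynomial P in (x - x0) plus a
       bound rem with |f(x) - P(x - x0)| <= rem on the whole domain. */
    typedef struct {
        double c[DEG + 1];
        double rem;
    } tm_t;

    /* crude range bound: sum_k |c[k]| r^(k + shift) */
    static double poly_bound(const double * c, int len, double r, int shift)
    {
        double b = 0.0, rk = pow(r, shift);
        for (int k = 0; k < len; k++, rk *= r)
            b += fabs(c[k]) * rk;
        return b;
    }

    /* h = f * g on a domain of radius r (h must be distinct from f, g):
       multiply the polynomial parts exactly, keep terms up to degree DEG,
       and sweep everything else (truncated high-order terms plus the
       cross terms involving the input remainders) into h->rem. */
    static void tm_mul(tm_t * h, const tm_t * f, const tm_t * g, double r)
    {
        double full[2 * DEG + 1] = { 0.0 };

        for (int i = 0; i <= DEG; i++)
            for (int j = 0; j <= DEG; j++)
                full[i + j] += f->c[i] * g->c[j];

        for (int k = 0; k <= DEG; k++)
            h->c[k] = full[k];

        h->rem = poly_bound(full + DEG + 1, DEG, r, DEG + 1)
               + poly_bound(f->c, DEG + 1, r, 0) * g->rem
               + poly_bound(g->c, DEG + 1, r, 0) * f->rem
               + f->rem * g->rem;
    }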
Some people have also worked on rigorous error bounds for the double exponential integration formula, but I'm not aware of any general-purpose implementations yet.
While this doesn't quite answer your question, you might have some success combining Arb's special functions with other packages for integration. The main caveat is that Arb doesn't always compute very tight enclosures for special functions, which is hard to do in general. For example, the $\operatorname{erf}(x)$ implementation currently computes reasonably tight enclosures, but the $\Gamma(s,x)$ implementation does not (see this). This doesn't matter much for algorithms that only sample the integrand at discrete points (it's enough to increase the precision by $p$ bits to circumvent $p$ bits of cancellation), but it can be an obstacle for algorithms that need enclosures on "thick" input intervals (shrinking the output radius by $p$ bits through subdivision alone requires on the order of $2^p$ function evaluations).
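The discrete-point case is handled by the usual restart strategy, roughly as in the following sketch (the sample point, the 333-bit target, and the precision cap are all arbitrary choices of mine):

    #include "acb_hypgeom.h"

    int main(void)
    {
        acb_t x, y;
        slong goal = 333;    /* target relative accuracy in bits (~100 digits) */

        acb_init(x); acb_init(y);
        acb_set_d_d(x, 5.0, 5.0);    /* an arbitrary sample point */

        /* restart at doubled precision until the enclosure is tight enough;
           internal cancellation is absorbed by the extra working bits */
        for (slong prec = 64; prec <= 65536; prec *= 2)
        {
            acb_hypgeom_erf(y, x, prec);
            if (acb_rel_accuracy_bits(y) >= goal)
                break;
        }

        acb_printn(y, 30, 0);
        flint_printf("\n");

        acb_clear(x); acb_clear(y);
        flint_cleanup();
        return 0;
    }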