Integral calculus analog to the difference quotient

We have the fundamental theorem of calculus:

$$\int_a^bf'(x)~\mathrm dx=f(b)-f(a)$$

but without it, there is no way to simplify the Riemann sum in general; if there were, we would have no need to write the Riemann sum out like this in the first place.
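
For a concrete contrast: with $f(x)=x^2$ on $[0,1]$, the fundamental theorem gives the answer immediately, while evaluating your sum requires already knowing the closed form $\sum_{i=1}^ni^2=\frac{n(n+1)(2n+1)}6$:

$$\int_0^1x^2~\mathrm dx=\frac{1^3}3-\frac{0^3}3=\frac13,\qquad\lim_{n\to\infty}\sum_{i=1}^n\left(\frac in\right)^2\frac1n=\lim_{n\to\infty}\frac{n(n+1)(2n+1)}{6n^3}=\frac13.$$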

Also take note that what you have is not an infinite sum, but a limit of finite sums, and what you've written isn't even the most general form of the Riemann sum:

$$\int_a^bf(x)~\mathrm dx=\lim_{\max\Delta x\to0}\sum_{i=1}^nf(x_i^\star)\Delta x_i$$

where $a=x_0<x_1<\dots<x_n=b$, $x_i^\star\in[x_{i-1},x_i]$, $\Delta x_i=x_i-x_{i-1}$, and $\max\Delta x$ is the largest of the $\Delta x_i$. Written in this form, it should be easier to see what the Riemann sum is doing. All we're doing is making a bunch of rectangular approximations, where $f(x_i^\star)\Delta x_i$ is the area of the $i$th rectangle, with base length $\Delta x_i$ and height $f(x_i^\star)$, the value of $f$ at some point of that interval.
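
To make that concrete, here is a minimal numerical sketch in Python (the function and the random partition are my own illustrative choices, not anything from the question): it builds a partition of $[a,b]$, picks an arbitrary sample point in each subinterval, and watches the sums converge as the mesh shrinks.

```python
import random

def riemann_sum(f, a, b, n):
    """A general Riemann sum: random partition, random sample points."""
    # random partition a = x_0 < x_1 < ... < x_n = b
    xs = sorted([a, b] + [random.uniform(a, b) for _ in range(n - 1)])
    total = 0.0
    for left, right in zip(xs, xs[1:]):
        star = random.uniform(left, right)   # x_i^* in [x_{i-1}, x_i]
        total += f(star) * (right - left)    # f(x_i^*) * dx_i
    return total

# As n grows (so max dx typically shrinks), the sums approach
# 1/3, the integral of x^2 over [0, 1].
for n in (10, 100, 10_000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
```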

From this, we can see that what you've written is the specific case of $\Delta x_i=\frac{b-a}n$ and $x_i^\star=x_i$. Provided that the Riemann integral does in fact exist, you can use your form to calculate it. However, there are cases where your limit exists even though the integral does not. Consider, for example, a function that takes one value on rational numbers and another value on irrational numbers. If $a$ and $b$ are rational, then every sample point $x_i=a+i\cdot\frac{b-a}n$ is rational, so your formula only ever sees the rational values and completely ignores the rest of the function! This is why the general form of the Riemann integral allows $x_i^\star$ to be any point of its interval. Piecewise continuity is a sufficient condition for Riemann integrability, though, and that covers most of the integrals you will encounter.
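
Here is a small sketch of that failure mode, using Python's `fractions` module (the setup is mine, not from the question): with rational endpoints, every sample point of the uniform right-endpoint sum is rational, so the sums settle on a value even though the function has no Riemann integral.

```python
from fractions import Fraction

def dirichlet(x):
    # 1 on rationals, 0 on irrationals; every Fraction is rational
    return 1 if isinstance(x, Fraction) else 0

def right_endpoint_sum(f, a, b, n):
    dx = (b - a) / n                       # the question's dx = (b - a)/n
    return sum(f(a + i * dx) for i in range(1, n + 1)) * dx

# Every n prints exactly 1: the sample points a + i*dx are all rational,
# so the limit "converges" even though no Riemann integral exists.
for n in (10, 100, 1000):
    print(n, right_endpoint_sum(dirichlet, Fraction(0), Fraction(1), n))
```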


Quite simply: no, there isn't.

Beyond the more obvious differences, antiderivatives are just harder than derivatives, in a qualitative way. Derivatives can be computed systematically for essentially any function you can write down. It's not even very hard: a good project for first-year computer science students is to write a program that analytically computes derivatives, taking in `sin(cos(x)+x^2)` and spitting out `(2*x-sin(x))*cos(x^2+cos(x))`.
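
As a sketch of how mechanical this is, here is a toy differentiator in Python (the tuple-based expression format and the handful of rules are my own illustrative choices): each differentiation rule becomes one branch of a recursive function.

```python
# Expressions are nested tuples, e.g. ('sin', ('+', ('cos', 'x'), ('^', 'x', 2))).

def d(e):
    if e == 'x':
        return 1
    if isinstance(e, (int, float)):        # constants
        return 0
    op = e[0]
    if op == '+':                          # sum rule
        return ('+', d(e[1]), d(e[2]))
    if op == '*':                          # product rule
        return ('+', ('*', d(e[1]), e[2]), ('*', e[1], d(e[2])))
    if op == '^':                          # power rule, constant exponent
        return ('*', ('*', e[2], ('^', e[1], e[2] - 1)), d(e[1]))
    if op == 'sin':                        # chain rule
        return ('*', ('cos', e[1]), d(e[1]))
    if op == 'cos':
        return ('*', ('*', -1, ('sin', e[1])), d(e[1]))
    raise ValueError(f'unknown operator {op!r}')

expr = ('sin', ('+', ('cos', 'x'), ('^', 'x', 2)))
print(d(expr))   # the unsimplified derivative: cos(cos(x)+x^2) * (-sin(x) + 2x)
```

The output is unsimplified, but simplification is a cosmetic pass on top; the differentiation itself is a finite set of rewrite rules applied recursively.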

For antiderivatives, it's another story entirely. There are many fairly simple functions whose antiderivatives cannot be expressed in terms of the basic functions. One classic example is the logarithmic integral,

$$ \int \frac{1}{\ln x}\,dx = \mathrm{li}(x) $$

You can't express $\int \frac{1}{\ln x}\,dx$ in terms of any combination of addition, multiplication, division, constants, powers, roots, exponentials, logarithms, and trig functions. That is to say, $\mathrm{li}(x)$ is not an elementary function. Other non-elementary integrals include $\int \sin(x^2)\,dx$, $\int \sqrt{1-x^4}\,dx$, and $\int e^{-x^2}\,dx$. The impossibility here isn't merely an open problem: it has been rigorously proven, via Liouville's theory of elementary antiderivatives.

(Unfortunately, all those integrals are also pretty important: they come up, respectively, in number theory, optics, modelling the motion of a pendulum without a small-angle approximation, and statistics. So we've given them special names: $\mathrm{li}(x)$, $S(x)$, $K(x)$, and $\operatorname{erf}(x)$.)
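
As a quick sanity check, you can verify numerically that these named functions really are the antiderivatives in question. The sketch below uses the `mpmath` library (my choice of tool, not anything from the answer); note that $\operatorname{erf}$ is conventionally defined with a $2/\sqrt\pi$ factor in front of the integral.

```python
import mpmath as mp

a, b = 2, 5
# li(b) - li(a) should equal the integral of 1/ln(t) from a to b
print(mp.quad(lambda t: 1 / mp.log(t), [a, b]))  # direct numerical quadrature
print(mp.li(b) - mp.li(a))                       # same value via li

# erf carries a conventional 2/sqrt(pi) normalization
print(mp.quad(lambda t: mp.exp(-t**2), [0, 1]))
print(mp.erf(1) * mp.sqrt(mp.pi) / 2)            # matches after undoing the factor
```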

In some sense, if there were an extremely simple definition of the antiderivative, like there is for the derivative, we would expect it to lead to a much simpler way of finding antiderivatives. The fact that so many integrals can't be evaluated in any straightforward manner is evidence that no such simple definition exists.