Intuitive use of logarithms
Since logarithms convert multiplication into addition, they can be used to simplify basic arithmetic in the absence of computers.
Logarithms come in handy when searching for power laws. Suppose you have some data points given as pairs of numbers $(x,y)$. You could plot the two quantities directly against each other, but you could also try taking logarithms of both variables. If there is a power law relationship between $y$ and $x$, like
$$y=a x^n$$
then taking the log turns it into a linear relationship:
$$\log(y) = n \log(x) + \log(a)$$
Finding the exponent $n$ of the power law is now a piece of cake, since it corresponds to the slope of the graph.
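As a minimal sketch of this in Python (NumPy's `polyfit` is one way to fit the line; the data here are made up to follow $y = 2x^3$):

```python
import numpy as np

# Hypothetical data generated from y = 2 * x^3, so the true exponent is n = 3.
x = np.array([1.0, 2.0, 4.0, 8.0, 16.0])
y = 2.0 * x**3

# Fit a straight line to (log x, log y): the slope is the exponent n,
# the intercept is log(a).
n, log_a = np.polyfit(np.log(x), np.log(y), 1)

print(n)              # slope, approximately 3
print(np.exp(log_a))  # prefactor a, approximately 2
```

With noisy real data the fitted slope would of course only approximate the true exponent.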
If the data follow not a power law but an exponential or a logarithmic law, taking the log of only one of the variables will reveal this as well. Say, for an exponential law
$$y=a e^{b x}$$
taking the log of both sides gives
$$\log(y) = b x + \log(a)$$
which means that there is a linear relationship between $x$ and $\log(y)$.
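Sketched the same way in Python (hypothetical data generated from $y = 0.5\,e^{1.5x}$):

```python
import numpy as np

# Hypothetical data generated from y = 0.5 * exp(1.5 * x), so b = 1.5.
x = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
y = 0.5 * np.exp(1.5 * x)

# A straight-line fit of log(y) against x recovers b as the slope
# and log(a) as the intercept.
b, log_a = np.polyfit(x, np.log(y), 1)

print(b)              # approximately 1.5
print(np.exp(log_a))  # approximately 0.5
```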
I can visualize a logarithm by thinking of it as the answer to questions such as these:
"How many places does this number have?"
- log ($\log_{10}$): given a number in decimal
- lb ($\log_2$): in binary (for a practical application, see this answer on how to detect if integer operations might overflow)
- ln ($\log_e$): well, something in between
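A quick sketch of this "number of places" view in Python, for positive integers (the function names here are made up):

```python
import math

def decimal_places(n: int) -> int:
    """How many digits a positive integer has in base 10."""
    return math.floor(math.log10(n)) + 1

def binary_places(n: int) -> int:
    """How many digits a positive integer has in base 2."""
    return math.floor(math.log2(n)) + 1

print(decimal_places(1234))  # 4
print(binary_places(255))    # 8 (255 is 11111111 in binary)
```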
"How many levels does a (balanced) tree have which fits this number of leaf nodes?"
- log: each node has 10 children
- lb: each node has 2 children (a binary tree)
- this is mostly helpful if you know something about graph theory. If you're good at visualizing things, you can use this to get a grasp of the approximate value of a number's logarithm.
- it also helps to understand why binary search, tree map lookup and quicksort are so fast. The log function plays an important role in understanding algorithm complexity! The tree property can help to find fast algorithms for a problem.
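As a rough Python sketch of the tree view (the exact off-by-one depends on whether you count the root level; the helper name is made up):

```python
import math

def levels_for_leaves(leaves: int, children: int = 2) -> int:
    """Roughly how many levels a balanced tree with the given branching
    factor needs to fit this many leaf nodes (not counting the root)."""
    return math.ceil(math.log(leaves, children))

print(levels_for_leaves(1000))      # binary tree: 10, since 2**10 = 1024
print(levels_for_leaves(1000, 10))  # 10-ary tree: 3, since 10**3 = 1000

# The same logarithm explains why binary search is fast: repeatedly halving
# a sorted array of a million elements takes only about 20 steps.
print(math.ceil(math.log2(1_000_000)))  # 20
```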
Logarithms are also really helpful in the way that J.M. suggested in the comment to your question: getting minute quantities to a more usable scale.
A good example: probabilities.
In tasks such as speech-to-text and many other language-related computational problems, you deal with strings of elements (such as sentences composed of words), each with an associated probability (numbers between 0 and 1, often near zero, such as 0.00763, 0.034 and 0.000069). To get the total probability over all such elements, i.e. the whole sentence, the individual probabilities are multiplied: for example 0.00763 * 0.034 * 0.000069, which yields 0.00000001789998. Such numbers soon become too small for computers to handle easily, at least at normal 32-bit precision (even double precision has its limits, and you never know exactly how small the probabilities might get). If that happens, the results become inaccurate and might even be rounded down to zero, in which case the whole calculation is lost.
However, if you $-\log$-transform those numbers, you get two important advantages:
- the numbers stay in a range which is easily expressed in 32-bit floating-point numbers;
- you can simply add the logarithmic values, which is the same as multiplying the original values, and addition is much faster in terms of processing time than multiplication.

Example:
- $-\log(0.00763) = 2.11747\ldots$ (more places are irrelevant)
- $-\log(0.034) = 1.46852\ldots$
- $-\log(0.000069) = 4.16115\ldots$
- $2.11747 + 1.46852 + 4.16115 = 7.74714$
- $10^{-7.74714} = 0.00000001790028\ldots$
That's really, really close to the original number, and we only had to keep track of six places per intermediate number!
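The worked numbers above can be checked in a few lines of Python (base-10 logs, as in the example):

```python
import math

probs = [0.00763, 0.034, 0.000069]

# Multiplying tiny probabilities directly risks underflow for long sentences.
direct = 1.0
for p in probs:
    direct *= p

# Summing the negative base-10 logs keeps every number in a tame range.
neg_log_sum = sum(-math.log10(p) for p in probs)

print(direct)              # approximately 1.789998e-08
print(neg_log_sum)         # approximately 7.74714
print(10 ** -neg_log_sum)  # recovers the product from the log sum
```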