Minimum number of steps to reduce a number to 1
In summary:
- If n is even, divide by 2
- If n is 3 or its least significant bits are 01, subtract.
- If n's least significant bits are 11, add.
Repeat these operations on n until you reach 1, counting the number of operations performed. This is guaranteed to give the correct answer.
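For small n, this greedy rule can be cross-checked against an exhaustive search. Here is a minimal sketch (the names `optimal_steps` and `greedy_steps` are my own); the search relies on the fact, proven below, that an even number should always be divided:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def optimal_steps(n):
    """Exhaustive minimum number of steps to reduce n to 1."""
    if n == 1:
        return 0
    if n % 2 == 0:               # an even number is always divided
        return 1 + optimal_steps(n // 2)
    # odd: try both neighbours; both are even, so the recursion shrinks
    return 1 + min(optimal_steps(n - 1), optimal_steps(n + 1))

def greedy_steps(n):
    """Steps taken by the summarized rule."""
    count = 0
    while n > 1:
        if n % 2 == 0:
            n //= 2
        elif n == 3 or n % 4 == 1:
            n -= 1
        else:
            n += 1
        count += 1
    return count

# the greedy rule matches the exhaustive optimum for all small n
assert all(greedy_steps(n) == optimal_steps(n) for n in range(1, 10_000))
```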
As an alternative to the proof from @trincot, here is one with fewer cases that is hopefully clearer:
Proof:
Case 1: n is even
Let y be the value of the number after performing some operations on it. To start, y = n.
- Assume that dividing n by 2 is not the optimal approach.
- Then we must instead add or subtract some number of times.
- Mixing addition and subtraction would waste operations (they cancel out), so only one of the two is done.
- The count of additions or subtractions must be even, since stopping on an odd number would force continued adding or subtracting before a division is possible.
- Let 2k, where k is some positive integer, be the number of additions or subtractions performed.
- When subtracting, limit k so that n - 2k >= 2.
- After adding/subtracting, y = n + 2k, or y = n - 2k.
- Now divide. After dividing, y = n/2 + k, or y = n/2 - k
- 2k + 1 operations have been performed. But the same result could have been achieved in 1 + k operations, by dividing first and then adding or subtracting k times.
- Thus the assumption that dividing is not the optimal approach was wrong, and dividing is the optimal approach.
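As a concrete illustration of this argument (the helper names are mine, not part of the proof): both routes land on the same value, but dividing first uses k fewer operations.

```python
def via_add_then_divide(n, k):
    # add 1 exactly 2k times, then divide once: 2k + 1 operations
    return (n + 2 * k) // 2

def via_divide_then_add(n, k):
    # divide once, then add 1 exactly k times: 1 + k operations
    return n // 2 + k

# for any even n, both routes reach the same number
assert all(via_add_then_divide(n, k) == via_divide_then_add(n, k)
           for n in range(2, 101, 2) for k in range(10))
```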
Case 2: n is odd
The goal here is to show that when faced with an odd n, either adding or subtracting will result in fewer operations to reach a given state. We can use the fact that dividing is optimal when faced with an even number.
We will represent n with a partial bitstring showing the least significant bits: X1, or X01, etc, where X represents the remaining bits, and is nonzero. When X is 0, the correct answers are clear: for 1, you're done; for 2 (0b10), divide; for 3 (0b11), subtract and divide.
Attempt 1: Check whether adding or subtracting is better with one bit of information:
- Start: X1
  - add: (X+1)0, divide: X+1 (2 operations)
  - subtract: X0, divide: X (2 operations)
We reach an impasse: if X or X+1 were known to be even, the optimal move would be to divide. But we don't know whether X or X+1 is even, so we can't continue.
Attempt 2: Check whether adding or subtracting is better with two bits of information:
- Start: X01
  - add: X10, divide: X1
    - add: (X+1)0, divide: X+1 (4 operations)
    - subtract: X0, divide: X (4 operations)
  - subtract: X00, divide: X0, divide: X (3 operations)
    - add: X+1 (possibly not optimal) (4 operations)
Conclusion: for X01, subtracting will result in at least as few operations as adding: 3 and 4 operations versus 4 and 4 operations to reach X and X+1.
- Start: X11
  - add: (X+1)00, divide: (X+1)0, divide: X+1 (3 operations)
    - subtract: X (possibly not optimal) (4 operations)
  - subtract: X10, divide: X1
    - add: (X+1)0, divide: X+1 (4 operations)
    - subtract: X0, divide: X (4 operations)
Conclusion: for X11, adding will result in at least as few operations as subtracting: 3 and 4 operations versus 4 and 4 operations to reach X+1 and X.
Thus, if n's least significant bits are 01, subtract. If n's least significant bits are 11, add.
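These two conclusions can also be verified by brute force for small odd numbers. A sketch (assuming, per Case 1, that even numbers are always divided; `steps` is my own helper name):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def steps(n):
    """Exhaustive minimum step count, dividing whenever n is even."""
    if n == 1:
        return 0
    if n % 2 == 0:
        return 1 + steps(n // 2)
    return 1 + min(steps(n - 1), steps(n + 1))

for n in range(5, 10_001, 2):      # every odd number above 3
    if n % 4 == 1:                 # bits ...01: subtracting must be optimal
        assert steps(n) == 1 + steps(n - 1)
    else:                          # bits ...11: adding must be optimal
        assert steps(n) == 1 + steps(n + 1)
assert steps(3) == 1 + steps(2)    # the exception n = 3: subtract
```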
There is a pattern which allows you to know the optimal next step in constant time. In fact, there can be cases where there are two equally optimal choices -- in that case one of them can be derived in constant time.
If you look at the binary representation of n, and in particular at its least significant bits, you can decide which operation leads to the solution. In short:
- if the least significant bit is 0, then divide by 2
- if n is 3, or the 2 least significant bits are 01, then subtract 1
- in all other cases: add 1
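Packed into a tiny helper, this rule picks the optimal next number in constant time (a sketch; `next_step` is a name of my choosing):

```python
def next_step(n):
    """Return the next number on an optimal path from n (for n > 1)."""
    if n % 2 == 0:               # least significant bit 0: divide
        return n // 2
    if n == 3 or n % 4 == 1:     # n is 3, or bits end in 01: subtract
        return n - 1
    return n + 1                 # bits end in 11: add

# example: the full optimal path for 15
path = [15]
while path[-1] > 1:
    path.append(next_step(path[-1]))
print(path)   # [15, 16, 8, 4, 2, 1]
```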
Proof
If the least significant bit is zero, the next operation should be division by 2. We could instead try 2 additions and then a division, but that same result can be achieved in two steps: divide and then add. Similarly with 2 subtractions. And of course, we can ignore useless add & subtract pairs (or vice versa). So if the final bit is 0, division is the way to go.
Then the remaining 3-bit patterns are like `**1`. There are four of them. Let's write `a011` to denote a number that ends with bits 011 and has a set of prefixed bits that would represent the value a:

- `a001`: adding one would give `a010`, after which a division should occur: `a01`: 2 steps taken. We would not want to subtract one now, because that would lead to `a00`, which we could have arrived at in two steps from the start (subtract 1 and divide). So again we add and divide to get `a1`, and for the same reason we repeat that again, giving: `a+1`. This took 6 steps, but leads to a number that could be arrived at in 5 steps (subtract 1, divide 3 times, add 1), so clearly, we should not perform the addition. Subtraction is always better.
- `a111`: addition is equal or better than subtraction. In 4 steps we get `a+1`. Subtraction and division would give `a11`. Adding now would be inefficient compared to the initial addition path, so we repeat this subtract/divide twice and get `a` in 6 steps. If `a` ends in 0, then we could have done this in 5 steps (add, divide three times, subtract); if `a` ends in a 1, then even in 4. So addition is always better.
- `a101`: subtraction and double division leads to `a1` in 3 steps. Addition and division leads to `a11`. To now subtract and divide would be inefficient, compared to the subtraction path, so we add and divide twice to get `a+1` in 5 steps. But with the subtraction path, we could reach this in 4 steps. So subtraction is always better.
- `a011`: addition and double division leads to `a1`. To get `a` would take 2 more steps (5), to get `a+1`: one more (6). Subtraction, division, subtraction, double division leads to `a` (5); to get `a+1` would take one more step (6). So addition is at least as good as subtraction. There is however one case not to overlook: if a is 0, then the subtraction path reaches the solution half-way, in 2 steps, while the addition path takes 3 steps. So addition always leads to the solution, except when n is 3: then subtraction should be chosen.
So for odd numbers the second-last bit determines the next step (except for 3).
Python Code
This leads to the following algorithm (Python), which needs one iteration for each step and should thus have O(log n) complexity:
```python
def stepCount(n):
    count = 0
    while n > 1:
        if n % 2 == 0:              # bitmask: *0
            n = n // 2
        elif n == 3 or n % 4 == 1:  # bitmask: 01
            n = n - 1
        else:                       # bitmask: 11
            n = n + 1
        count += 1
    return count
```
See it run on repl.it.
JavaScript Snippet
Here is a version where you can input a value for n and let the snippet produce the number of steps:
```javascript
function stepCount(n) {
    var count = 0
    while (n > 1) {
        if (n % 2 == 0)                 // bitmask: *0
            n = n / 2
        else if (n == 3 || n % 4 == 1)  // bitmask: 01
            n = n - 1
        else                            // bitmask: 11
            n = n + 1
        count += 1
    }
    return count
}

// I/O
var input = document.getElementById('input')
var output = document.getElementById('output')
var calc = document.getElementById('calc')

calc.onclick = function () {
    var n = +input.value
    if (n > 9007199254740991) { // 2^53-1
        alert('Number too large for JavaScript')
    } else {
        var res = stepCount(n)
        output.textContent = res
    }
}
```
```html
<input id="input" value="123549811245">
<button id="calc">Calculate steps</button><br>
Result: <span id="output"></span>
```
Please be aware that the accuracy of JavaScript is limited to around 10^16, so results will be wrong for bigger numbers. Use the Python script instead to get accurate results.
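For example, Python's arbitrary-precision integers handle inputs far beyond 2^53 exactly; here is the same stepCount function applied to inputs chosen so the answer is easy to verify by hand:

```python
def stepCount(n):
    count = 0
    while n > 1:
        if n % 2 == 0:              # bitmask: *0
            n //= 2
        elif n == 3 or n % 4 == 1:  # bitmask: 01
            n -= 1
        else:                       # bitmask: 11
            n += 1
        count += 1
    return count

# a power of two only ever needs divisions
print(stepCount(2**100))      # 100
# one above a power of two: one subtraction, then only divisions
print(stepCount(2**60 + 1))   # 61
```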