How to wrap around a range
You can use something like this:
while (angle > TWO_PI)
    angle -= TWO_PI;
while (angle < 0)
    angle += TWO_PI;
Here TWO_PI stands for a constant equal to 2π. Basically, you keep changing the angle by 2π until you are sure it no longer exceeds 2π or falls below 0.
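Wrapped into a compilable function, that looks something like the sketch below (the constant and function names here are my own, not from the snippet above):

static const double TWO_PI = 2.0 * 3.14159265358979;

/* Normalize angle into [0, 2*pi) by repeated subtraction/addition.
   Only sensible when the input is already within a few multiples of
   2*pi of the target range; the number of loop iterations grows with
   the magnitude of the input. */
static double normalizeLoop(double angle)
{
    while (angle >= TWO_PI)
        angle -= TWO_PI;
    while (angle < 0.0)
        angle += TWO_PI;
    return angle;
}

(I used >= so that an input of exactly 2π maps to 0; the snippet above uses >, which leaves 2π unchanged.)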
What you are looking for is the modulus. The fmod function alone will not do what you want, because it returns the remainder (which takes the sign of the dividend, so it can be negative) rather than the arithmetic modulus. Something like this should work:
inline double wrapAngle( double angle )
{
    double twoPi = 2.0 * 3.14159265358979;
    return angle - twoPi * floor( angle / twoPi );
}
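A quick check of its behaviour (a sketch; the printed values are approximate, and it assumes wrapAngle() from above is in scope):

#include <math.h>
#include <stdio.h>

int main(void)
{
    printf("%f\n", wrapAngle(7.0));    /* ~0.716815: 7 - 2*pi            */
    printf("%f\n", wrapAngle(-1.0));   /* ~5.283185: -1 + 2*pi           */
    printf("%f\n", fmod(-1.0, 2.0 * 3.14159265358979)); /* -1.000000: fmod keeps the sign */
    return 0;
}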
Edit:
The remainder is commonly defined as what is left over after long division (e.g. the remainder of 18/4 is 2, because 18 = 4 * 4 + 2). This gets hairy when you have negative numbers. The common convention for signed division (and the one fmod follows) is for the remainder to have the same sign as the dividend (e.g. the remainder of -18/4 is -2, because -18 = 4 * -4 + -2).
The definition of x modulo y is the smallest non-negative value of m in the equation x = y*c + m, where c is an integer. So 18 mod 4 would be 2 (where c = 4), and -18 mod 4 would also be 2 (where c = -5).
The simplest calculation of x mod y is x - y*floor(x/y), where floor(v) is the largest integer that is less than or equal to v.
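The difference is easy to see in code (a small sketch; the helper name floorMod is mine):

#include <math.h>
#include <stdio.h>

/* Arithmetic modulus via the floor formula: result is in [0, y) for y > 0. */
static double floorMod(double x, double y)
{
    return x - y * floor(x / y);
}

int main(void)
{
    printf("fmod(-18, 4)     = %g\n", fmod(-18.0, 4.0));     /* -2: same sign as the dividend */
    printf("floorMod(-18, 4) = %g\n", floorMod(-18.0, 4.0)); /*  2: the arithmetic modulus    */
    return 0;
}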
angle = fmod(angle, 2.0 * pi);
if (angle < 0.0)
    angle += 2.0 * pi;
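As a self-contained function, that might look like the following sketch (the function name and the locally defined pi constant are assumptions, since the snippet above leaves them to the caller):

#include <math.h>

/* Normalize angle into [0, 2*pi): fmod maps it into (-2*pi, 2*pi),
   and one addition fixes up negative results. */
static double wrapTwoPi(double angle)
{
    const double pi = 3.14159265358979;
    angle = fmod(angle, 2.0 * pi);
    if (angle < 0.0)
        angle += 2.0 * pi;
    return angle;
}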
Edit: After re-reading this (and looking at Jonathan Leffler's answer) I was a bit surprised by his conclusion, so I rewrote the code into what I considered a somewhat more suitable form (e.g., printing out a result from the computation, so the compiler couldn't discard the computation entirely because its result was never used). I also changed it to use the Windows performance counter, since he didn't include his timer class and std::chrono::high_resolution_clock is completely broken in both of the compilers I have handy right now.
I also did a bit of general code cleanup (this is tagged C++, not C), to get this:
#include <math.h>
#include <iostream>
#include <vector>
#include <chrono>
#include <windows.h>
static const double PI = 3.14159265358979323846;
// r1: normalize by repeated subtraction/addition of 2*PI (the "while" approach).
static double r1(double angle)
{
    while (angle > 2.0 * PI)
        angle -= 2.0 * PI;
    while (angle < 0)
        angle += 2.0 * PI;
    return angle;
}

// r2: normalize with fmod, then correct negative results.
static double r2(double angle)
{
    angle = fmod(angle, 2.0 * PI);
    if (angle < 0.0)
        angle += 2.0 * PI;
    return angle;
}

// r3: normalize with the floor-based modulus formula.
static double r3(double angle)
{
    double twoPi = 2.0 * PI;
    return angle - twoPi * floor(angle / twoPi);
}
struct result {
    double sum;
    long long clocks;

    result(double d, long long c) : sum(d), clocks(c) {}

    friend std::ostream &operator<<(std::ostream &os, result const &r) {
        return os << "sum: " << r.sum << "\tticks: " << r.clocks;
    }
};

result operator+(result const &a, result const &b) {
    return result(a.sum + b.sum, a.clocks + b.clocks);
}
struct TestSet { double start, end, increment; };

// Time `count` passes of f over the range of angles described by `test`,
// accumulating the results so the computation can't be optimized away.
template <class F>
result tester(F f, TestSet const &test, int count = 5)
{
    LARGE_INTEGER start, stop;
    double sum = 0.0;

    QueryPerformanceCounter(&start);
    for (int i = 0; i < count; i++) {
        for (double angle = test.start; angle < test.end; angle += test.increment)
            sum += f(angle);
    }
    QueryPerformanceCounter(&stop);

    return result(sum, stop.QuadPart - start.QuadPart);
}
int main() {
    std::vector<TestSet> tests {
        { -6.0 * PI, +6.0 * PI, 0.01 },
        { -600.0 * PI, +600.0 * PI, 3.00 }
    };

    std::cout << "Small angles:\n";
    std::cout << "loop subtraction: " << tester(r1, tests[0]) << "\n";
    std::cout << "            fmod: " << tester(r2, tests[0]) << "\n";
    std::cout << "           floor: " << tester(r3, tests[0]) << "\n";

    std::cout << "\nLarge angles:\n";
    std::cout << "loop subtraction: " << tester(r1, tests[1]) << "\n";
    std::cout << "            fmod: " << tester(r2, tests[1]) << "\n";
    std::cout << "           floor: " << tester(r3, tests[1]) << "\n";
}
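For readers not on Windows, a portable variant of tester() is sketched below using std::chrono::steady_clock (this is my own substitution, not part of the original code; it reuses the result and TestSet types from above, and reports microseconds rather than performance-counter ticks):

// Portable timing sketch: same loop as tester(), but timed with
// std::chrono::steady_clock instead of QueryPerformanceCounter.
template <class F>
result tester_chrono(F f, TestSet const &test, int count = 5)
{
    double sum = 0.0;
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < count; i++) {
        for (double angle = test.start; angle < test.end; angle += test.increment)
            sum += f(angle);
    }
    auto stop = std::chrono::steady_clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(stop - start);
    return result(sum, us.count());   // here "clocks" holds microseconds
}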
The results I got were as follows:
Small angles:
loop subtraction: sum: 59196    ticks: 684
            fmod: sum: 59196    ticks: 1409
           floor: sum: 59196    ticks: 1885

Large angles:
loop subtraction: sum: 19786.6  ticks: 12516
            fmod: sum: 19755.2  ticks: 464
           floor: sum: 19755.2  ticks: 649
At least to me, the results seem to support a rather different conclusion than the one Jonathan reached. Looking at the version that does subtraction in a loop, we see two points: first, for the large-angle test it produces a sum that differs from the other two (i.e., it's inaccurate), and second, it's horribly slow. Unless you know for certain that your inputs always start out nearly normalized, this is basically just unusable.
Between the fmod version and the floor version there seems to be no room for argument: they both produce accurate results, but the fmod version is faster in both the small-angle and large-angle tests.
I did a bit more testing, experimenting with increasing the number of repetitions and decreasing the step sizes in the large angles test. Although I suppose it's possible it's simply due to a difference in platform or compiler, I was unable to find any circumstance or situation that even came close to upholding Jonathan's results or conclusion.
Bottom line: if you have a lot of prior knowledge about your input, and know it'll always be nearly normalized before you normalize it, then you might be able to get away with doing subtraction in a loop. Under any other circumstance, fmod is the clear choice. There seems to be no circumstance in which the floor version makes any sense at all.
Oh, for what it's worth:
OS: Windows 7 Ultimate
Compiler: g++ 4.9.1
Hardware: AMD A6-6400K
Out of curiosity, I experimented with the three algorithms given in other answers, timing them.
When the values to be normalized are close to the range 0..2π, the while algorithm is quickest, the algorithm using fmod() is slowest, and the algorithm using floor() is in between.
When the values to be normalized are not close to the range 0..2π, the while algorithm is slowest, the algorithm using floor() is quickest, and the algorithm using fmod() is in between.
So, I conclude that:
- If the angles are (generally) close to normalized, the while algorithm is the one to use.
- If the angles are not close to normalized, then the floor() algorithm is the one to use.
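To make that concrete, here is a minimal dispatch sketch (my own illustration, not part of the tested code), using the r1 and r3 functions defined in the test code below; the cutoff of one extra turn in either direction is an arbitrary assumption:

/* Sketch only: choose the normalization strategy based on how far the
   input is from [0, 2*PI). The cutoff (one full turn either side) is
   an arbitrary choice, not something the measurements pin down. */
static double normalize(double angle)
{
    if (angle >= -2.0 * PI && angle < 4.0 * PI)
        return r1(angle);   /* nearly normalized: the while loop is quickest */
    return r3(angle);       /* far from normalized: floor() is quickest */
}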
Test results:
r1 = while, r2 = fmod(), r3 = floor()
Near Normal Far From Normal
r1 0.000020 r1 0.000456
r2 0.000078 r2 0.000085
r3 0.000058 r3 0.000065
r1 0.000032 r1 0.000406
r2 0.000085 r2 0.000083
r3 0.000057 r3 0.000063
r1 0.000033 r1 0.000406
r2 0.000085 r2 0.000085
r3 0.000058 r3 0.000065
r1 0.000033 r1 0.000407
r2 0.000086 r2 0.000083
r3 0.000058 r3 0.000063
Test code:
The test code used the value shown for PI. The C standard does not define a value for π, but POSIX does define M_PI and a number of related constants, so I could have written my code using M_PI instead of PI.
#include <math.h>
#include <stdio.h>
#include "timer.h"
static const double PI = 3.14159265358979323846;
static double r1(double angle)
{
    while (angle > 2.0 * PI)
        angle -= 2.0 * PI;
    while (angle < 0)
        angle += 2.0 * PI;
    return angle;
}

static double r2(double angle)
{
    angle = fmod(angle, 2.0 * PI);
    if (angle < 0.0)
        angle += 2.0 * PI;
    return angle;
}

static double r3(double angle)
{
    double twoPi = 2.0 * PI;
    return angle - twoPi * floor( angle / twoPi );
}
static void tester(const char * tag, double (*test)(double), int noisy)
{
    typedef struct TestSet { double start, end, increment; } TestSet;
    static const TestSet tests[] =
    {
        { -6.0 * PI, +6.0 * PI, 0.01 },
        // { -600.0 * PI, +600.0 * PI, 3.00 },
    };
    enum { NUM_TESTS = sizeof(tests) / sizeof(tests[0]) };
    Clock clk;
    clk_init(&clk);
    clk_start(&clk);
    for (int i = 0; i < NUM_TESTS; i++)
    {
        for (double angle = tests[i].start; angle < tests[i].end; angle += tests[i].increment)
        {
            double result = (*test)(angle);
            if (noisy)
                printf("%12.8f : %12.8f\n", angle, result);
        }
    }
    clk_stop(&clk);
    char buffer[32];
    printf("%s %s\n", tag, clk_elapsed_us(&clk, buffer, sizeof(buffer)));
}
int main(void)
{
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    tester("r1", r1, 0);
    tester("r2", r2, 0);
    tester("r3", r3, 0);
    return(0);
}
Testing was done on Mac OS X 10.7.4 with the standard /usr/bin/gcc (i686-apple-darwin11-llvm-gcc-4.2 (GCC) 4.2.1 (Based on Apple Inc. build 5658) (LLVM build 2336.9.00)). The 'close to normalized' test code is shown above; the 'far from normalized' test data was created by uncommenting the second, commented-out entry in the tests[] array.
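For reference, that second entry uncommented looks like this (whether the near-normal entry was also left active for that run isn't stated, so treat this as one plausible reading):

    static const TestSet tests[] =
    {
        { -6.0 * PI, +6.0 * PI, 0.01 },      /* close to normalized */
        { -600.0 * PI, +600.0 * PI, 3.00 },  /* far from normalized */
    };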
Timing with a home-built GCC 4.7.1 is similar (the same conclusions would be drawn):
Near Normal Far From Normal
r1 0.000029 r1 0.000321
r2 0.000075 r2 0.000094
r3 0.000054 r3 0.000065
r1 0.000028 r1 0.000327
r2 0.000075 r2 0.000096
r3 0.000053 r3 0.000068
r1 0.000025 r1 0.000327
r2 0.000075 r2 0.000101
r3 0.000053 r3 0.000070
r1 0.000028 r1 0.000332
r2 0.000076 r2 0.000099
r3 0.000050 r3 0.000065