Infinite loop on microcontroller vs modern CPU
Why is this seemingly ok on a microcontroller but not usually wanted on a microprocessor?
It is also unwanted on a microcontroller for the same reason: it wastes power.
Am I right in thinking that the ATmega is in fact running at 100%
Correct.
and that because it is so low powered it doesn't cause any obvious heat problems?
Correct. However, if you run your microcontroller on batteries, you have to think really hard about not wasting power. On a tiny CPU like the ATmega328P it will not cause heat problems, but it will definitely shorten battery life.
All CPUs, whether they're desktop powerhouses or tiny microcontrollers, use the same power-saving methods:
1- Reduce the clock speed or voltage.
2- Shut down unneeded hardware.
3- Go to sleep and wake up on an event (a special case of shutting down unneeded hardware; here it is the CPU itself that is shut down).
You can implement all of these on the ATmega328P. You can use a slower clock if you don't need all the awesome power of the 8-bit core, you can shut down unneeded peripherals... and, most important of all, there is the sleep mode.
You should read the manual for details, as there are several sleep modes which differ in wake-up latency, in which peripherals remain online and able to wake the CPU, in whether RAM contents are retained or lost, etc. But the basic idea is: when in sleep mode, the CPU is stopped, so it uses much less power. When an interrupt occurs, it wakes up the CPU, which then processes the interrupt.
Of course you have to pick the proper sleep mode and configure it properly so that the peripheral that needs to wake up the CPU (for example, a timer or a GPIO interrupt) is not itself shut down. If everything is shut down, you'll have to use NMI or even Reset to wake it up, in the latter case by rebooting it.
If all your application does is wait on interrupts, like:
Pin Change Interrupt (PCI) to detect a button press or an incoming signal
Timer
Data received by UART or USB
etc.
Then you don't need to spin in the main loop. After configuring everything at boot, you'd start the main loop with a "go to sleep" instruction. The next instruction will execute after the CPU wakes up, processes all pending interrupts, and returns to the main loop. The main loop can then, if required, do something about the received events, if they were not entirely handled by the interrupt code... and then go back to sleep.
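As a rough illustration, here is a minimal avr-libc sketch of that pattern on the ATmega328P, waking on a pin-change interrupt. The pin assignment (PB0) and the button_event flag are just assumptions for the example:

    #include <avr/io.h>
    #include <avr/interrupt.h>
    #include <avr/sleep.h>
    #include <stdbool.h>

    volatile bool button_event = false;   /* set by the ISR, consumed by the main loop */

    ISR(PCINT0_vect)                      /* pin-change interrupt for PORTB pins */
    {
        button_event = true;              /* the wake-up itself is implicit */
    }

    int main(void)
    {
        PCICR  |= (1 << PCIE0);           /* enable pin-change interrupts PCINT7..0 */
        PCMSK0 |= (1 << PCINT0);          /* watch pin PB0 (e.g. a button) */
        set_sleep_mode(SLEEP_MODE_PWR_DOWN);
        sei();                            /* enable interrupts globally */

        for (;;) {
            sleep_mode();                 /* CPU stops here until an interrupt fires */
            if (button_event) {           /* awake again: react to what the ISR flagged */
                button_event = false;
                /* handle the button press here */
            }
        }
    }

Pin-change interrupts are one of the few wake sources that still work in power-down mode, which is why this example uses one.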
Even if you're not using batteries, having a low standby current can let a mains-powered switching power supply skip cycles and waste a lot less power too.
On a microcontroller (more specifically, on an Arduino Uno board using the ATmega328P microcontroller) I would normally use an infinite loop to check for inputs, etc. (in Arduino land, this is normally the loop() function). If I leave this function blank, however, it doesn't cause any problems.
Classical programming pattern, having a main loop…
On a desktop / laptop with an Intel i7 CPU etc., if I ran a similar infinite loop (with nothing to do, or very little to do) it would pin the CPU at ~100% and generally spin up the fans etc. (a delay could be added to prevent this, for example).
… we might be writing different main loops.
This same main loop would be bad practice on a microcontroller too, because it also runs that CPU at full load – which burns power. Don't do that, especially if you're on battery.
Modern CPU cores have synchronization mechanisms that allow people to implement things like "let this loop's execution sleep until 1 ms has passed, or until this condition has changed".
That's basically at the core of any multi-tasking operating system – and basically all OSes that deserve that name are multi-tasking by now. On microcontrollers, you'll often find so-called RTOSes (real-time operating systems), which give you guarantees about how soon after an event the execution of something will have started, because that's typical for the use cases of microcontrollers. On desktop and server CPUs you'll usually find fully-fledged simultaneous multiprocessing OSes that make fewer guarantees on timing, but offer a much larger set of functionality and hardware and software environment abstraction.
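To make that "sleep until 1 ms has passed" idea concrete, here is a hedged FreeRTOS-style sketch; blink_task and toggle_led are hypothetical names I made up for the example, while vTaskDelay and pdMS_TO_TICKS are real FreeRTOS API:

    #include "FreeRTOS.h"
    #include "task.h"

    extern void toggle_led(void);          /* hypothetical stand-in for real work */

    static void blink_task(void *params)   /* hypothetical task */
    {
        (void)params;
        for (;;) {
            toggle_led();
            vTaskDelay(pdMS_TO_TICKS(1));  /* block for 1 ms; the scheduler can run
                                              other tasks or idle the CPU meanwhile */
        }
    }

While the task is blocked, the RTOS idle task runs, and that idle task is typically where the CPU is put into a low-power state.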
I don't know the Arduino execution environment well enough to actually make qualified statements about it; I'm researching this as I write. Arduino seems not designed for this – it really expects you to just spin busily. Since it has no "yield" functionality, the housekeeping that it does between calls to your loop() can't get called when you use the built-in delay() function. Ugh! Bad design.
In a power- and/or latency-aware design, you'd use an RTOS for your microcontroller. FreeRTOS is pretty popular; for the ARM Cortex-M series, mbed has a lot of traction; I personally like ChibiOS (but I don't think that's a good choice when switching over from Arduino sketches); the Linux Foundation is pushing Zephyr (which I'm conflicted about). Really, there's a wealth of choices, and the manufacturer of your microcontroller usually supports one or more of them through their IDEs.
Why is this seemingly ok on a microcontroller but not usually wanted on a microprocessor?
It's not really OK. In fact, a busy loop is an unusual design pattern for microcontrollers, which typically do things at regular intervals or react to external stimuli. Continuously using as much CPU as you can is not usually what you want on a microcontroller.
There are exceptions to that pattern, and they exist both in the MCU world and in the server/desktop processor world. When you know you practically always have, e.g., network data to process in a switch appliance, or when you know that your game could always precompute a bit of world that you might or might not need in a few moments, then you'll find these spin loops. In some hardware drivers you'll find "spin locks", meaning that the CPU continuously queries a value until it has changed (e.g. the hardware is done setting up and can be used now), but that's generally an emergency solution only, and you'll have to explain why you're doing it when trying to get such code into Linux, for example. A sketch of such a spin-wait follows below.
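For illustration, a minimal C11 spin-wait sketch; hw_ready is a hypothetical flag standing in for a real hardware status bit:

    #include <stdatomic.h>

    static atomic_bool hw_ready = false;   /* hypothetical "hardware is done" flag */

    void wait_until_ready(void)
    {
        /* Spin: the CPU stays at 100% load until the flag changes. */
        while (!atomic_load_explicit(&hw_ready, memory_order_acquire)) {
            /* busy-wait */
        }
    }

This is exactly the kind of loop that pins a core at 100%, which is why it needs justification in mainline driver code.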
Am I right in thinking that the ATmega is in fact running at 100%, and that because it is so low powered it doesn't cause any obvious heat problems?
Yes. The ATmega isn't, by modern standards, low-powered, but it's low-power enough for the heat not to become a problem.
Am I right in thinking that the ATmega is in fact running at 100%, and that because it is so low powered it doesn't cause any obvious heat problems?
Yes, it normally runs at 100% all the time, but is so low powered that it doesn't heat up significantly.
On a desktop / laptop with an Intel i7 CPU etc if I ran a similar infinite loop (with nothing to do, or very little to do) it would pin the CPU at ~100% and generally spin up the fans etc
The faster a CPU is clocked, the more power it draws, because each time a logic level changes it must charge the capacitances of the transistors in the gates. Modern CPUs are designed to run as fast as possible – actually faster than possible. Even after making the transistors as small as possible, using the lowest possible voltage, and applying a huge heatsink, they still can't run fast enough to satisfy the PC user's 'need for speed'. So they rely on the fact that the OS and application programs spend most of their time waiting for stuff to happen (user input, peripheral hardware, etc.).
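For reference, the charging loss described above is the standard CMOS dynamic-power relation, P ≈ α·C·V²·f (activity factor α, switched capacitance C, supply voltage V, clock frequency f). That's why both lowering the clock and lowering the voltage save power, and why the voltage matters quadratically.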
If you tried to run all the cores in an i7 continuously at maximum frequency, it would melt down. To prevent this, unused cores are powered off, and when maximum speed is not required (i.e., most of the time) the active core(s) run at a lower frequency. When idle, the OS doesn't just run a busy loop continuously executing instructions; it puts the CPU into a slowed-down or halted state while it waits for interrupts etc. Various parts of the CPU can also be powered down when not in use.
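Conceptually, an OS idle loop on x86 looks something like this (a kernel-mode sketch only; hlt stops the core until the next interrupt and is not usable from a normal user program):

    /* Conceptual x86 idle loop: instead of spinning, halt the core
       until the next interrupt arrives. Kernel mode (ring 0) only. */
    static void idle_loop(void)
    {
        for (;;) {
            __asm__ volatile ("hlt");   /* core stops executing until an interrupt */
        }
    }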
The ATmega can also be put into low-power modes, and individual peripherals can be turned off when not needed. If the system clock is changed to a lower frequency such as 32.768 kHz and all unnecessary peripherals are turned off, it can run (slowly) on just a few μA – not to reduce temperature, but to last longer on a small battery.
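As a rough illustration with avr-libc (the exact clock divider and which peripherals to disable depend entirely on your application; the arithmetic here assumes the 8 MHz internal oscillator):

    #include <avr/power.h>

    void enter_low_power_config(void)
    {
        clock_prescale_set(clock_div_256);  /* e.g. 8 MHz / 256 = 31.25 kHz */
        power_adc_disable();                /* ADC off */
        power_spi_disable();                /* SPI off */
        power_twi_disable();                /* TWI (I2C) off */
        power_timer1_disable();             /* unused timer off */
    }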
It is also possible to 'overclock' many ATmega chips. I have run an ATmega1284P (rated for 20 MHz max at 5 V) at 30 MHz and it worked fine, but got quite warm.