What's the difference between mobile and desktop processors?


Note: This answer is written with the assumption that the CPUs being compared consist of commercially-available Intel, AMD, and ARM-based SoCs from approximately 2006 to 2015. Any set of comparison measurements will be invalid given a wide enough scope; I wanted to provide a very specific and "tangible" answer here while also covering the two most widely used types of processor, so I made a bunch of assumptions that may not be valid in the absolutely general case of CPU design. If you have nitpicks, please keep this in mind before you share them. Thanks!


Let's get one thing straight: MHz / GHz and number of cores are no longer reliable indicators of the relative performance of any two arbitrary processors.

They were dubious numbers at best even in the past, but now that we have mobile devices, they are absolutely terrible indicators. I'll explain where they can be used later in my answer, but for now, let's talk about other factors.

Today, the best numbers to consider when comparing processors are Thermal Design Power (TDP), and Feature Fabrication Size, aka "fab size" (in nanometers -- nm).

Basically: as the Thermal Design Power increases, the "scale" of the CPU increases. Think of the "scale" between a bicycle, a car, a truck, a train, and a C-17 cargo airplane. Higher TDP means larger scale. The MHz may or may not be higher, but other factors like the complexity of the microarchitecture, the number of cores, the branch predictor's performance, the amount of cache, the number of execution pipelines, etc. all tend to be higher on larger-scale processors.

Now, as the fab size decreases, the "efficiency" of the CPU increases. So, if we assume two processors which are designed exactly the same except that one of them is scaled down to 14nm while the other is at 28nm, the 14nm processor will be able to:

  • Perform at least as fast as the higher fab size CPU;
  • Do so using less power;
  • Do so while dissipating less heat;
  • Do so using a smaller volume in terms of the physical size of the chip.
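To make the "efficiency" side of this concrete, here is a minimal sketch of the classic dynamic-power relationship (power ≈ capacitance × voltage² × frequency). The capacitance and voltage figures below are made up purely for illustration; the point is that a die shrink typically lowers both the switched capacitance and the operating voltage:

```python
# Toy illustration of dynamic CPU power: P ≈ C * V^2 * f
# The capacitance and voltage values below are made up for illustration only.

def dynamic_power(cap_nF, volts, freq_ghz):
    """Approximate switching power in watts (C in nF, f in GHz)."""
    return cap_nF * 1e-9 * volts**2 * freq_ghz * 1e9

old_node = dynamic_power(cap_nF=1.0, volts=1.1, freq_ghz=2.0)   # hypothetical 28 nm part
new_node = dynamic_power(cap_nF=0.7, volts=1.0, freq_ghz=2.0)   # same design shrunk to 14 nm

print(f"28 nm-style part: {old_node:.1f} W")
print(f"14 nm-style part: {new_node:.1f} W  ({new_node / old_node:.0%} of the original)")
```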

Generally, when companies like Intel and the ARM-based chip manufacturers (Samsung, Qualcomm, etc.) decrease fab size, they also tend to ramp up performance a bit. This puts a damper on exactly how much power efficiency they can gain, but everyone likes their stuff to run faster, so they design their chips in a "balanced" way: you get some power efficiency gains and some performance gains. At the extremes, they could keep the processor exactly as power-hungry as the previous generation but ramp up the performance a lot, or they could keep the processor at exactly the same speed as the previous generation but cut power consumption by a lot.

The main point to consider is that the current generation of tablet and smartphone CPUs has a TDP of around 2 to 4 watts and a fab size of 28 nm. A low-end desktop processor from 2012 has a TDP of at least 45 watts and a fab size of 22 nm. Even if the tablet's System on a Chip (SoC) were connected to AC mains power, so it didn't have to sip power to save battery, a quad-core tablet SoC would lose every single CPU benchmark to a 2012 low-end dual-core "Core i3" running at a possibly lower clock speed.

The reasons:

  • The Core i3/i5/i7 chips are MUCH larger (in terms of number of transistors, physical die area, power consumption, etc.) than a tablet chip;
  • Chips that go into desktops care MUCH less about power savings. On mobile SoCs, software, hardware, and firmware combine to aggressively cut performance in order to give you long battery life. On desktops, these power-saving features are only used when they do not significantly impact top-end performance, and when an application asks for top-end performance, it can be delivered consistently. A mobile processor, by contrast, uses many little "tricks" -- dropping frames here and there in games, for example -- that are mostly imperceptible to the eye but save battery life.
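To put rough numbers on that "scale" difference, here's a back-of-the-envelope sketch. The IPC (instructions per clock) figures are my own guesses for illustration, not measured values:

```python
# Very crude throughput estimate: cores * clock (GHz) * instructions-per-clock.
# The IPC numbers are rough guesses for illustration, not benchmark results.

def rough_throughput(cores, clock_ghz, ipc):
    return cores * clock_ghz * ipc  # "billions of instructions per second", very roughly

tablet_soc = rough_throughput(cores=4, clock_ghz=1.6, ipc=1.5)   # hypothetical quad-core ARM SoC
core_i3    = rough_throughput(cores=2, clock_ghz=3.3, ipc=3.0)   # hypothetical 2012 dual-core i3

print(f"Tablet SoC : ~{tablet_soc:.0f}")
print(f"Core i3    : ~{core_i3:.0f}")
# Even this naive model ignores caches, memory bandwidth and throttling,
# all of which favor the desktop chip even further.
```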

One neat analogy I just thought of: you could think of a processor's "MHz" like the RPM gauge on a vehicle's internal combustion engine. If I rev my motorcycle's engine to 6000 RPM, does that mean it can pull more load than a train's 16-cylinder prime mover at 1000 RPM? No, of course not. A prime mover produces around 2,000 to 4,000 horsepower, while a motorcycle engine produces around 100 to 200 horsepower, with the highest-horsepower production motorcycle engines just topping 200 hp.

TDP is closer to horsepower than MHz, but not exactly.

A counterexample is when comparing something like a 2014-model "Haswell" (4th Generation) Intel Core i5 processor to something like a high-end AMD processor. These two CPUs will be close in performance, but the Intel processor will use 50% less energy! Indeed, a 55 Watt Core i5 can often outperform a 105 Watt AMD "Piledriver" CPU. The primary reason here is that Intel has a much more advanced microarchitecture that has pulled away from AMD in performance since the "Core" brand started. Intel has also been advancing their fab size much faster than AMD, leaving AMD in the dust.
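As a quick sanity check on that energy claim: if the two chips really do finish the same work in the same time, the ratio falls straight out of the TDP figures (a ballpark only, since real power draw depends on the workload and TDP is just a design ceiling):

```python
# Rough energy comparison assuming both chips finish the same job in the same time.
# Real power draw is workload-dependent; TDP is only a ceiling, so this is a ballpark.
intel_tdp_watts = 55
amd_tdp_watts = 105

savings = 1 - intel_tdp_watts / amd_tdp_watts
print(f"Intel part uses roughly {savings:.0%} less energy for the same work")  # ~48%
```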

Desktop and laptop processors are fairly similar to each other in performance, until you get down to tiny Intel tablets, which have performance similar to ARM mobile SoCs due to power constraints. But as long as desktop and "full-scale" laptop processors keep improving year over year, which seems likely, tablet processors will not overtake them.

I'll conclude by saying that MHz and # of Cores are not completely useless metrics. You can use these metrics when you are comparing CPUs which:

  • Are in the same market segment (smartphone/tablet/laptop/desktop);
  • Are in the same CPU generation (i.e. the numbers are only meaningful if the CPUs are based on the same architecture, which usually means they'd be released around the same time);
  • Have the same fab size and similar or identical TDP;
  • Differ primarily or solely in clock speed or number of cores when you compare the rest of their specs.

If these statements are true of any two CPUs -- for instance, the Intel Xeon E3-1270v3 vs. the Intel Xeon E3-1275v3 -- then comparing them by MHz and/or number of cores can give you a clue about the difference in performance, but on most workloads the real difference will be much smaller than you expect.
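To see why the real-world gap is smaller than the raw clock ratio suggests, here's a toy Amdahl-style model that assumes only part of a workload actually scales with core clock. The clock speeds below are hypothetical placeholders, not the actual Xeon specs:

```python
# Toy model: only the compute-bound fraction of a workload scales with clock speed;
# the rest is limited by memory, disk, etc. Clock speeds here are hypothetical.

def estimated_speedup(base_ghz, faster_ghz, compute_bound_fraction=0.6):
    clock_ratio = faster_ghz / base_ghz
    # Amdahl-style: only the compute-bound part benefits from the faster clock.
    return 1 / ((1 - compute_bound_fraction) + compute_bound_fraction / clock_ratio)

print(f"Raw clock ratio : {3.7 / 3.5:.2f}x")
print(f"Likely speedup  : {estimated_speedup(3.5, 3.7):.2f}x")
```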

Here's a little chart I did up in Excel to demonstrate the relative importance of some of the common CPU specs (note: "MHz" actually refers to clock speed, but I was in a hurry; "ISA" refers to Instruction Set Architecture, i.e. the actual design of the CPU).

Note: These numbers are approximate/ballpark figures based on my experience, not any scientific research.

[Chart: Ballpark figures for CPU specs' relative importance]


Hmm... this is a good question.

The answer is NO: a Samsung Galaxy is most likely not as powerful as your desktop PC, and this would be obvious if you ran a comprehensive CPU benchmark.

I will try to put the answer together the way I see it. Other, more experienced members will probably add more detail and value later.

First of all, due to differences in CPU architecture, mobile device processors and desktop PC processors support different instruction sets. As you have probably guessed, the instruction set is larger for PCs.

Another thing is false advertising. The speed advertised for a PC CPU is usually achievable, and the CPU can run at that speed for long periods of time. This is possible because of the ample power supply from the mains and a decent cooling system that removes heat from the core. This is not the case for mobile devices: the advertised speed is the maximum possible speed, and it is much higher than the average speed. Mobile devices will often slow down their CPU because of overheating and to save battery.
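Here's a toy simulation of that throttling behaviour, with entirely made-up thermal numbers, just to show the pattern: under sustained load the chip hits its temperature limit, the clock drops, and the average speed ends up well below the advertised peak.

```python
# Toy model of thermal throttling. All temperatures, clocks and coefficients
# are made up purely to illustrate the pattern, not taken from any real SoC.

peak_ghz, throttled_ghz = 2.5, 1.4     # advertised peak vs. throttled clock
temp, limit, ambient = 35.0, 70.0, 35.0
clocks = []

for _ in range(120):                   # two minutes of sustained load
    clock = peak_ghz if temp < limit else throttled_ghz
    clocks.append(clock)
    heat_in = clock * 1.2              # hotter the faster it runs (arbitrary coefficient)
    cooling = (temp - ambient) * 0.03  # weak passive cooling, no fan
    temp += heat_in - cooling

print(f"Advertised peak : {peak_ghz} GHz")
print(f"Average clock   : {sum(clocks) / len(clocks):.2f} GHz")
```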

Last but not least is the availability of supporting components like main memory (RAM), cache memory, etc. The amount of RAM is not the only criterion; there is also the RAM clock speed, which determines how quickly data can be stored in and retrieved from RAM. These parameters also vary between mobile devices and PCs.

You could come up with more differences, but the root causes are power consumption and size requirements. PCs can afford to draw more power from the mains and can afford to be bigger, so they will always deliver more processing power.

For additional reading I recommend: Processors: Computer vs Mobile


Actually, the MHz rating has little relevance between processors from different manufacturers. It only has some relevance between CPUs in exactly the same family. While phone processors are becoming pretty fast and might well beat the pants off those old Pentium 4s, you still can't compare them to even a low-end Core i3.

You should be aware that there are quite a number of factors that influence overall performance, and not just the CPU. For example:

  • CPU clock speed
  • Number of processor cores
  • Number of instructions per cycle
  • Branch prediction
  • Instruction set
  • Instruction width
  • Bus width
  • Memory Speed
  • Cache size
  • Cache design
  • Silicon layout
  • Software optimisation
  • etc

So the clock speed or MHz rating is just one of a number of different things you can use to gauge performance. An AMD processor is a rather different kettle of fish from one made by Intel or ARM. It has long been known that an AMD CPU at 3 GHz does not perform as well as an Intel CPU with the same core count and a similar spec and GHz rating.

You'll also note that memory speed affects performance, as does cache. Server processors have much larger caches than their desktop counterparts, let alone the chips you'll find in your phone, so they spend less time waiting for data than a phone CPU does.
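The usual way to quantify "time spent waiting for data" is the average memory access time, AMAT = hit time + miss rate × miss penalty. Here's a small sketch with invented latencies and miss rates, just to show how much a larger, more effective cache helps:

```python
# Average Memory Access Time: hit_time + miss_rate * miss_penalty.
# Latencies (in ns) and miss rates below are invented for illustration only.

def amat_ns(hit_time_ns, miss_rate, miss_penalty_ns):
    return hit_time_ns + miss_rate * miss_penalty_ns

big_cache_cpu   = amat_ns(hit_time_ns=1.0, miss_rate=0.02, miss_penalty_ns=80)   # server/desktop-like
small_cache_cpu = amat_ns(hit_time_ns=1.0, miss_rate=0.10, miss_penalty_ns=120)  # phone-SoC-like

print(f"Large-cache CPU : {big_cache_cpu:.1f} ns average access")
print(f"Small-cache CPU : {small_cache_cpu:.1f} ns average access")
```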

The reason I've added instruction set and software optimisation is that some algorithms run better on one chip than another because they can make use of special instructions to speed up certain operations that would otherwise take dozens of instructions. This should not be underestimated.
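A quick way to see the effect of software optimisation and special (e.g. SIMD) instructions is to compare a plain Python loop with NumPy, whose inner loops are compiled and usually vectorised. NumPy is just a convenient stand-in here for "code that exploits the hardware well":

```python
# Same computation, two code paths: a plain interpreted loop vs. NumPy's
# compiled (and usually SIMD-accelerated) routines. Timings will vary by machine.
import time
import numpy as np

data = np.random.rand(1_000_000)

start = time.perf_counter()
total = 0.0
for x in data:                 # interpreted, one element at a time
    total += x
loop_seconds = time.perf_counter() - start

start = time.perf_counter()
total_np = data.sum()          # optimised native loop over the whole array
numpy_seconds = time.perf_counter() - start

print(f"Python loop : {loop_seconds:.4f} s")
print(f"NumPy sum   : {numpy_seconds:.4f} s")
```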

It should be pointed out that TDP by itself tells you nothing about performance. An identical CPU built on a smaller manufacturing process -- going from 32 nm to 22 nm, for example -- will have a lower TDP on the 22 nm die than on the 32 nm one. But has performance decreased? No, quite the opposite. There do exist cross-platform measurements that attempt to gauge relative performance, such as the Linpack benchmark, but these are artificial measures, and benchmarks are rarely a good indicator of performance for a particular application.