Why are integrated circuits powered by low voltage and high current?
I am not sure why this wasn't the first thing pointed out by any of the earlier answers: as transistors are made smaller to increase speed, increase density, and reduce power consumption, the gate oxide layer is made thinner (which also increases leakage currents).
A thin gate oxide can't withstand high voltages, so you end up with a device that only operates at very low voltages. Thin oxide layers also leak more, so you don't want a high voltage anyway, since that would just increase the leakage current and the static power consumption.
Your mistake is this:
Data processing, unlike a power system, isn't about delivering power; it's about moving and transforming information. So it is not that designers choose to operate at low voltage and high current in spite of \$I^2R\$ losses. Yes, they are concerned about power consumption and the heat from those losses, but they aren't concerned with the efficient delivery of power. A power engineer has to deliver X amount of power and raises the voltage so that the current can drop while the same power is delivered. A digital designer would outright reduce the "power output" if they could.
Their optimizations necessitate low operating voltages and result in higher leakage currents. The goal of these optimizations is to make transistors smaller so that more of them can be packed in and switched faster, and when you have millions upon millions of transistors switching very frequently, that means a lot of charging and discharging of gate capacitances. This dynamic current produces the high peak currents, which can reach tens of amps in high-speed, high-density digital logic. All of that current and power is undesired and unintentional.
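A rough back-of-the-envelope sketch makes the "tens of amps" plausible. The numbers below are assumptions chosen only for order of magnitude, not figures for any real chip; the average dynamic supply current scales roughly as \$I \approx \alpha N C V f\$.

```python
# Back-of-envelope estimate of average dynamic supply current.
# All numbers are illustrative assumptions, not data for a real chip.
# I_avg ≈ alpha * N * C_gate * V_dd * f

alpha  = 0.1      # activity factor: fraction of gates toggling each cycle (assumed)
N      = 100e6    # number of gate loads being driven (assumed)
C_gate = 1e-15    # effective load capacitance per gate, ~1 fF (assumed)
V_dd   = 1.0      # core supply voltage, volts
f      = 3e9      # clock frequency, hertz

I_avg = alpha * N * C_gate * V_dd * f
print(f"average dynamic current ≈ {I_avg:.0f} A")   # ≈ 30 A
```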
Ideally, we would like no current at all, because our concern is information, not energy or power. A high voltage would also be nice for noise immunity, but that runs directly counter to making transistors smaller.
The power required to repeatedly switch a capacitance between logic 0 and logic 1 is proportional to that capacitance times the clock frequency times the supply voltage squared. In CMOS digital circuits the logic gate inputs look like capacitors, so charging and discharging those capacitances accounts for most of the power these circuits consume.
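A minimal sketch of that relationship, \$P_{dyn} = \alpha C V^2 f\$, with made-up but plausible numbers, shows why lowering the supply voltage pays off so heavily: halving the voltage quarters the dynamic power.

```python
# Dynamic power of a switched CMOS load: P = alpha * C * V^2 * f.
# The numbers are illustrative assumptions to show the V^2 dependence.

def dynamic_power(alpha, C, V, f):
    """Average power spent charging/discharging capacitance C each cycle."""
    return alpha * C * V**2 * f

alpha = 0.1     # activity factor (assumed)
C     = 10e-9   # total switched capacitance, 10 nF aggregate (assumed)
f     = 3e9     # 3 GHz clock

for V in (5.0, 3.3, 1.0):
    print(f"V = {V:3.1f} V -> P = {dynamic_power(alpha, C, V, f):5.1f} W")
# V = 5.0 V -> P =  75.0 W
# V = 3.3 V -> P =  32.7 W
# V = 1.0 V -> P =   3.0 W
```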
As you mention, the \$I^2R\$ losses in the conductors would go up, so the low-voltage power supplies are placed as close as possible to the processor. Look at a modern motherboard and you will see a 12 V connector very close to the CPU, along with several large inductors and capacitors: these belong to the low-voltage switching regulators that step the 12 V down to the core voltage.
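That placement is easy to justify with a quick \$I^2R\$ estimate. Assuming a hypothetical 1 mΩ of distribution resistance and 100 W delivered to the CPU (both numbers invented for illustration), routing the power at 12 V and converting to ~1 V right at the socket keeps the conductor loss negligible, whereas routing it at 1 V would not:

```python
# I^2 * R loss for the same delivered power at two distribution voltages.
# R_path and P_load are assumed values for illustration only.

P_load = 100.0    # watts delivered to the CPU (assumed)
R_path = 1e-3     # ohms of board/connector resistance (assumed)

for V in (12.0, 1.0):
    I = P_load / V              # current required at this voltage
    loss = I**2 * R_path        # power dissipated in the distribution path
    print(f"{V:4.0f} V: I = {I:6.1f} A, conductor loss = {loss:5.2f} W")
#  12 V: I =    8.3 A, conductor loss =  0.07 W
#   1 V: I =  100.0 A, conductor loss = 10.00 W
```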
In addition to Elliot's point about the power required to charge the tiny capacitances associated with each transistor in a GPU or high-performance CPU, consider the size of each transistor.
In the early 1980s people didn't worry much about electrostatic protection, but I started paying attention when I first came across a transistor with a 1 micron gate insulation width (in 1982). It's the electric field strength (volts per metre), not just the voltage, that causes breakdown.
You can get a lot of V/m across a micron.
Now, minimum feature sizes are a couple of orders of magnitude smaller, so connecting the tiny transistors in a CPU's core logic to the traditional 5V supply would simply destroy them.
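The arithmetic is stark. Taking the 1 micron insulation figure at face value, assuming a modern gate oxide of roughly 2 nm, and using ~1 GV/m as a rough textbook breakdown field for SiO\$_2\$ (both assumed figures), the same 5 V goes from comfortable to destructive:

```python
# Electric field E = V / d for 5 V across two oxide thicknesses.
# The 2 nm oxide and ~1 GV/m breakdown field are rough assumed figures.

V = 5.0                       # volts
E_breakdown = 1e9             # V/m, approximate breakdown field of SiO2

oxides = {
    "1 um insulation (early 1980s)": 1e-6,
    "2 nm gate oxide (modern, assumed)": 2e-9,
}

for name, d in oxides.items():
    E = V / d
    verdict = "above" if E > E_breakdown else "below"
    print(f"{name}: E = {E:.1e} V/m ({verdict} ~1 GV/m breakdown)")
# 1 um: 5.0e+06 V/m (below breakdown)
# 2 nm: 2.5e+09 V/m (above breakdown)
```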
I/O transistors used to be built outsize and especially tough, and chips used separate supply rails for the I/O interconnections. But increasingly even these can only tolerate 3.3 V, or down as far as 1.8 V. In FPGAs, pretty much only trailing-edge devices are still 5 V tolerant.