Why is there a clock in my Arduino?
Because that is how computers, microcontrollers, etc., work.
With a 16 MHz clock, each line of my program will take 1/16,000,000 of a second, right?
No.
Even a very, very big line will only take 1/16,000,000 of a second?
No.
The clock defines the speed at which machine code instructions are fetched from memory and executed. Most instructions take 1 clock cycle, but some take more.
One line of C code may be compiled into any number of assembly instructions, which then get converted into machine code (raw numbers). That could be anywhere from 1 assembly instruction to thousands of assembly instructions, depending on what the line does.
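To make that concrete, here is a rough sketch in Arduino-style C++ (the cycle counts in the comments are ballpark assumptions, not measured figures; the exact numbers depend on the compiler, optimization level, and surrounding code):

```cpp
#include <stdint.h>
#include <math.h>

// Each function body is a single line of C, yet compiles to very different
// amounts of machine code on an AVR.

uint8_t addBytes(uint8_t b, uint8_t c) {
  return b + c;        // a handful of instructions: load, add, return
}

uint32_t divide(uint32_t y, uint32_t z) {
  return y / z;        // the AVR has no divide instruction, so the compiler
                       // calls a library routine that takes many cycles
}

float wave(float angle) {
  return sinf(angle);  // software floating point: easily thousands of cycles
}
```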
With an 8 MHz clock, each line will take double the time, right? So the whole "main loop" will take double the time too, right?
Every operation in the chip is governed by the clock. If the clock runs at half the speed, the chip runs at half the speed, so every operation will take twice as long, yes.
Not answered in @Majenko's post: yes, with an 8 MHz clock, each line will take double the time - unless the line waits for something that is not clock-driven, e.g. external input.
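A small Arduino-style sketch of that distinction (the pin number and loop bound are made up for illustration): the first block is pure computation, so halving the clock doubles its duration; the second waits for a button, so its duration is set by the outside world, not by the clock.

```cpp
const uint8_t BUTTON_PIN = 2;  // hypothetical wiring: a button pulls pin 2 LOW

void setup() {
  pinMode(BUTTON_PIN, INPUT_PULLUP);
}

void loop() {
  // Pure computation: at 8 MHz this takes roughly twice as long as at 16 MHz.
  volatile uint16_t sum = 0;
  for (uint16_t i = 0; i < 1000; i++) {
    sum += i;
  }

  // Clock-independent wait: how long this takes depends on when the button
  // is pressed, not on the clock frequency.
  while (digitalRead(BUTTON_PIN) == HIGH) {
    // busy-wait until the pin goes LOW
  }
}
```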
In addition to @Majenko's answer, a CPU has a clock to ensure each instruction completes before the next step starts. A CPU is made of lots of transistors (I found a reference indicating the ATmega is in the range of low millions, give or take an order of magnitude).
During a cycle, the electricity flows through the chip, turning transistors on/off, the results of which turn more transistors on/off, and so on down the line. While this is happening, some parts of the chip hold a "wrong" value - you can think of this as being half-way through a calculation (you've added the ones column and the tens column, and are about to start on the hundreds column). You don't want this to affect the outside world, so (for example) your output pins are latched - held at whatever value they are - until the instruction is complete. How long it takes to complete an instruction varies, but the manufacturer works out the time for the slowest instruction under the worst circumstances.
For the ATmega (which is the chip on the Arduino), Atmel (who designed the chip) has declared this to be 1/20,000,000 of a second - this is 20 MHz.
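To put a number on it: one cycle at 20 MHz is 1/20,000,000 s = 50 ns, and at the 16 MHz clock used on most Arduinos each cycle is 1/16,000,000 s = 62.5 ns.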
Note that not all microprocessors run all their instructions at 1 instruction per cycle - some instructions might take 1, or 2, or 10 cycles. Pipelining makes things even more complicated: a processor might do some of the work (e.g. fetch the next instruction) in one cycle and execute it in the next - but while it is executing instruction 1, it can also fetch the next instruction. To do this, it might need to guess which instruction comes next (in the case of the machine-code equivalent of a "goto", such as is used for loops), and if it guesses wrong, it has to cope with that: throw away the instruction it fetched and fetch the right one, losing a cycle.
The Wikipedia page on instruction pipelining shows an example of a RISC chip pipelining in 5 stages - instruction fetch, instruction decode, execute, memory access, and write-back. So you can have 5 instructions at some stage of execution, overlapping. Until the write-back stage, the instructions have no real effect. You can think of this as an assembly line - it takes 7 minutes to put a widget together, but the work can be broken down into 5 stages, the longest stage taking 2 minutes. Once every two minutes, each partly completed widget is moved along the line to the next station. You get one widget out every two minutes - the "clock" can only tick as fast as the slowest step. If you try to push widgets through any faster, more and more of them will queue up at the bottleneck.
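A rough picture of that five-stage overlap (IF = instruction fetch, ID = instruction decode, EX = execute, MEM = memory access, WB = write-back; the cycle numbers are purely illustrative):

```
Cycle:    1    2    3    4    5    6    7
Instr 1:  IF   ID   EX   MEM  WB
Instr 2:       IF   ID   EX   MEM  WB
Instr 3:            IF   ID   EX   MEM  WB
```

Once the pipeline is full, one instruction finishes every cycle, even though each individual instruction still takes five cycles from fetch to write-back.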