Why is the clock frequency divided by two in the 8085 microprocessor?
One reason to divide a clock by two is to obtain an even 50% duty cycle square wave. It may be that the 8085 internally uses both clock edges, and wouldn't function if one half of the cycle happened to be much shorter than the other.
In the days when the 8085 was new, those nice canned oscillators weren't common, and people often cobbled together clock circuits out of discrete crystals, capacitors, and logic gates. Dividing by two ensures that you have equally spaced rising and falling edges.
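Here is a minimal sketch (plain Python, nothing 8085-specific) of why a divide-by-two toggle stage cleans up the duty cycle: the output only changes on rising edges of the input, so both halves of the output are exactly one input period long no matter how lopsided the input waveform is.

```python
# Toy model of a divide-by-two toggle flip-flop. The input clock's
# high/low split doesn't matter; only the rising-edge times do.
def divide_by_two(rising_edge_times):
    out, transitions = 0, []
    for t in rising_edge_times:
        out ^= 1                    # toggle on every rising edge
        transitions.append((t, out))
    return transitions

# Rising edges every ~163 ns (roughly a 6.144 MHz crystal), regardless
# of how uneven the input's high and low phases are:
print(divide_by_two([0, 163, 326, 489]))
# [(0, 1), (163, 0), (326, 1), (489, 0)] -> clean 50% duty cycle at ~3.072 MHz
```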
As for 6.144 MHz, you will find that it can be divided by an integer to get the common baud rates, at least up to 38400.
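For the curious, a quick bit of arithmetic (my own numbers, not from any datasheet) shows this, even after the usual 16x UART sampling clock is taken out:

```python
# 6.144 MHz against the common baud rates, assuming a typical 16x
# oversampling UART clock. Every divisor comes out as a whole number.
xtal = 6_144_000
for baud in (300, 600, 1200, 2400, 4800, 9600, 19200, 38400):
    print(f"{baud:>6} baud -> divisor {xtal / (16 * baud):g}")
# e.g. 9600 baud -> divisor 40, 38400 baud -> divisor 10
```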
Follow-up:
Looking at an Intel data sheet for the 8085, I find three interesting statements:
The 8085 incorporates all of the features that the 8224 clock generator and 8228 system controller provided for the 8080A
X1 and X2: Are connected to a crystal, LC or RC network to drive the internal clock generator. The input frequency is divided by 2 to give the processor's internal operating frequency.
CLK: Clock output for use as a system clock. The period of CLK is twice the X1, X2 input period.
So, speculation about using the odd edges of the clock to move data around internally aside, it becomes apparent that when Intel designed the 8085, they were eliminating the need for a separate clock generator chip by integrating that function into the CPU. Dividing the X1-X2 timebase in half before outputting it as CLK ensures that the system gets a nice, even duty cycle, if nothing else.
At the time this chip was designed, CPU designers used as few transistors as possible, so that the whole processor would fit on the dies that could be manufactured.
I suspect that practically every "register" (both programmer-visible instruction-set registers and also internal microarchitecture latches) in a CPU of that era stored data in a transparent gated D latch or something similar. Nowadays, there are plenty of transistors on a chip, so it's simpler to use full master-slave D flip-flops, even though they use twice as many transistors.
Many instructions take data from some register A, combine it with some other data with the ALU, and store the result back in register A. That is pretty easy to do if register A is implemented with a full master-slave D flip-flop.
But if register A is a transparent gated D latch, you need non-overlapping clocks. You use a pulse on one clock to store some intermediate result somewhere (while register A holds its output constant), and then a pulse on another clock to load register A with the new value (while the intermediate register holds its output constant).
This requires a 2-phase clock. The easiest way to make a non-overlapping 2-phase clock (in those days when transistors were scarce) was a little external circuit that takes an input clock and divides it by two.
As time went on, people figured out how to pack more and more transistors onto an IC. So CPU designers pulled more and more of the support circuitry that surrounds the CPU in a full computer system onto the CPU chip itself.
Reading between the lines of the Wikipedia clock signal article, I get the impression that the people who designed the 8085 and the 6502 and other chips of that era had just a little more room than the previous generation of integrated CPUs, and they decided the best use of that room was to put that little external clock-divider circuit on-chip. But they kept all the registers as the same gated D latches as before.
So that's why the clock frequency is divided by two. You can think of the first external clock pulse generating a pulse on the phase_one internal clock signal to update that intermediate result register, and the second pulse from the external clock generating a pulse on the phase_two internal clock signal to update the programmer-visible register.
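A rough software analogy of that two-phase dance (names like phase_one are purely illustrative, not taken from Intel's schematics): the temporary latch updates while register A holds still, then register A updates while the temporary latch holds still.

```python
# Illustrative model of a latch-based read-modify-write: the two
# non-overlapping phases keep the ALU from chasing its own output.
def read_modify_write(register_a, operand, alu):
    temp = None
    for phase in ("phase_one", "phase_two"):
        if phase == "phase_one":
            temp = alu(register_a, operand)  # temp latch open, register A held
        else:
            register_a = temp                # register A open, temp latch held
    return register_a

# 8-bit add, the classic "A <- A + operand" case:
print(hex(read_modify_write(0x12, 0x34, lambda a, b: (a + b) & 0xFF)))  # 0x46
```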
There are lots of reasons to split the instruction cycle into multiple clock cycles. A good example is accessing the main memory bus.
Most modern processors are von Neumann architectures; that is, their code and data both live in the same memory. Well, if you want to read an instruction, and that instruction is going to load a variable from memory... that's two memory accesses. But most memory is single-ported (that is, it can only do one read or write per cycle). So how do you read the instruction and also read your variable?
The solution is to use a two-stage instruction cycle. The first stage will fetch the instruction from memory, and the second stage can then read (or write!) the variable from main memory.
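A toy model of that split (not actual 8085 microcode, just the idea): the single memory port serves the instruction fetch on one clock cycle and the operand access on the next.

```python
# One "instruction cycle" = two clock cycles against a single-port memory.
memory = {0x00: ("LOAD", 0x10),   # instruction at address 0x00
          0x10: 42}               # the variable it loads

def run_one_instruction(pc):
    opcode, operand_addr = memory[pc]      # cycle 1: instruction fetch
    if opcode == "LOAD":
        return memory[operand_addr]        # cycle 2: data read on the same port

print(run_one_instruction(0x00))  # 42
```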
Some older chips went even further. Back in the day, if your chip had a 16-bit address space but the external bus was only 8 bits wide, then you would be familiar with the Address Latch Enable (ALE) signal. One clock cycle puts the upper 8 bits of the 16-bit address on the bus (where an external latch captures them), and the next clock cycle puts out the lower 8 bits. A third cycle could then read or write the variable from or to memory.
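Sketched out (purely illustrative, not the 8085's exact bus timing), the latch is what lets a 16-bit address squeeze through an 8-bit bus over two cycles:

```python
# An ALE-style external latch holds the first address byte so the full
# 16-bit address is present when the memory access finally occurs.
def multiplexed_bus_access(address):
    latched_byte = address >> 8        # cycle 1: ALE pulses, latch grabs the high byte
    bus = address & 0xFF               # cycle 2: the bus now carries the low byte
    return (latched_byte << 8) | bus   # cycle 3: memory sees the complete address

print(hex(multiplexed_bus_access(0xBEEF)))  # 0xbeef
```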
There are other, better reasons to have an instruction cycle that is multiple clock cycles in length. One of the best reasons is pipelining. This is a trick that modern processors use to more fully exploit all the execution units available in a chip; for example, while one instruction is being executed, the next is being fetched at the same time.
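As a rough illustration of the overlap (two stages only, and the mnemonics are just stand-ins):

```python
# Two-stage pipeline: while instruction N executes, instruction N+1 is fetched.
program = ["ADD", "SUB", "ANA", "ORA"]

fetched = None
for cycle in range(len(program) + 1):
    executing = fetched                                          # stage 2: execute
    fetched = program[cycle] if cycle < len(program) else None   # stage 1: fetch
    print(f"cycle {cycle}: fetching {fetched}, executing {executing}")
```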