Why is the USB cable maximum length shorter than in RS232?
Speed of transmission matters because USB is half-duplex: to transmit a response, the bus must be turned around and data sent in the other direction. All transfers are controlled by the host: it sends out data and waits for an acknowledgement or a response, and the device then has a certain (fairly short) time in which to respond. This time is roughly the time taken for two signal trips along a 5m cable.
(I can't find references right this second, but the relevant spec documents are public)
Edit: thanks to psmears for finding this section
Cables and Long-Haul Solutions
- Why are there cable length limits, and what are they?
A: The cable length was limited by a cable delay spec of 26ns to allow for reflections to settle at the transmitter before the next bit was sent. Since USB uses source termination and voltage-mode drivers, this has to be the case, otherwise reflections can pile up and blow the driver. This does not mean the line voltage has fully settled by the end of the bit with worst-case undertermination; however, there has been enough damping by the end of the bit that the reflection amplitude has been reduced to manageable levels. The low speed cable delay was limited to 18ns to keep transmission line effects from impacting low speed signals.
- I want to build a cable longer than 5 meters, why won't this work?
A: Even if you violated the spec, it literally wouldn't get you very far. Assuming worst-case delay times, a full speed device at the bottom of 5 hubs and cables has a timeout margin of 280ps. Reducing this margin to 0ps would only give you an extra 5cm, which is hardly worth the trouble.
So my answer is only half-right: the round trip limit is for a worst-case chain of hubs and cables, for a total depth of 25m.
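As a back-of-the-envelope check, those figures hang together. Here is a quick sketch in Python; the only number not taken from the FAQ quote above is the 12 Mbit/s full-speed bit rate:

```python
# Rough check of the figures quoted from the USB developers FAQ above.
# Only the 12 Mbit/s full-speed bit rate is added here; everything else
# (26 ns per 5 m segment, 280 ps timeout margin) comes from the quote.

FULL_SPEED_BIT_NS = 1000 / 12      # ~83.3 ns per full-speed bit
CABLE_DELAY_NS = 26.0              # one-way delay budget for one 5 m segment
CABLE_LENGTH_M = 5.0

# Reflection settling: a reflection makes a round trip on the segment and
# must have died down before the next bit is driven.
reflection_round_trip_ns = 2 * CABLE_DELAY_NS                     # 52 ns
print(f"{reflection_round_trip_ns:.0f} ns round trip vs {FULL_SPEED_BIT_NS:.1f} ns bit time")

# Timeout margin: converting the 280 ps of slack into extra cable at the
# same per-metre delay shows why a longer cable buys almost nothing.
delay_per_m_ns = CABLE_DELAY_NS / CABLE_LENGTH_M                   # 5.2 ns/m
extra_length_cm = 0.280 / delay_per_m_ns * 100
print(f"extra cable bought by the margin: ~{extra_length_cm:.0f} cm")        # ~5 cm
```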
Dan Neely is also right that USB was always supposed to be the lowest-cost solution for "slow" peripherals like keyboards, mice, printers, etc. If you wanted full duplex for more speed and more distance, 100BASE-T Ethernet is the natural choice.
See this page, https://superuser.com/questions/64744/maximum-length-of-a-usb-cable.
Q1: How long of a cable can I use to connect my device? A1: In practice, the USB specification limits the length of a cable between full speed devices to 5 meters (a little under 16 feet 5 inches). For a low speed device the limit is 3 meters (9 feet 10 inches).
Q2: Why can't I use a cable longer than 3 or 5m? A2: USB's electrical design doesn't allow it. When USB was designed, a decision was made to handle the propagation of electromagnetic fields on USB data lines in a way that limited the maximum length of a USB cable to something in the range of 4m. This method has a number of advantages and, since USB is intended for a desktop environment, the range limitations were deemed acceptable. If you're familiar with transmission line theory and want more detail on this topic, take a look at the USB signals section of the developers FAQ.
It's not really possible to "buffer" USB, at least not in the usual sense of the word. Typically, buffering means electrical amplification and perhaps signal regeneration.
With USB, the host drives the entirety of the bus. A host sends out a request, and the device has to issue a response to the host. The beginning of the response has to arrive at the host a certain time after the request has finished transmitting. With too long of a cable, the propagation delay is too long for the response to reach the host in time.
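To make that deadline concrete, here is a toy model. The 5.2 ns/m figure matches the 26 ns / 5 m cable delay quoted earlier, but the turnaround and timeout values are placeholders chosen only to show the shape of the check, not USB spec numbers:

```python
# Toy model of the host's response deadline. The 5.2 ns/m figure matches the
# 26 ns / 5 m cable delay quoted earlier; the turnaround and timeout values
# below are placeholders, not spec values.

DELAY_PER_M_NS = 5.2

def response_in_time(cable_m, device_turnaround_ns, host_timeout_ns):
    """True if the response can get back to the host before it gives up:
    the request propagates down, the device turns the bus around, the reply propagates back."""
    round_trip_ns = 2 * cable_m * DELAY_PER_M_NS + device_turnaround_ns
    return round_trip_ns <= host_timeout_ns

# With these placeholder numbers a 5 m cable fits the budget and a 30 m run does not.
print(response_in_time(5, device_turnaround_ns=50, host_timeout_ns=160))   # True
print(response_in_time(30, device_turnaround_ns=50, host_timeout_ns=160))  # False
```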
So there are workarounds, but none of them involve simple "buffering", since buffering only adds further delay; instead, we need to somehow make the host more tolerant of a longer delay.
There are two classes of workarounds:
Workarounds that insert physical or virtual hubs. If a host enumerates a hub on the bus, the hub itself adds an extra delay, and there's another potentially full-length cable between the hub and the host. Any requests for devices that attach downstream from the hub are scheduled with additional delays.
You can insert a single-port hub every 4m of the cable, with up to 7 hubs in series. The limitation is 7 levels of hubs from the host to the ultimate device, so if there are any hubs upstream of your contraption, you need to reduce the number of hubs accordingly. Many USB hosts include a single level of internal hub, so a realistic limit would be 28m of cable, with 6 hubs in series (see the sketch below for the arithmetic). All hubs but the first one will have to pretend to be self-powered.
You can add virtual hubs, with a beefier transceiver with preemphasis, right at the plug that goes into the host, then transmit USB traffic over a longer cable. As long as the signals received by the device at the end of such an extended cable are within spec, and as long as your receiver can recover the data sent by the standard device over a long cable, you'll be OK. The virtual hubs are added so that the host allows the long delay - but of course there are no physical hubs, just an impersonation of them.
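The length arithmetic for the physical-hub chain works out as follows; this sketch simply uses the figures from this answer (4 m segments, 7 hub levels, one of them assumed to be the host's internal hub):

```python
# Length arithmetic for the daisy chain of physical hubs described above.
# The 4 m segments and 7 hub levels are the figures used in this answer,
# with one level assumed to be taken by the host's own internal hub.

SEGMENT_M = 4            # keep each run comfortably under the 5 m limit
MAX_HUB_LEVELS = 7
INTERNAL_HUB_LEVELS = 1  # many hosts already spend one level internally

hubs_in_series = MAX_HUB_LEVELS - INTERNAL_HUB_LEVELS    # 6 hubs you can add
cable_segments = hubs_in_series + 1                       # one run between each pair
print(f"{hubs_in_series} hubs in series, {cable_segments * SEGMENT_M} m of cable total")  # 6 hubs, 28 m
```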
Workarounds that emulate a device that appears "slow" at a higher level of protocol. That's how some Cat-5 USB "extenders" work. There are five partners here: the real host (rHost), an emulated device (eDev) that it sees, a long cable, an emulated host (eHost), and the real device (rDev) at the far end of the cable that sees the eHost.
Initially, the eDev pretends not to be there. At some time the eHost sees that an rDev was plugged in. It enumerates it and forwards the data to the eDev. The eDev then emulates a plug-in event, and the rHost enumerates it. The rHost believes that it sees rDev, but it's only eDev being there, pretending. Similarly, the rDev thinks that it sees an rHost, but it's only an eHost being there, pretending.
Eventually, the rHost wants to issue some transfers to the rDev it believes is there, to make some use of it. For IN transfers, the eDev pretends to have no data (replies with a NAK). The transfer request is forwarded to the eHost, which re-executes it with rDev. The results of this are forwarded back to eDev, which uses the results the next time the host attempts the transfer.
For OUT transfers, the eDev has to guess what the behavior of rDev would be. There are various heuristics and behaviors that can be attempted here. One way is for eDev to always receive the data and reply with an ACK. The transfer is forwarded to eHost, which then replays the transfer to rDev. Ideally, rDev will eventually consume the data and ACK it. If this doesn't succeed, or if rDev replies with a STALL, the best eDev can do is behave that way on the next transfer from the host. Alternatively, eDev can always NAK the transfer, with the usually correct assumption that the host will simply retry the identical transfer later. Even though the original transfer was NAK-ed, it is forwarded to eHost, which then executes the transfer with rDev. Whatever rDev replies then becomes eDev's reply as soon as it learns of it.
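A minimal sketch of that deferred-reply idea, assuming a hypothetical EmulatedDevice with an asynchronous link to the eHost; the class, method names, and callback API are made up for illustration, and a real extender also has to handle data toggles, endpoint types, timing, and error recovery:

```python
# Toy model of the NAK-and-forward behavior described above. The link API,
# class and method names are invented for illustration; data toggles, endpoint
# types, timing and error recovery are all ignored.

ACK, NAK, STALL = "ACK", "NAK", "STALL"

class EmulatedDevice:
    """eDev: answers rHost within the bus timing budget while the real
    transfer is replayed at the far end of the long cable by eHost."""

    def __init__(self, link_to_ehost):
        self.link = link_to_ehost   # slow path over the Cat-5 run
        self.in_results = {}        # endpoint -> (handshake, data) learned from rDev
        self.out_results = {}       # endpoint -> handshake learned from rDev

    def handle_in(self, ep):
        # If rDev's answer from an earlier forwarded attempt is ready, replay it.
        if ep in self.in_results:
            return self.in_results.pop(ep)
        # Otherwise pretend to have no data (NAK) and ask eHost to run the
        # real IN transfer with rDev; rHost will retry later.
        self.link.forward_in(ep, on_done=lambda res: self.in_results.update({ep: res}))
        return (NAK, None)

    def handle_out(self, ep, data):
        # Conservative variant: NAK now, forward the data, and report whatever
        # rDev said (ACK, NAK or STALL) when rHost retries the same transfer.
        if ep in self.out_results:
            return self.out_results.pop(ep)
        self.link.forward_out(ep, data, on_done=lambda hs: self.out_results.update({ep: hs}))
        return NAK
```

This sketch implements the always-NAK variant for OUT transfers; the optimistic always-ACK variant described above differs only in acknowledging the data up front and reflecting any failure on a later transfer.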
Realistic implementations will start with conservative heuristics that involve a full round trip to rDev for all transfers that can be postponed with a NAK. As the transfers proceed, rDev's expected behavior can be learned, and eDev can become less conservative. The "extender" can also use knowledge of standard USB classes, along with vendor-specific class/device knowledge, blacklists, and whitelists, to offer better performance.