Is there a way to flush a POSIX socket?

If you have wrapped the socket in a stdio stream with fdopen(), you can use fflush() to push out the stream's buffered data, but I'm guessing you mean network sockets themselves. There isn't really a concept of flushing those. The closest things are:

  1. At the end of your session, calling shutdown(sock, SHUT_WR) to close out writes on the socket.

  2. On TCP sockets, disabling the Nagle algorithm with the TCP_NODELAY socket option, which is generally a terrible idea and will not reliably do what you want, even if it seems to take care of it on initial investigation.

Whatever issue is making you want a 'flush' is most likely better handled at the level of your own application protocol.
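For option 1, a rough sketch of winding down a connection could look like this (sock is assumed to be a connected TCP socket descriptor; error handling is omitted):

#include <sys/socket.h>
#include <unistd.h>

/* No more writes from our side; the peer sees EOF once it has read our data. */
shutdown(sock, SHUT_WR);

/* Keep reading until the peer closes its side, then close the socket fully. */
char buf[4096];
while (recv(sock, buf, sizeof buf, 0) > 0)
    ;   /* discard (or process) whatever the peer still sends */
close(sock);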


In RFC 1122 the name of the thing you are looking for is "PUSH". However, no relevant TCP API seems to expose "PUSH" to applications. Alas, no luck.

Some answers and comments deal with the Nagle algorithm. Most of them seem to assume that the Nagle algorithm delays each and every send. This assumption is not correct. Nagle delays sending only when at least one of the previous packets has not yet been acknowledged (http://www.unixguide.net/network/socketfaq/2.11.shtml).

To put it differently: TCP will send the first packet (of a series of packets) immediately. Only if the connection is slow and your computer does not receive a timely acknowledgement will Nagle delay sending subsequent data until (whichever occurs first)

  • a time-out is reached or
  • the last unacknowledged packet is acknowledged or
  • your send buffer is full or
  • you disable Nagle or
  • you shut down the sending direction of your connection

A good mitigation is to avoid sending follow-up data in the first place wherever possible. This means: if your application calls send() more than once to transmit a single compound request, try to rewrite your application. Assemble the compound request in user space, then call send() once, as in the sketch below. This also saves context switches, which are much more expensive than most user-space operations.
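For example, instead of calling send() once per field, assemble the request first and send it in one go. A sketch, assuming sock is a connected TCP socket and the HEADER/BODY/END layout is just an invented wire format:

#include <stdio.h>
#include <sys/socket.h>

/* header and body are placeholders for whatever fields make up your request. */
const char *header = "some-header";
const char *body   = "some-body";

char request[512];
int len = snprintf(request, sizeof request,
                   "HEADER %s\nBODY %s\nEND\n", header, body);

/* One send() for the whole compound request: a single syscall, and Nagle
   sees one chunk of data instead of several tiny ones. */
send(sock, request, (size_t) len, 0);

If the pieces already sit in separate buffers, writev() gives the same single-call effect without copying them together first.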

Besides, when the send buffer contains enough data to fill a maximum-size segment, Nagle does not delay either. This means: if the data you are sending is big enough to fill a full-size packet, TCP will send it as soon as possible, no matter what.

To sum it up: Nagle is not the brute-force approach to avoiding small packets that some consider it to be. On the contrary: to me it seems to be a useful, dynamic and effective approach to maintaining both good response times and a good ratio of user data to header data. That being said, you should know how to handle it efficiently.


What about setting TCP_NODELAY and then resetting it back? This could probably be done just before sending important data, or when we are done sending a message.

/* These pieces may be coalesced by Nagle; that is fine here. */
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);
send(sock, "notimportant", ...);

/* Enable TCP_NODELAY so the next send() is pushed out immediately. */
int flag = 1;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));
send(sock, "important data or end of the current message", ...);

/* Re-enable Nagle for subsequent traffic. */
flag = 0;
setsockopt(sock, IPPROTO_TCP, TCP_NODELAY, (char *) &flag, sizeof(int));

As the Linux man page says:

TCP_NODELAY ... setting this option forces an explicit flush of pending output ...

So it would probably be better to set it after the message, but I am not sure how it works on other systems.

Tags: C, Sockets, POSIX