Our current nginx config uses a separate HTTP/1.0 TCP connection for each request from nginx->varnish. These happen over the loopback interface, so there's virtually zero latency impact from the handshakes, but it would still be more efficient in general to use HTTP/1.1 keepalives and reduce the bloat of TIME_WAIT sockets on the servers.
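For reference, a minimal sketch of what the keepalive setup looks like in nginx terms. This is illustrative only: the upstream name is made up, the address is taken from the netstat output below, and the idle-connection cap of 4 matches the patch described further down; it is not the literal production config.

    # Hypothetical upstream block; name and keepalive count are placeholders.
    upstream local_varnish {
        server 10.64.0.102:80;   # local varnish, as seen in the netstat output below
        keepalive 4;             # idle keepalive connections cached per worker process
    }

    # In the location/server block that proxies to varnish:
    proxy_pass http://local_varnish;
    proxy_http_version 1.1;          # upstream keepalive requires HTTP/1.1
    proxy_set_header Connection "";  # clear the default "Connection: close" header

Note that the keepalive value is per worker process, which is where the 32 x 4 = 128 expectation below comes from.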
Typical socket states for nginx->varnish local connections currently (cp1065 has 32 nginx worker processes, and each would have at most one connection established at any given moment, constantly breaking and re-making it):
root@cp1065:~# netstat -an|grep '10.64.0.102:80[ ]*10.64.0.102'|awk '{print $6}'|sort|uniq -c
     32 ESTABLISHED
      9 FIN_WAIT2
  63688 TIME_WAIT
With the keepalive patch manually applied (each of the 32 worker processes spawns parallel keepalive connections as necessary, up to a maximum of 4 idle keepalive connections per process before it starts pruning them, so we'd expect at least 128 ESTABLISHED (32 x 4) at all times, and far fewer TIME_WAIT):
root@cp1065:~# netstat -an|grep '10.64.0.102:80[ ]*10.64.0.102'|awk '{print $6}'|sort|uniq -c
    144 ESTABLISHED
      2 FIN_WAIT2
   2053 TIME_WAIT