Here’s a problem that had me puzzled for longer than I care to admit. I have a C# application that periodically makes rather lightweight calls on a web service. To be a good UI citizen, the app makes the calls on a background thread, using the standard .NET HttpWebRequest mechanism. The thread maintains a single queue of requests, processing them one at a time and creating a new HttpWebRequest object for each; results are returned to the UI (when necessary) via a callback.
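To make the setup concrete, here is a minimal sketch of that worker pattern. It is not the application's actual code; the class and method names are illustrative, and marshaling the callback onto the UI thread (plus error handling) is omitted.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Net;
using System.Threading;

// Illustrative sketch of the background request worker described above.
class RequestWorker
{
    private readonly Queue<Tuple<string, Action<string>>> queue =
        new Queue<Tuple<string, Action<string>>>();
    private readonly AutoResetEvent signal = new AutoResetEvent(false);

    public RequestWorker()
    {
        var thread = new Thread(Run) { IsBackground = true };
        thread.Start();
    }

    // Called from the UI thread; the callback receives the response body.
    public void Enqueue(string uri, Action<string> callback)
    {
        lock (queue) queue.Enqueue(Tuple.Create(uri, callback));
        signal.Set();
    }

    private void Run()
    {
        while (true)
        {
            signal.WaitOne();
            Tuple<string, Action<string>> item;
            while (TryDequeue(out item))
            {
                // A fresh HttpWebRequest per queued call, as described above.
                var request = (HttpWebRequest)WebRequest.Create(item.Item1);
                request.Timeout = 10000;  // the 10-second timeout mentioned below
                using (var response = request.GetResponse())
                using (var reader = new StreamReader(response.GetResponseStream()))
                    item.Item2(reader.ReadToEnd());
            }
        }
    }

    private bool TryDequeue(out Tuple<string, Action<string>> item)
    {
        lock (queue)
        {
            if (queue.Count > 0) { item = queue.Dequeue(); return true; }
            item = null;
            return false;
        }
    }
}
```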
My problem was that intermittently the calls were timing out (I’d set a 10-second timeout on them). Furthermore, when I examined the server logs, the requests were arriving and being processed, but often much later than they were sent, sometimes by as much as a minute. Yet when I attached a sniffer to the client machine, it showed the request packets leaving the machine right when my program was sending them. What on earth was going on?
Of course, I haven’t told you all the relevant details. The application also has two other background threads making long-polling calls on the same web server, to allow the server to “push” information to the client. These calls return as soon as the server has something to share with the client, or return with no results once a minute has passed. One of the threads was using HttpWebRequest directly, and the other was using WebClient, but that turns out to be just a higher-level interface on top of HttpWebRequest.
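Each of those long-poll threads looked roughly like the following sketch. The endpoint path and handler delegate are placeholders, not the real names, and error handling is left out for brevity.

```csharp
using System;
using System.IO;
using System.Net;

static class LongPoller
{
    // Sketch of one long-poll thread; "/poll" and handleUpdate are made up.
    public static void PollLoop(string baseUri, Action<string> handleUpdate)
    {
        while (true)
        {
            var request = (HttpWebRequest)WebRequest.Create(baseUri + "/poll");
            // The server holds the request open until it has data to push,
            // or returns an empty result after about a minute.
            request.Timeout = 90000;  // comfortably longer than the server's 60s
            using (var response = request.GetResponse())
            using (var reader = new StreamReader(response.GetResponseStream()))
            {
                string body = reader.ReadToEnd();
                if (body.Length > 0)
                    handleUpdate(body);
            }
        }
    }
}
```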
Much closer examination of the sniffer logs revealed that the short request packets were indeed being sent when the program requested them to be, but they were being sent over one of the long-poll connections! So although a packet would arrive at the server in a timely manner, its processing didn’t begin until the long-poll that preceded it on that connection returned. So much for this programmer’s mental model of HttpWebRequest objects on different threads being independent.
It turns out that .NET internally limits the number of simultaneous connections to the same server. And you guessed it, the limit is 2. This apparently stems from language in RFC 2616 saying that a client “SHOULD NOT” maintain more than 2 connections with a server. Never mind that any modern web browser is perfectly happy to open 8 or more connections to a server to get good performance loading content-rich web pages from it. So HttpWebRequest doesn’t open a third connection, and it doesn’t complain; it just immediately reuses one of the existing two connections, without regard to the fact that there’s already a request pending on it.
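You can inspect the limit directly. In a plain desktop client app, a sketch like this should show the default of 2, unless something has already overridden it in code or in app.config:

```csharp
using System;
using System.Net;

class ShowLimit
{
    static void Main()
    {
        // The process-wide default applied to new ServicePoints;
        // 2 in an ordinary .NET Framework client application.
        Console.WriteLine(ServicePointManager.DefaultConnectionLimit);
    }
}
```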
The solution, fortunately, is easy—raise the connection limit for the host:
ServicePointManager.FindServicePoint(new Uri(uri)).ConnectionLimit = 8;
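If you'd rather raise the limit process-wide instead of per host, there's also a global default; a sketch, with the caveat that it only affects ServicePoints created after it runs, so it should be set early, before the first request:

```csharp
using System.Net;

static class Startup
{
    public static void ConfigureConnections()
    {
        // Applies to ServicePoints created after this line, so call this
        // before making any requests. 8 is just the value chosen here.
        ServicePointManager.DefaultConnectionLimit = 8;
    }
}
```

The same setting can also be made declaratively in app.config, under the system.net/connectionManagement element, which keeps the policy out of code entirely.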
In retrospect, this discovery solved another mysterious problem we’d had. The UI contains a couple dozen thumbnails, all of them BitmapImage objects whose source points at that same web server. When the UI started up, most of the images would display quickly, but some remained blank for as long as a minute. We now see that the requests for those images must have been stuck behind one of those long-poll connections.