Some clarification needed about synchronous versus asynchronous asio operations

The Boost.Asio documentation really does a fantastic job explaining the two concepts. As Ralf mentioned, Chris also has a great blog describing asynchronous concepts. The parking meter example explaining how timeouts work is particularly interesting, as is the bind illustrated example.

First, consider a synchronous connect operation:

[figure: synchronous connect]

The control flow is fairly straightforward here: your program invokes some API (1) to connect a socket. The API uses an I/O service (2) to perform the operation in the operating system (3). Once the operation is complete (4 and 5), control returns to your program (6) with some indication of success or failure.
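
A minimal sketch of the blocking case with Boost.Asio might look like this; the address and port are hypothetical and serve only to illustrate the numbered steps above.

```cpp
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;               // the I/O service (2)
    boost::asio::ip::tcp::socket socket(io_service);

    // Hypothetical peer, purely for illustration.
    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 8080);

    boost::system::error_code ec;
    socket.connect(endpoint, ec);   // (1) blocks while the OS does the work (3-5)

    // (6) control returns with an indication of success or failure.
    std::cout << (ec ? "connect failed: " + ec.message()
                     : std::string("connected")) << std::endl;
    return 0;
}
```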

The analogous asynchronous operation has a completely different control flow:

[figure: asynchronous connect]

Here, your application initiates the operation (1) using the same I/O service (2), but the control flow is inverted: the call returns immediately, and completion of the operation causes the I/O service to notify your program through a completion handler. The time between step (3) and completion, which in the synchronous case was spent entirely inside the blocking connect call, is now available to your program for other work.
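
The asynchronous counterpart of the earlier sketch could look roughly like this; again the endpoint is hypothetical, and the lambda is the completion handler the I/O service invokes.

```cpp
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);

    // Hypothetical peer, purely for illustration.
    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("127.0.0.1"), 8080);

    // (1) initiate the operation; async_connect returns immediately.
    socket.async_connect(endpoint,
        [](const boost::system::error_code& ec)
        {
            // Completion handler: invoked by the I/O service once the
            // operating system reports that the connect has finished.
            std::cout << (ec ? "connect failed: " + ec.message()
                             : std::string("connected")) << std::endl;
        });

    // The program could do other work here; run() processes completions.
    io_service.run();
    return 0;
}
```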

You can see the synchronous case is naturally easier for most programmers to grasp because it follows the traditional control flow paradigm. The inverted control flow used by asynchronous operations is harder to understand: it often forces your program to split an operation into separate start and handle methods, with the logic shifted between them. However, once you have a basic understanding of this control flow you will realize how powerful the concept really is. Some of the advantages of asynchronous programming are:

  • Decouples threading from concurrency. Take a long-running operation: in the synchronous case you would often create a separate thread for it to prevent an application's GUI from becoming unresponsive. This works fine at a small scale, but quickly falls apart once more than a handful of threads are involved.

  • Increased Performance. The thread-per-connection design simply does not scale. See the C10K problem.

  • Composition (or chaining). Higher-level operations can be composed of multiple completion handlers. Consider transferring a JPEG image: the protocol might dictate that the first 40 bytes contain a header describing the image size, shape, and perhaps some other information. One completion handler can send this header and then initiate the second operation to send the image data. The higher-level operation sendImage() does not need to know, or care, about the handler chaining used to implement the data transfer; a sketch of this follows the list.

  • Timeouts and cancel-ability. There are platform-specific ways to time out a long-running operation (e.g. SO_RCVTIMEO and SO_SNDTIMEO). Using asynchronous operations lets a deadline_timer cancel long-running operations on all supported platforms; see the second sketch after this list.
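
As a rough illustration of composing handlers into a higher-level operation, here is a sketch of the JPEG transfer described above. The class name, the 40-byte header layout, and the send_image() naming are assumptions made for this example, not an Asio API.

```cpp
#include <boost/asio.hpp>
#include <array>
#include <functional>
#include <vector>

// Hypothetical composed operation: send a fixed-size header, then the
// image payload, reporting overall success to a single user callback.
class image_sender
{
public:
    image_sender(boost::asio::ip::tcp::socket& socket,
                 std::array<char, 40> header,
                 std::vector<char> payload)
        : socket_(socket), header_(header), payload_(std::move(payload)) {}

    // The caller sees only this one operation.
    void send_image(std::function<void(const boost::system::error_code&)> on_done)
    {
        on_done_ = std::move(on_done);

        // First link in the chain: write the 40-byte header.
        boost::asio::async_write(socket_, boost::asio::buffer(header_),
            [this](const boost::system::error_code& ec, std::size_t)
            {
                handle_header_sent(ec);
            });
    }

private:
    void handle_header_sent(const boost::system::error_code& ec)
    {
        if (ec) { on_done_(ec); return; }

        // Second link: the header handler initiates sending the image data.
        boost::asio::async_write(socket_, boost::asio::buffer(payload_),
            [this](const boost::system::error_code& write_ec, std::size_t)
            {
                on_done_(write_ec);   // the whole composed operation is done
            });
    }

    boost::asio::ip::tcp::socket& socket_;
    std::array<char, 40> header_;
    std::vector<char> payload_;
    std::function<void(const boost::system::error_code&)> on_done_;
};
```

The caller only ever sees send_image() and a single completion callback; the chaining of the header handler into the payload write remains an implementation detail.

And here is a sketch of the timeout pattern: a deadline_timer runs alongside the asynchronous connect, and whichever finishes first cancels the other. The endpoint and the five-second deadline are arbitrary choices for the example.

```cpp
#include <boost/asio.hpp>
#include <iostream>

int main()
{
    boost::asio::io_service io_service;
    boost::asio::ip::tcp::socket socket(io_service);
    boost::asio::deadline_timer timer(io_service);

    // Hypothetical peer, purely for illustration.
    boost::asio::ip::tcp::endpoint endpoint(
        boost::asio::ip::address::from_string("10.0.0.1"), 8080);

    // Start the potentially long-running operation.
    socket.async_connect(endpoint,
        [&](const boost::system::error_code& ec)
        {
            timer.cancel();  // connect finished first; stop the deadline
            std::cout << (ec ? "connect: " + ec.message()
                             : std::string("connected")) << std::endl;
        });

    // Arm a 5-second deadline that cancels the connect if it expires first.
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait(
        [&](const boost::system::error_code& ec)
        {
            if (!ec)               // timer expired (was not cancelled)
                socket.cancel();   // connect handler runs with operation_aborted
        });

    io_service.run();
    return 0;
}
```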


Of course, in both kinds of operation there is always the risk that the program flow stops indefinitely due to some circumstance (hence the use of timers), but I would like to know some more authoritative opinions on this matter.

My personal experience using Asio stems from the scalability aspect. Writing software for supercomputers requires a fair amount of care when dealing with limited resources such as memory, threads, sockets, etc. Using a thread-per-connection for ~2 million simultaneous operations is a design that is dead on arrival.
