# Running Asynchronous Code
An HTTP server should be able to serve multiple clients concurrently; that is, it should not wait for previous requests to complete before handling the current request. The book solves this problem by creating a thread pool where each connection is handled on its own thread. Here, instead of improving throughput by adding threads, we'll achieve the same effect using asynchronous code.
Let's modify `handle_connection` to return a future by declaring it an `async fn`:
```rust
async fn handle_connection(mut stream: TcpStream) {
    //<-- snip -->
}
```
Adding `async` to the function declaration changes its return type from the unit type `()` to a type that implements `Future<Output=()>`.
If we try to compile this, the compiler warns us that it will not work:
```console
$ cargo check
    Checking async-rust v0.1.0 (file:///projects/async-rust)
warning: unused implementer of `std::future::Future` that must be used
  --> src/main.rs:12:9
   |
12 |         handle_connection(stream);
   |         ^^^^^^^^^^^^^^^^^^^^^^^^^^
   |
   = note: `#[warn(unused_must_use)]` on by default
   = note: futures do nothing unless you `.await` or poll them
```
Because we haven't `await`ed or `poll`ed the result of `handle_connection`, it'll never run. If you run the server and visit `127.0.0.1:7878` in a browser, you'll see that the connection is refused; our server is not handling requests.
We can't `await` or `poll` futures from within synchronous code on its own. We'll need an asynchronous runtime to handle scheduling and running futures to completion. Please consult the section on choosing a runtime for more information on asynchronous runtimes, executors, and reactors. Any of the runtimes listed there will work for this project, but for these examples we've chosen to use the `async-std` crate.
## Adding an Async Runtime
The following example will demonstrate refactoring synchronous code to use an async runtime; here, `async-std`.
The `#[async_std::main]` attribute from `async-std` allows us to write an asynchronous main function. To use it, enable the `attributes` feature of `async-std` in `Cargo.toml`:
```toml
[dependencies.async-std]
version = "1.6"
features = ["attributes"]
```
As a first step, we'll switch to an asynchronous main function and `await` the future returned by the async version of `handle_connection`. Then, we'll test how the server responds. Here's what that would look like:
```rust
#[async_std::main]
async fn main() {
    let listener = TcpListener::bind("127.0.0.1:7878").unwrap();
    for stream in listener.incoming() {
        let stream = stream.unwrap();
        // Warning: This is not concurrent!
        handle_connection(stream).await;
    }
}
```
Now, let's test to see if our server can handle connections concurrently. Simply making `handle_connection` asynchronous doesn't mean that the server can handle multiple connections at the same time, and we'll soon see why.
To illustrate this, let's simulate a slow request. When a client makes a request to `127.0.0.1:7878/sleep`, our server will sleep for 5 seconds:
```rust
use std::time::Duration;
use async_std::task;

async fn handle_connection(mut stream: TcpStream) {
    let mut buffer = [0; 1024];
    stream.read(&mut buffer).unwrap();

    let get = b"GET / HTTP/1.1\r\n";
    let sleep = b"GET /sleep HTTP/1.1\r\n";

    let (status_line, filename) = if buffer.starts_with(get) {
        ("HTTP/1.1 200 OK\r\n\r\n", "hello.html")
    } else if buffer.starts_with(sleep) {
        task::sleep(Duration::from_secs(5)).await;
        ("HTTP/1.1 200 OK\r\n\r\n", "hello.html")
    } else {
        ("HTTP/1.1 404 NOT FOUND\r\n\r\n", "404.html")
    };

    let contents = fs::read_to_string(filename).unwrap();
    let response = format!("{status_line}{contents}");
    stream.write(response.as_bytes()).unwrap();
    stream.flush().unwrap();
}
```
This is very similar to the simulation of a slow request from the Book, but with one important difference: we're using the non-blocking function `async_std::task::sleep` instead of the blocking function `std::thread::sleep`.
It's important to remember that even if a piece of code is run within an `async fn` and `await`ed, it may still block. To test whether our server handles connections concurrently, we'll need to ensure that `handle_connection` is non-blocking.
If you run the server, you'll see that a request to `127.0.0.1:7878/sleep` will block any other incoming requests for 5 seconds! This is because there are no other concurrent tasks that can make progress while we are `await`ing the result of `handle_connection`.
In the next section, we'll see how to use async code to handle connections concurrently.