In the previous tutorial, we covered threads, channels, and Mutex for concurrency. Now we turn to async/await, a different way to handle concurrent work.
Threads are good when you have CPU-heavy tasks. But many programs spend most of their time waiting – for network responses, file reads, or database queries. Creating one thread per request wastes memory. Async programming solves this problem. It lets one thread handle thousands of waiting tasks.
What is Async Programming?
Think of a restaurant kitchen. A synchronous kitchen has one chef per order. If the chef waits for water to boil, they stand there doing nothing. An async kitchen has one chef who starts boiling water, then goes to chop vegetables for another order, then comes back when the water is ready.
In Rust, async and .await give you this pattern. An async fn does not block the thread when it waits. Instead, it gives control back to the runtime, which can run other tasks.
Setting Up Tokio
Rust’s standard library provides the async and .await keywords, but it does not include an async runtime. You need a crate for that. The most popular choice is Tokio.
Add it to your Cargo.toml:
```toml
[dependencies]
tokio = { version = "1", features = ["full"] }
```
The "full" feature enables everything: the runtime, timers, channels, I/O, and macros.
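The "full" feature is convenient for learning but pulls in more than most programs need. A slimmer alternative, assuming you only use what this tutorial covers, is to enable individual features:

```toml
# A sketch of a minimal feature set for this tutorial:
# the multi-threaded runtime, #[tokio::main]/#[tokio::test] macros,
# timers (sleep, timeout), and sync primitives (Mutex, channels).
[dependencies]
tokio = { version = "1", features = ["rt-multi-thread", "macros", "time", "sync"] }
```

For production code, trimming features reduces compile time and dependency surface.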
Your First Async Function
An async fn returns a Future. The function body does not execute until you .await the future:
```rust
async fn greet(name: &str) -> String {
    format!("Hello, {}!", name)
}

#[tokio::main]
async fn main() {
    let greeting = greet("Alex").await;
    println!("{}", greeting);
}
```
The #[tokio::main] macro sets up the Tokio runtime and runs your async main function. Without it, you cannot use .await in main.
How Futures Work
When you call greet("Alex"), Rust does not run the function body. It returns a Future. The future is like a recipe – it describes work to do, but the work has not started yet.
When you write .await, the runtime polls the future. If the future is ready, you get the result. If the future is not ready (for example, waiting for a network response), the runtime parks the task and runs something else.
This is the key difference from threads: async tasks do not block threads. They cooperate by yielding control when they wait.
Simulating Async Work
Let us simulate slow operations with tokio::time::sleep:
```rust
use std::time::Duration;
use tokio::time::sleep;

async fn fetch_user(id: u32) -> String {
    sleep(Duration::from_millis(100)).await;
    format!("User(id={})", id)
}

async fn fetch_order(id: u32) -> String {
    sleep(Duration::from_millis(80)).await;
    format!("Order(id={})", id)
}
```
Notice: tokio::time::sleep is an async sleep. It does not block the thread. The standard std::thread::sleep does block the thread – never use it in async code.
Sequential vs Concurrent Execution
Sequential – One After Another
```rust
async fn sequential_requests() -> (String, String) {
    let user = fetch_user(1).await;    // Wait 100ms
    let order = fetch_order(42).await; // Then wait 80ms
    (user, order)
    // Total: ~180ms
}
```
This waits for the user request to finish before starting the order request. Total time is the sum of both waits.
Concurrent – Both at the Same Time
```rust
async fn concurrent_requests() -> (String, String) {
    let (user, order) = tokio::join!(
        fetch_user(1),  // Start both
        fetch_order(42) // at the same time
    );
    (user, order)
    // Total: ~100ms (the longer one)
}
```
tokio::join! starts both futures and waits for all of them to finish. Total time is the maximum of all waits, not the sum. This is much faster when you have independent operations.
When to Use join!
Use tokio::join! when:
- You have multiple independent async operations
- You need ALL results before continuing
- The operations do not depend on each other
```rust
// Good: independent operations
let (user, orders, settings) = tokio::join!(
    fetch_user(1),
    fetch_orders(1),
    fetch_settings(1)
);

// Bad: order depends on user — must be sequential
let user = fetch_user(1).await;
let orders = fetch_orders_for(user.id).await;
```
Spawning Tasks with tokio::spawn
tokio::join! runs futures on the same task. tokio::spawn creates a new task that runs independently on the Tokio runtime:
```rust
use std::time::Duration;
use tokio::time::sleep;

async fn spawn_tasks() -> Vec<String> {
    let mut handles = vec![];
    for i in 0..3 {
        let handle = tokio::spawn(async move {
            sleep(Duration::from_millis(50)).await;
            format!("Task {} done", i)
        });
        handles.push(handle);
    }

    let mut results = vec![];
    for handle in handles {
        results.push(handle.await.unwrap());
    }
    results
}
```
tokio::spawn vs tokio::join!
| Feature | tokio::join! | tokio::spawn |
|---|---|---|
| Where it runs | Same task | New independent task |
| Can outlive caller | No | Yes (keeps running even if the handle is dropped) |
| Needs 'static | No | Yes (the future must own its data) |
| Cancellation | All cancel together | Independent |
Use tokio::spawn when:
- You want fire-and-forget tasks
- You need tasks to run truly in parallel across threads
- The task should continue even if the caller finishes
Use tokio::join! when:
- You need all results together
- The futures share references to local data
- You want structured concurrency
The 'static Requirement
tokio::spawn requires the future to be 'static. This means it cannot borrow local variables:
```rust
async fn example() {
    let name = String::from("Alex");

    // This does NOT compile:
    // tokio::spawn(async {
    //     println!("{}", name); // Borrows name
    // });

    // This works — move ownership into the task:
    tokio::spawn(async move {
        println!("{}", name); // Owns name
    });
    // name is no longer available here
}
```
Racing Futures with tokio::select!
tokio::select! waits for the first future to complete and cancels the rest:
```rust
use std::time::Duration;
use tokio::time::sleep;

async fn timeout_example() -> &'static str {
    tokio::select! {
        _ = sleep(Duration::from_secs(10)) => {
            "slow task finished"
        }
        _ = sleep(Duration::from_millis(50)) => {
            "timeout reached"
        }
    }
}
```
The 50ms sleep finishes first, so select! returns "timeout reached" and cancels the 10-second sleep.
Common Use Case: Timeouts
```rust
use std::time::Duration;
use tokio::time::timeout;

async fn with_timeout() -> Result<String, &'static str> {
    match timeout(Duration::from_secs(5), fetch_user(1)).await {
        Ok(user) => Ok(user),
        Err(_) => Err("Request timed out"),
    }
}
```
Tokio's built-in timeout function wraps the racing pattern from the previous section: it races your future against a timer and returns an Err if the timer finishes first.
Async Error Handling
Async functions work with Result just like regular functions:
```rust
use std::time::Duration;
use tokio::time::sleep;

async fn fetch_data(url: &str) -> Result<String, String> {
    if url.is_empty() {
        return Err("URL cannot be empty".to_string());
    }
    sleep(Duration::from_millis(10)).await;
    Ok(format!("Data from {}", url))
}

#[tokio::main]
async fn main() {
    match fetch_data("https://example.com").await {
        Ok(data) => println!("Got: {}", data),
        Err(e) => println!("Error: {}", e),
    }
}
```
The ? operator works in async functions too:
```rust
async fn process() -> Result<(), String> {
    let data = fetch_data("https://example.com").await?;
    println!("Processing: {}", data);
    Ok(())
}
```
Shared State in Async Code
To share state between tasks, use Arc<tokio::sync::Mutex<T>>:
```rust
use std::sync::Arc;
use tokio::sync::Mutex;

async fn shared_counter() -> u32 {
    let counter = Arc::new(Mutex::new(0u32));
    let mut handles = vec![];

    for _ in 0..5 {
        let counter = Arc::clone(&counter);
        let handle = tokio::spawn(async move {
            let mut lock = counter.lock().await;
            *lock += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.await.unwrap();
    }

    *counter.lock().await
}
```
tokio::sync::Mutex vs std::sync::Mutex
| Feature | std::sync::Mutex | tokio::sync::Mutex |
|---|---|---|
| Lock method | .lock() (blocking) | .lock().await (async) |
| Holding across .await | Not safe (guard is not Send; risks deadlock) | Safe |
| Performance | Faster for short locks | Slower but async-friendly |
Rule of thumb: If you need to hold the lock across an .await point, use tokio::sync::Mutex. If the lock is short (no .await inside), std::sync::Mutex is fine and faster.
Async vs Threads
When should you use async and when should you use threads?
Use Async When:
- You have many I/O-bound tasks (network, files, databases)
- You need thousands of concurrent operations
- Each task spends most time waiting
Use Threads When:
- You have CPU-heavy work (math, parsing, compression)
- You have a small number of tasks
- You need true parallelism on multiple cores
Memory Comparison
A thread reserves about 2MB of stack memory by default (the default stack size for a spawned Rust thread). An async task takes roughly 200 bytes. This means:
- 1000 threads = ~2GB of reserved memory
- 1000 async tasks = ~200KB of memory
For a web server handling 10,000 connections, async is by far the more practical choice.
Mixing Async and Threads
Sometimes you need both. Use tokio::task::spawn_blocking to run CPU-heavy work from async code:
```rust
async fn process_data(data: Vec<u8>) -> Vec<u8> {
    // Move CPU-heavy work to a thread pool
    tokio::task::spawn_blocking(move || {
        // This runs on a separate thread
        data.iter().map(|b| b.wrapping_add(1)).collect()
    })
    .await
    .unwrap()
}
```
Never do CPU-heavy work directly in an async function. It blocks the runtime and prevents other tasks from running.
Common Mistakes
Mistake 1: Using std::thread::sleep in Async Code
```rust
// BAD: blocks the entire thread
async fn bad_delay() {
    std::thread::sleep(std::time::Duration::from_secs(1));
}

// GOOD: yields to the runtime
async fn good_delay() {
    tokio::time::sleep(std::time::Duration::from_secs(1)).await;
}
```
Mistake 2: Forgetting to .await
```rust
async fn example() {
    // This does NOTHING — the future is created but never polled
    // fetch_user(1);

    // This actually runs the function
    fetch_user(1).await;
}
```
The Rust compiler will warn you about unused futures. Always pay attention to those warnings.
Mistake 3: Blocking the Runtime
```rust
// BAD: blocks the runtime
async fn bad_compute() -> u64 {
    (0..1_000_000u64).sum()
}

// GOOD: move to the blocking thread pool
async fn good_compute() -> u64 {
    tokio::task::spawn_blocking(|| {
        (0..1_000_000u64).sum()
    })
    .await
    .unwrap()
}
```
Testing Async Code
Use #[tokio::test] instead of #[test]:
```rust
#[cfg(test)]
mod tests {
    use super::*;

    #[tokio::test]
    async fn test_greet() {
        let result = greet("Sam").await;
        assert_eq!(result, "Hello, Sam!");
    }

    #[tokio::test]
    async fn test_fetch_data_error() {
        let result = fetch_data("").await;
        assert!(result.is_err());
    }

    #[tokio::test]
    async fn test_shared_counter() {
        let result = shared_counter().await;
        assert_eq!(result, 5);
    }
}
```
Summary
| Concept | Syntax | Purpose |
|---|---|---|
| Async function | async fn name() | Define a function that returns a Future |
| Await | .await | Run a future and get its result |
| Tokio runtime | #[tokio::main] | Set up the async runtime |
| Join | tokio::join!(a, b) | Run futures concurrently, wait for all |
| Spawn | tokio::spawn(future) | Run a future on a new task |
| Select | tokio::select! | Wait for the first future to complete |
| Timeout | tokio::time::timeout | Cancel a future after a duration |
| Async sleep | tokio::time::sleep | Non-blocking delay |
| Async mutex | tokio::sync::Mutex | Lock that works across .await |
| Spawn blocking | spawn_blocking(closure) | Run CPU work on a thread pool |
Related Articles
- Rust Tutorial #14: Concurrency – previous tutorial
- Rust Tutorial #16: Collections – next tutorial
- Rust Tutorial Series – all tutorials
What’s Next?
We now know async programming with Tokio. We covered async functions, .await, tokio::join!, tokio::spawn, tokio::select!, and the differences between async and threads. Next, we learn collections – HashMap, BTreeMap, VecDeque, BinaryHeap, and all the data structures you need to build real programs.
Next: Rust Tutorial #16: Collections – HashMap, BTreeMap, VecDeque