In the previous tutorial, we learned about smart pointers. Now we turn to concurrency: running code on multiple threads at the same time.

Concurrency is hard in most languages. Data races, deadlocks, and race conditions cause bugs that only show up in production. Rust prevents most of these bugs at compile time. The ownership system guarantees that you cannot share data unsafely between threads.

This is called fearless concurrency. The compiler catches mistakes before your code runs.

Spawning Threads

Use thread::spawn to create a new thread:

use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        println!("Hello from a new thread!");
    });

    println!("Hello from the main thread!");
    handle.join().unwrap();  // Wait for the thread to finish
}

thread::spawn takes a closure and runs it on a new thread. It returns a JoinHandle. Calling .join() waits for the thread to finish. If you do not call .join(), the main thread might exit before the spawned thread completes.

Getting Values from Threads

The closure’s return value is available through .join():

use std::thread;

fn main() {
    let handle = thread::spawn(|| {
        let sum: i32 = (1..=100).sum();
        sum
    });

    let result = handle.join().unwrap();
    println!("Sum: {}", result);  // 5050
}

Move Closures with Threads

Threads run independently and might outlive the variables they reference. Rust therefore requires you to move captured data into the thread:

use std::thread;

fn main() {
    let data = vec![1, 2, 3, 4, 5];

    let handle = thread::spawn(move || {
        let sum: i32 = data.iter().sum();
        sum
    });

    // println!("{:?}", data);  // ERROR: data was moved into the thread
    let result = handle.join().unwrap();
    println!("Sum: {}", result);  // 15
}

Without move, this code would not compile. The compiler sees that data might be dropped in the main thread while the spawned thread still needs it. The move keyword transfers ownership to the thread.

Parallel Computation

You can split work across multiple threads:

use std::thread;

fn parallel_sum(numbers: Vec<i32>, chunk_size: usize) -> i32 {
    let chunks: Vec<Vec<i32>> = numbers
        .chunks(chunk_size)
        .map(|c| c.to_vec())
        .collect();

    let handles: Vec<thread::JoinHandle<i32>> = chunks
        .into_iter()
        .map(|chunk| {
            thread::spawn(move || chunk.iter().sum())
        })
        .collect();

    handles.into_iter().map(|h| h.join().unwrap()).sum()
}

fn main() {
    let numbers: Vec<i32> = (1..=100).collect();
    let total = parallel_sum(numbers, 25);
    println!("Total: {}", total);  // 5050
}

This splits the numbers into chunks, sums each chunk on a separate thread, then adds the partial sums. Each chunk is moved into its thread. No shared data, no races.

Message Passing with Channels

Channels let threads communicate by sending messages. Rust uses mpsc channels: multiple producers, single consumer.

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let messages = vec!["hello", "from", "thread"];
        for msg in messages {
            tx.send(String::from(msg)).unwrap();
        }
    });

    for received in rx {
        println!("Got: {}", received);
    }
}

mpsc::channel() returns a transmitter (tx) and receiver (rx). The transmitter sends values. The receiver gets them. When all transmitters are dropped, the receiver’s iterator ends.

How Channels Work

Thread 1          Channel          Main Thread
+-----------+     +----------+     +-----------+
| tx.send() | --> |  buffer  | --> | rx.recv() |
+-----------+     +----------+     +-----------+

Values are moved through the channel. The sender gives up ownership, and the receiver gets it. This prevents two threads from accessing the same data.
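A small sketch makes this concrete (the variable name greeting is illustrative). Once a value is sent, the sending thread can no longer use it; the receiver becomes the sole owner:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    thread::spawn(move || {
        let greeting = String::from("hello");
        tx.send(greeting).unwrap();
        // println!("{}", greeting);  // ERROR: greeting was moved into the channel
    });

    let received = rx.recv().unwrap();  // The receiver now owns the String
    println!("Got: {}", received);
}
```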

Multiple Producers

Clone the transmitter to send from multiple threads:

use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for i in 0..3 {
        let tx_clone = tx.clone();
        thread::spawn(move || {
            tx_clone.send(i * 10).unwrap();
        });
    }
    drop(tx);  // Drop original so rx iterator ends

    let mut results: Vec<i32> = rx.iter().collect();
    results.sort();
    println!("{:?}", results);  // [0, 10, 20]
}

You must drop(tx) because the original transmitter was never moved into a thread. Without dropping it, rx.iter() would wait forever.

Channel Pipeline

You can chain channels to build a processing pipeline:

use std::sync::mpsc;
use std::thread;

fn main() {
    let input = vec![1, 2, 3, 4, 5];

    // Stage 1: double each value
    let (tx1, rx1) = mpsc::channel();
    thread::spawn(move || {
        for value in input {
            tx1.send(value * 2).unwrap();
        }
    });

    // Stage 2: add 1 to each value
    let (tx2, rx2) = mpsc::channel();
    thread::spawn(move || {
        for value in rx1 {
            tx2.send(value + 1).unwrap();
        }
    });

    // Collect results
    let results: Vec<i32> = rx2.iter().collect();
    println!("{:?}", results);  // [3, 5, 7, 9, 11]
}

Each stage runs on its own thread. Data flows through the pipeline: input, then double, then add one, then output. This pattern is great for processing streams of data.

Shared State with Mutex

Channels move data between threads. But sometimes you need multiple threads to access the same data. That is what Mutex<T> is for.

A mutex (mutual exclusion) lets only one thread access data at a time:

use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);

    {
        let mut num = counter.lock().unwrap();
        *num += 1;
    }  // Lock is released here

    {
        let mut num = counter.lock().unwrap();
        *num += 1;
    }

    println!("Count: {}", *counter.lock().unwrap());  // 2
}

.lock() gives you exclusive access. Other threads wait until the lock is released. The lock is released when the MutexGuard goes out of scope.
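If you need to release the lock before the end of a scope, you can also drop the guard explicitly instead of introducing a block. A minimal sketch:

```rust
use std::sync::Mutex;

fn main() {
    let counter = Mutex::new(0);

    let mut num = counter.lock().unwrap();
    *num += 1;
    drop(num);  // Release the lock here, before the scope ends

    // The mutex is free at this point; another thread could lock it now
    println!("Count: {}", *counter.lock().unwrap());  // 1
}
```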

Arc and Mutex Together

To share a Mutex across threads, wrap it in Arc:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0));
    let mut handles = Vec::new();

    for _ in 0..10 {
        let counter_clone = Arc::clone(&counter);
        let handle = thread::spawn(move || {
            let mut num = counter_clone.lock().unwrap();
            *num += 1;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    println!("Final count: {}", *counter.lock().unwrap());  // 10
}

Why both Arc and Mutex?

  • Arc handles shared ownership across threads (multiple owners)
  • Mutex handles exclusive access (only one thread at a time)

Without Arc, you cannot share the mutex. Without Mutex, you cannot mutate the data safely.

Shared Vec

You can share any type, not just numbers:

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let data = Arc::new(Mutex::new(Vec::new()));
    let mut handles = Vec::new();

    for i in 0..5 {
        let data_clone = Arc::clone(&data);
        let handle = thread::spawn(move || {
            let mut vec = data_clone.lock().unwrap();
            vec.push(format!("thread-{}", i));
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    let mut result = data.lock().unwrap().clone();
    result.sort();
    println!("{:?}", result);
    // ["thread-0", "thread-1", "thread-2", "thread-3", "thread-4"]
}

Send and Sync Traits

Two traits control what can cross thread boundaries:

Send means a type can be transferred to another thread. Most types in Rust are Send. A notable exception is Rc<T>, which is not Send because its reference count updates are not atomic.

Sync means a type can be safely referenced from multiple threads at once. Mutex<T> is Sync because all access goes through the lock. RefCell<T> is not Sync because its runtime borrow checking is not thread-safe.

You rarely implement these traits yourself. The compiler automatically implements them when safe. If you try to send a non-Send type to a thread, you get a compile error:

use std::rc::Rc;
use std::thread;

fn main() {
    let data = Rc::new(42);
    // thread::spawn(move || {
    //     println!("{}", data);  // ERROR: Rc<i32> cannot be sent between threads
    // });
}

The fix is to use Arc instead of Rc.
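A sketch of the same program with Arc, whose atomic reference count makes it Send:

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    let data = Arc::new(42);
    let data_clone = Arc::clone(&data);

    let handle = thread::spawn(move || {
        println!("{}", data_clone);  // Arc<i32> can be sent between threads
    });

    handle.join().unwrap();
    println!("Still usable here: {}", data);  // The original Arc remains valid
}
```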

Message Passing vs Shared State

Both approaches work. Here is when to use each:

Message passing (channels):

  • Data flows in one direction
  • Producer/consumer patterns
  • Pipeline processing
  • Simpler to reason about

Shared state (Mutex):

  • Multiple threads need to read/write the same data
  • Counters, caches, shared collections
  • When moving data through channels would require too much cloning

A common saying: “Do not communicate by sharing memory; share memory by communicating.” Channels are often the safer choice. But sometimes shared state is simpler.
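To see the trade-off, the Arc<Mutex> counter from earlier can also be written with channels: each worker sends its increment, and only the main thread touches the total. A sketch, not necessarily the better choice here:

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    let (tx, rx) = mpsc::channel();

    for _ in 0..10 {
        let tx_clone = tx.clone();
        thread::spawn(move || {
            tx_clone.send(1).unwrap();  // Report one increment
        });
    }
    drop(tx);  // Let the receiver's iterator end

    let count: i32 = rx.iter().sum();  // Only the main thread sums
    println!("Final count: {}", count);  // 10
}
```

No mutex is needed because the mutable state never leaves the main thread.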

Practical Example: Parallel Word Count

Here is a real-world example. Count words across multiple texts in parallel:

use std::sync::{Arc, Mutex};
use std::thread;

fn parallel_word_count(texts: Vec<String>) -> usize {
    let total = Arc::new(Mutex::new(0usize));
    let mut handles = Vec::new();

    for text in texts {
        let total_clone = Arc::clone(&total);
        let handle = thread::spawn(move || {
            let count = text.split_whitespace().count();
            let mut total = total_clone.lock().unwrap();
            *total += count;
        });
        handles.push(handle);
    }

    for handle in handles {
        handle.join().unwrap();
    }

    *total.lock().unwrap()
}

fn main() {
    let texts = vec![
        String::from("hello world"),
        String::from("rust is fast and safe"),
        String::from("concurrency without fear"),
    ];
    println!("Total words: {}", parallel_word_count(texts));  // 10
}

Each text is processed on its own thread. The total count is protected by a mutex.

Common Mistakes

Mistake: Forgetting to Drop the Lock

let mut num = counter.lock().unwrap();
*num += 1;
// Lock is still held here — other threads are blocked!
// do_something_slow();
// Lock released at end of scope

Fix: Use a block to limit the lock’s scope:

{
    let mut num = counter.lock().unwrap();
    *num += 1;
}  // Lock released immediately
// do_something_slow();  // Other threads can proceed

Mistake: Using Rc Instead of Arc

// let data = Rc::new(Mutex::new(0));
// thread::spawn(move || { ... });  // ERROR: Rc is not Send

Fix: Use Arc:

let data = Arc::new(Mutex::new(0));

Mistake: Forgetting to Drop the Original Transmitter

let (tx, rx) = mpsc::channel();
for i in 0..3 {
    let tx_clone = tx.clone();
    thread::spawn(move || { tx_clone.send(i).unwrap(); });
}
// rx.iter() will hang forever — tx is still alive!

Fix: Drop the original:

drop(tx);
for msg in rx {
    println!("{}", msg);
}

Summary

Concept                   What It Does
thread::spawn             Create a new thread
handle.join()             Wait for thread to finish
move closure              Transfer ownership to thread
mpsc::channel()           Create a message channel
tx.send(value)            Send a value through channel
rx.recv() / rx.iter()     Receive values from channel
Mutex::new(value)         Create a mutex-protected value
mutex.lock()              Get exclusive access
Arc::new(value)           Thread-safe shared ownership
Arc<Mutex<T>>             Thread-safe shared mutable data
Send trait                Type can be sent to another thread
Sync trait                Type can be shared between threads

Source Code

View source code on GitHub ->

What’s Next?

We now know how to write concurrent Rust programs safely. The compiler prevents data races, and the ownership system makes sure shared data is properly protected. Next, we learn async programming with Tokio – async/await, spawning tasks, and concurrent I/O without creating a new thread for every task.

Next: Rust Tutorial #15: Async/Await and Tokio