In the previous tutorial, you learned how to organize a Go project. Now it is time for Go’s most powerful feature — goroutines.
Goroutines are lightweight threads managed by the Go runtime. They are the reason Go is so popular for servers, microservices, and concurrent programs. You can run millions of goroutines on a single machine.
What is a Goroutine?
A goroutine is a function that runs concurrently with other goroutines. You start one with the go keyword:
```go
package main

import (
    "fmt"
    "time"
)

func sayHello(name string) {
    fmt.Printf("Hello, %s!\n", name)
}

func main() {
    // Start a goroutine
    go sayHello("Alex")
    go sayHello("Sam")
    go sayHello("Jordan")

    // Wait a bit so goroutines can finish
    time.Sleep(100 * time.Millisecond)
    fmt.Println("Done")
}
```
Output (order may vary):
```
Hello, Jordan!
Hello, Alex!
Hello, Sam!
Done
```
The go keyword starts the function in a new goroutine. The function runs concurrently with main. The order of output is not guaranteed because goroutines run at the same time.
Important: Using time.Sleep to wait for goroutines is a bad practice. We will fix this with sync.WaitGroup shortly.
Goroutines vs OS Threads
Goroutines are not operating system threads. They are much lighter:
| Feature | OS Threads | Goroutines |
|---|---|---|
| Stack size | ~1 MB fixed | ~2 KB initial (grows as needed) |
| Creation cost | Expensive (system call) | Cheap (Go runtime) |
| Limit | Thousands | Millions |
| Scheduling | OS kernel | Go runtime |
| Context switch | Slow (kernel mode) | Fast (user space) |
The Go runtime multiplexes goroutines onto a small number of OS threads. This is called an M:N scheduling model — M goroutines run on N OS threads.
sync.WaitGroup — Waiting for Goroutines
The correct way to wait for goroutines is sync.WaitGroup. It counts active goroutines and blocks until all finish:
```go
package main

import (
    "fmt"
    "sync"
    "time"
)

func worker(id int, wg *sync.WaitGroup) {
    defer wg.Done() // Decrease counter when function returns
    fmt.Printf("Worker %d starting\n", id)
    time.Sleep(time.Duration(id) * 100 * time.Millisecond) // Simulate work
    fmt.Printf("Worker %d done\n", id)
}

func main() {
    var wg sync.WaitGroup
    for i := 1; i <= 5; i++ {
        wg.Add(1) // Increase counter before starting goroutine
        go worker(i, &wg)
    }
    wg.Wait() // Block until counter reaches 0
    fmt.Println("All workers finished")
}
```
Output (order may vary, but “All workers finished” is always last):
```
Worker 5 starting
Worker 1 starting
Worker 3 starting
Worker 2 starting
Worker 4 starting
Worker 1 done
Worker 2 done
Worker 3 done
Worker 4 done
Worker 5 done
All workers finished
```
Three methods to remember:
- wg.Add(1) — call before starting a goroutine
- wg.Done() — call when the goroutine finishes (use defer)
- wg.Wait() — blocks until the counter reaches zero
Important: Always pass a WaitGroup by pointer (*sync.WaitGroup). If you pass it by value, the goroutine receives a copy, and calling Done() on the copy never decrements the original counter.
Race Conditions
A race condition happens when multiple goroutines access the same variable without synchronization. One goroutine reads while another writes:
```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            counter++ // Race condition! Multiple goroutines modify counter
        }()
    }
    wg.Wait()
    fmt.Println("Counter:", counter) // Expected 1000, but often less
}
```
You might expect the counter to be 1000, but it will often be less. Multiple goroutines read the same value, increment it, and write back — overwriting each other’s work.
The Race Detector
Go has a built-in race detector. Run your program with the -race flag:
```
go run -race main.go
```
Output:
```
==================
WARNING: DATA RACE
Read at 0x00c0000b4010 by goroutine 8:
  main.main.func1()
      /main.go:16 +0x6a

Previous write at 0x00c0000b4010 by goroutine 7:
  main.main.func1()
      /main.go:16 +0x80
==================
```
The race detector tells you exactly which line has the race condition. Always test concurrent code with -race during development.
sync.Mutex — Protecting Shared Data
A mutex (mutual exclusion) ensures only one goroutine accesses a resource at a time:
```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    counter := 0
    var mu sync.Mutex
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            mu.Lock() // Only one goroutine can enter
            counter++
            mu.Unlock() // Release the lock
        }()
    }
    wg.Wait()
    fmt.Println("Counter:", counter) // Always 1000
}
```
Output:
```
Counter: 1000
```
Now the counter is always 1000. The mutex ensures only one goroutine modifies counter at a time.
sync.RWMutex — Multiple Readers, Single Writer
If you have many readers and few writers, sync.RWMutex is more efficient. Multiple goroutines can read at the same time, but writing requires exclusive access:
```go
package main

import (
    "fmt"
    "sync"
)

type SafeMap struct {
    mu   sync.RWMutex
    data map[string]int
}

func NewSafeMap() *SafeMap {
    return &SafeMap{data: make(map[string]int)}
}

func (m *SafeMap) Set(key string, value int) {
    m.mu.Lock() // Exclusive lock for writing
    defer m.mu.Unlock()
    m.data[key] = value
}

func (m *SafeMap) Get(key string) (int, bool) {
    m.mu.RLock() // Shared lock for reading
    defer m.mu.RUnlock()
    val, ok := m.data[key]
    return val, ok
}

func main() {
    m := NewSafeMap()
    var wg sync.WaitGroup

    // 10 writers
    for i := 0; i < 10; i++ {
        wg.Add(1)
        go func(n int) {
            defer wg.Done()
            key := fmt.Sprintf("key-%d", n)
            m.Set(key, n*10)
        }(i)
    }
    wg.Wait()

    // Read all values
    for i := 0; i < 10; i++ {
        key := fmt.Sprintf("key-%d", i)
        if val, ok := m.Get(key); ok {
            fmt.Printf("%s = %d\n", key, val)
        }
    }
}
```
Use RLock() for reads and Lock() for writes. Multiple readers can hold the RLock at the same time.
How Many Goroutines Can You Run?
Let us test it:
```go
package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    count := 100_000
    for i := 0; i < count; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            // Do nothing — just exist
        }()
    }
    fmt.Printf("Started %d goroutines\n", count)
    fmt.Printf("Active goroutines: %d\n", runtime.NumGoroutine())
    wg.Wait()
    fmt.Println("All done")
}
```
Output:
```
Started 100000 goroutines
Active goroutines: 54321
All done
```
100,000 goroutines use about 200 MB of memory (2 KB each). You can easily run millions on a modern machine. This is what makes Go great for web servers that handle thousands of connections.
Goroutines with Anonymous Functions
You will often use anonymous functions (closures) with goroutines:
```go
package main

import (
    "fmt"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    names := []string{"Alex", "Sam", "Jordan"}
    for _, name := range names {
        wg.Add(1)
        go func(n string) {
            defer wg.Done()
            fmt.Printf("Hello, %s!\n", n)
        }(name) // Pass name as argument
    }
    wg.Wait()
}
```
Important: Always pass loop variables as function arguments. If you capture them directly, all goroutines might see the same value because the loop variable changes before the goroutines run.
In Go 1.22+, the loop variable is scoped per iteration, so this issue is less common. But passing as an argument is still the clearest approach.
A Complete Example
Here is a practical example — a concurrent web page fetcher:
```go
package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

type Result struct {
    URL    string
    Status int
    Time   time.Duration
}

func fetch(url string, wg *sync.WaitGroup, mu *sync.Mutex, results *[]Result) {
    defer wg.Done()
    start := time.Now()
    resp, err := http.Get(url)
    elapsed := time.Since(start)
    if err != nil {
        mu.Lock()
        *results = append(*results, Result{URL: url, Status: 0, Time: elapsed})
        mu.Unlock()
        return
    }
    defer resp.Body.Close()

    mu.Lock()
    *results = append(*results, Result{URL: url, Status: resp.StatusCode, Time: elapsed})
    mu.Unlock()
}

func main() {
    fmt.Println("=== GO-11: Goroutines ===")
    fmt.Println()

    urls := []string{
        "https://go.dev",
        "https://github.com",
        "https://example.com",
        "https://httpbin.org/get",
    }

    var wg sync.WaitGroup
    var mu sync.Mutex
    var results []Result

    start := time.Now()
    for _, url := range urls {
        wg.Add(1)
        go fetch(url, &wg, &mu, &results)
    }
    wg.Wait()
    totalTime := time.Since(start)

    fmt.Println("Results:")
    for _, r := range results {
        if r.Status == 0 {
            fmt.Printf("  %s — ERROR (%v)\n", r.URL, r.Time)
        } else {
            fmt.Printf("  %s — %d (%v)\n", r.URL, r.Status, r.Time)
        }
    }
    fmt.Printf("\nTotal time: %v (fetched %d URLs concurrently)\n", totalTime, len(urls))
}
```
All URLs are fetched at the same time. The total time is roughly the time of the slowest request, not the sum of all requests. This is the power of concurrency.
Common Mistakes
1. Forgetting to call wg.Done().
```go
go func() {
    // Missing defer wg.Done()
    doWork()
}()
wg.Wait() // Blocks forever!
```
Always use defer wg.Done() at the top of the goroutine function. It runs even if the function panics.
2. Passing WaitGroup by value instead of pointer.
```go
// Wrong — copies the WaitGroup
go worker(id, wg)

// Correct — passes a pointer
go worker(id, &wg)
```
A copy of WaitGroup does not share state with the original. Your program will hang.
3. Starting goroutines without synchronization.
```go
go doWork()
// Program exits immediately — goroutine has no time to finish
```
The main function does not wait for goroutines. Use WaitGroup or channels (next tutorial) to synchronize.
Source Code
You can find the complete source code for this tutorial on GitHub:
Related Articles
- Go Tutorial #10: Project Structure and Clean Architecture — How to organize Go projects
- Go Tutorial #12: Channels — Communication between goroutines
- Go Cheat Sheet — Quick reference for Go syntax
What’s Next?
In the next tutorial, Go Tutorial #12: Channels — Communication Between Goroutines, you will learn:
- What channels are and how they work
- Buffered vs unbuffered channels
- Channel direction (send-only, receive-only)
- Closing channels and using range
- How to avoid deadlocks
This is part 11 of the Go Tutorial series. Follow along to learn Go from scratch.