
GoLang Concurrency: Master Goroutines, Channels, Context & Sync Patterns

Go's concurrency model is its defining characteristic. This deep dive covers everything from the M:N scheduler and race detection to implementing robust worker pools and managing timeouts with the Context package.

GOROUTINES

Goroutine Basics

Goroutines are lightweight threads managed by the Go runtime, not the OS. They start with only ~2KB of stack space (vs ~1MB for OS threads) and can grow dynamically. Thousands of goroutines can run concurrently on a few OS threads, making Go extremely efficient for concurrent programming.

┌─────────────────────────────────────────────────────────┐
│                       GO RUNTIME                        │
│ ┌──────────┐ ┌──────────┐ ┌──────────┐ ┌──────────┐     │
│ │Goroutine1│ │Goroutine2│ │Goroutine3│ │Goroutine4│     │
│ └────┬─────┘ └────┬─────┘ └────┬─────┘ └────┬─────┘     │
│      └───────────┴──────┬─────┴─────────────┘           │
│                   ┌─────▼─────┐                         │
│                   │ Scheduler │                         │
│                   └─────┬─────┘                         │
│        ┌────────────────┼────────────────┐              │
│   ┌────▼────┐      ┌────▼────┐      ┌────▼────┐         │
│   │OS Thread│      │OS Thread│      │OS Thread│         │
│   └─────────┘      └─────────┘      └─────────┘         │
└─────────────────────────────────────────────────────────┘
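To make the "thousands of goroutines" claim concrete, here is a small runnable sketch (the spawn helper and the 100,000 figure are illustrative, not a benchmark). It launches 100,000 goroutines, each adding its index to a shared total, and waits for all of them using a sync.WaitGroup (covered later in this article):

```go
package main

import (
	"fmt"
	"sync"
)

// spawn launches n goroutines and waits for all of them. With ~2KB starting
// stacks, even 100,000 goroutines are cheap to create; the same experiment
// with OS threads would exhaust memory on most machines.
func spawn(n int) int64 {
	var (
		wg  sync.WaitGroup
		mu  sync.Mutex
		sum int64
	)
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func(v int64) {
			defer wg.Done()
			mu.Lock() // Protect the shared total from concurrent writes
			sum += v
			mu.Unlock()
		}(int64(i))
	}
	wg.Wait() // Block until every goroutine has finished
	return sum
}

func main() {
	fmt.Println(spawn(100000)) // 4999950000 (sum of 0..99999)
}
```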

go Keyword

The go keyword spawns a new goroutine that executes the function call concurrently. The calling function continues immediately without waiting for the goroutine to complete.

func main() {
    go doWork() // Non-blocking, returns immediately

    go func() { // Anonymous goroutine
        fmt.Println("inline work")
    }()

    time.Sleep(time.Second) // Wait (bad practice, use sync primitives)
}

func doWork() {
    fmt.Println("working...")
}

Goroutine Scheduling

Go's scheduler is largely cooperative: goroutines yield at specific points such as channel operations, function calls, blocking syscalls, and runtime.Gosched(). Since Go 1.14, the runtime also preempts long-running goroutines asynchronously, so a tight loop can no longer starve the scheduler. The scheduler is work-stealing: idle processors steal goroutines from other processors' queues to balance load.

┌─────────────────────────────────────────────────────────────┐
│                     SCHEDULING POINTS                       │
├─────────────────────────────────────────────────────────────┤
│  • Channel send/receive      • Function calls               │
│  • Blocking syscalls         • runtime.Gosched()            │
│  • GC                        • time.Sleep()                 │
│  • select statements         • sync operations              │
└─────────────────────────────────────────────────────────────┘
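A minimal sketch of two of these scheduling points, an explicit runtime.Gosched() yield and a channel receive (the runDemo helper is illustrative). The receive on done blocks main until the background goroutine has run, so the recorded order is deterministic:

```go
package main

import (
	"fmt"
	"runtime"
)

// runDemo records the order in which two goroutines make progress.
// runtime.Gosched() explicitly yields the processor; the receive on done
// is itself a scheduling point and guarantees the background goroutine
// finishes before main records its own entry.
func runDemo() []string {
	var order []string
	done := make(chan struct{})
	go func() {
		order = append(order, "background")
		close(done)
	}()
	runtime.Gosched() // Yield: give the scheduler a chance to run the goroutine
	<-done            // Channel receive: blocks until the goroutine closes done
	order = append(order, "main")
	return order
}

func main() {
	fmt.Println(runDemo()) // [background main]
}
```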

M:N Scheduling Model

Go uses M:N scheduling: M goroutines are multiplexed onto N OS threads. The runtime manages three entities: G (goroutines), M (machine/OS threads), and P (processors/execution contexts). Each P has a local run queue, and there's a global queue for overflow.

┌─────────────────────────────────────────────────────────────────┐
│   G = Goroutine    M = OS Thread    P = Processor Context       │
├─────────────────────────────────────────────────────────────────┤
│                                                                 │
│   ┌───┐ ┌───┐ ┌───┐       Global Run Queue                      │
│   │ G │ │ G │ │ G │◄─────────────────────────────────┐          │
│   └───┘ └───┘ └───┘                                  │          │
│                                                      │          │
│   ┌─────────────────┐     ┌─────────────────┐        │          │
│   │       P1        │     │       P2        │        │          │
│   │  ┌───┬───┬───┐  │     │  ┌───┬───┬───┐  │        │          │
│   │  │ G │ G │ G │  │     │  │ G │ G │   │  │◄───────┘          │
│   │  └───┴───┴───┘  │     │  └───┴───┴───┘  │                   │
│   │   Local Queue   │     │   Local Queue   │                   │
│   └────────┬────────┘     └────────┬────────┘                   │
│            │                       │                            │
│        ┌───▼───┐               ┌───▼───┐                        │
│        │  M1   │               │  M2   │      OS Threads        │
│        └───────┘               └───────┘                        │
└─────────────────────────────────────────────────────────────────┘

GOMAXPROCS

GOMAXPROCS sets the number of processors (P's), i.e. the maximum number of OS threads that can execute Go code simultaneously. By default, it equals the number of CPU cores. Setting it doesn't limit goroutine count, only concurrent execution contexts.

import "runtime"

func main() {
    // Get current value
    current := runtime.GOMAXPROCS(0) // 0 = just query, don't change
    fmt.Println("CPUs:", runtime.NumCPU())
    fmt.Println("GOMAXPROCS:", current)

    // Set to specific value
    runtime.GOMAXPROCS(4)

    // Or via environment variable: GOMAXPROCS=4 go run main.go
}

Goroutine Lifecycle

A goroutine transitions through states: Runnable → Running → Waiting → Runnable → Running → Dead. It becomes Dead when its function returns. The main goroutine's termination kills all others immediately—orphaning them without cleanup.

┌───────────────────────────────────────────────────────────┐
│                   GOROUTINE LIFECYCLE                     │
├───────────────────────────────────────────────────────────┤
│                                                           │
│   ┌──────────┐                                            │
│   │ Creation │  (go func())                               │
│   └────┬─────┘                                            │
│        ▼                                                  │
│   ┌──────────┐      scheduled      ┌─────────┐            │
│   │ Runnable │──────────────────►  │ Running │            │
│   └──────────┘                     └────┬────┘            │
│        ▲                                │                 │
│        │           preempted            │                 │
│        └────────────────────────────────┤                 │
│                                         │                 │
│        ┌────────────────────────────────┘                 │
│        │  I/O, channel, sync                              │
│        ▼                                                  │
│   ┌──────────┐     event ready     ┌──────────┐           │
│   │ Waiting  │──────────────────►  │ Runnable │           │
│   └──────────┘                     └──────────┘           │
│                                                           │
│        function returns                                   │
│        ▼                                                  │
│   ┌────────┐                                              │
│   │  Dead  │                                              │
│   └────────┘                                              │
└───────────────────────────────────────────────────────────┘

Goroutine Leaks

Goroutine leaks occur when goroutines are blocked forever—waiting on channels, locks, or I/O that never completes. They consume memory and can exhaust resources. Always ensure goroutines have exit paths using contexts, done channels, or timeouts.

// ❌ LEAK: goroutine blocked forever
func leak() {
    ch := make(chan int)
    go func() {
        val := <-ch // Blocks forever, nothing ever sends on ch
        fmt.Println(val)
    }()
    // function returns, goroutine stays blocked
}

// ✅ FIXED: using context for cancellation
func noLeak(ctx context.Context) {
    ch := make(chan int)
    go func() {
        select {
        case val := <-ch:
            fmt.Println(val)
        case <-ctx.Done():
            return // Exit path!
        }
    }()
}

Anonymous Goroutines

Anonymous goroutines are inline function literals executed with go. Beware of closure capture: before Go 1.22, all goroutines launched in a loop shared the same loop variable and would likely observe its final value. Since Go 1.22, each iteration gets a fresh variable, but passing the value as a parameter remains a clear, portable habit.

// ❌ BUG (before Go 1.22): all goroutines capture the same 'i' variable
for i := 0; i < 3; i++ {
    go func() {
        fmt.Println(i) // Pre-1.22, likely prints: 3, 3, 3
    }()
}

// ✅ FIXED: Pass as parameter (creates copy)
for i := 0; i < 3; i++ {
    go func(n int) {
        fmt.Println(n) // Prints: 0, 1, 2 (any order)
    }(i)
}

// ✅ ALSO FIXED: Create new variable in loop scope
for i := 0; i < 3; i++ {
    i := i // Shadow with new variable
    go func() {
        fmt.Println(i)
    }()
}

WaitGroup (sync.WaitGroup)

sync.WaitGroup waits for a collection of goroutines to finish. Call Add(n) before launching goroutines, Done() when each completes (typically deferred), and Wait() to block until the counter reaches zero.

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 5; i++ {
        wg.Add(1) // Increment BEFORE goroutine starts
        go func(id int) {
            defer wg.Done() // Decrement when done
            fmt.Printf("Worker %d done\n", id)
        }(i)
    }
    wg.Wait() // Block until counter == 0
    fmt.Println("All workers completed")
}

// Output (order varies):
// Worker 2 done
// Worker 0 done
// Worker 4 done
// Worker 1 done
// Worker 3 done
// All workers completed

CHANNELS

Channel Basics

Channels are typed conduits for goroutine communication and synchronization. They follow CSP (Communicating Sequential Processes) principles—share memory by communicating, not communicate by sharing memory. Channels are first-class values: pass them, return them, store them.

┌─────────────────────────────────────────────────────────────┐
│                    CHANNEL OPERATIONS                       │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│   Goroutine A                            Goroutine B        │
│   ┌─────────┐      ┌─────────────┐      ┌──────────┐        │
│   │         │      │             │      │          │        │
│   │ ch <- v │─────►│   Channel   │─────►│ v = <-ch │        │
│   │ (send)  │      │             │      │ (receive)│        │
│   └─────────┘      └─────────────┘      └──────────┘        │
│                                                             │
│   • Send blocks until receiver ready (unbuffered)           │
│   • Receive blocks until value available                    │
│   • Provides synchronization + data transfer                │
└─────────────────────────────────────────────────────────────┘

Channel Creation (make)

Channels must be created with make() before use. The optional second argument specifies buffer capacity. A nil channel (uninitialized) blocks forever on send/receive—useful in select for disabling cases.

// Unbuffered channel - synchronous
ch1 := make(chan int)

// Buffered channel - capacity 10
ch2 := make(chan string, 10)

// Typed channels
ch3 := make(chan *User)
ch4 := make(chan struct{}) // Signal-only, zero memory

// nil channel - blocks forever
var ch5 chan int // nil, sending/receiving blocks forever

fmt.Printf("ch1: %v, ch2 cap: %d\n", ch1, cap(ch2))
// Output: ch1: 0xc000018180, ch2 cap: 10

Send and Receive Operations

The arrow operator <- indicates data flow direction. ch <- v sends v to channel; v = <-ch receives from channel. Both operations block by default until the other side is ready (for unbuffered channels) or buffer has space/data.

ch := make(chan int)

// Send in goroutine (would block main if done there first)
go func() {
    ch <- 42  // Send value
    close(ch) // Close so the second receive below doesn't block forever
}()

// Receive
value := <-ch      // Blocking receive
fmt.Println(value) // 42

// Receive with ok (detect closed channel)
value, ok := <-ch
if !ok {
    fmt.Println("channel closed")
}

Channel Direction (send-only, receive-only)

Channels can be direction-restricted in function signatures for compile-time safety. chan<- T is send-only; <-chan T is receive-only. Bidirectional channels implicitly convert to either direction, but not vice versa.

// Send-only channel parameter
func producer(out chan<- int) {
    out <- 1
    out <- 2
    close(out)
    // <-out // Compile error: cannot receive from send-only
}

// Receive-only channel parameter
func consumer(in <-chan int) {
    for v := range in {
        fmt.Println(v)
    }
    // in <- 1 // Compile error: cannot send to receive-only
}

func main() {
    ch := make(chan int) // Bidirectional
    go producer(ch)      // Converts to chan<- int
    consumer(ch)         // Converts to <-chan int
}

Buffered vs Unbuffered Channels

Unbuffered channels synchronize sender and receiver—both block until the other is ready. Buffered channels decouple them—send blocks only when full, receive blocks only when empty. Use buffered for async handoffs; unbuffered for guaranteed synchronization.

┌─────────────────────────────────────────────────────────────┐
│               UNBUFFERED (make(chan int))                   │
├─────────────────────────────────────────────────────────────┤
│   Sender                                  Receiver          │
│     │                                        │              │
│     │─── ch <- val ──────────────────────►   │ val = <-ch   │
│     │    (blocks until received)             │              │
│     │◄────────── synchronization ─────────►  │              │
└─────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────┐
│               BUFFERED (make(chan int, 3))                  │
├─────────────────────────────────────────────────────────────┤
│   Sender           ┌───┬───┬───┐           Receiver         │
│     │              │ 1 │ 2 │   │              │             │
│     │─── ch <- ──► ├───┴───┴───┤ ───► <-ch ──►│             │
│     │              │  buffer   │              │             │
│     │              └───────────┘              │             │
│   (blocks only      cap=3, len=2     (blocks only           │
│    when full)                         when empty)           │
└─────────────────────────────────────────────────────────────┘
// Unbuffered - synchronous rendezvous
unbuf := make(chan int)
go func() { unbuf <- 1 }() // Blocks until main receives
fmt.Println(<-unbuf)

// Buffered - can send without receiver ready
buf := make(chan int, 2)
buf <- 1 // Doesn't block
buf <- 2 // Doesn't block
// buf <- 3 // Would block! Buffer full
fmt.Println(<-buf, <-buf)

Channel Capacity

cap(ch) returns buffer capacity; len(ch) returns current element count. For unbuffered channels, both are 0. These are useful for monitoring but avoid making logic depend on them—race conditions can make values stale immediately.

ch := make(chan int, 5)
ch <- 1
ch <- 2
ch <- 3
fmt.Printf("Length: %d, Capacity: %d\n", len(ch), cap(ch))
// Output: Length: 3, Capacity: 5

<-ch // Remove one
fmt.Printf("Length: %d, Capacity: %d\n", len(ch), cap(ch))
// Output: Length: 2, Capacity: 5

Closing Channels

close(ch) signals no more values will be sent. Receivers get zero values after draining, with ok=false. Sending to a closed channel panics. Only senders should close; closing is optional—it's a signal, not a cleanup requirement.

ch := make(chan int, 3)
ch <- 1
ch <- 2
ch <- 3
close(ch)

// Receiving from closed channel
fmt.Println(<-ch) // 1
fmt.Println(<-ch) // 2
fmt.Println(<-ch) // 3
fmt.Println(<-ch) // 0 (zero value, channel empty+closed)

v, ok := <-ch
fmt.Printf("value: %d, open: %v\n", v, ok)
// Output: value: 0, open: false

// ch <- 4  // PANIC: send on closed channel
// close(ch) // PANIC: close of closed channel

Range Over Channels

for v := range ch loops until the channel is closed and drained. It's the idiomatic way to consume all values. The loop blocks waiting for values and exits automatically when the channel is closed.

func main() {
    ch := make(chan int)
    go func() {
        for i := 0; i < 5; i++ {
            ch <- i
        }
        close(ch) // Required! Otherwise range blocks forever
    }()

    // Range receives until channel closed
    for v := range ch {
        fmt.Print(v, " ")
    }
    // Output: 0 1 2 3 4
}

select Statement

select multiplexes channel operations, choosing one ready case randomly (if multiple ready). It blocks until a case can proceed. Essential for handling multiple channels, timeouts, and non-blocking operations.

ch1 := make(chan string)
ch2 := make(chan string)

go func() { time.Sleep(100 * time.Millisecond); ch1 <- "one" }()
go func() { time.Sleep(200 * time.Millisecond); ch2 <- "two" }()

for i := 0; i < 2; i++ {
    select {
    case msg1 := <-ch1:
        fmt.Println("received", msg1)
    case msg2 := <-ch2:
        fmt.Println("received", msg2)
    }
}
// Output:
// received one
// received two

Default Case in Select

default makes select non-blocking—it executes immediately if no channel is ready. Use for polling, non-blocking sends/receives, or busy loops with work to do between channel checks.

ch := make(chan int)

// Non-blocking receive
select {
case v := <-ch:
    fmt.Println("received", v)
default:
    fmt.Println("no value ready") // Executes immediately
}

// Non-blocking send
select {
case ch <- 42:
    fmt.Println("sent")
default:
    fmt.Println("channel not ready") // Executes if ch full/no receiver
}

nil Channels

Sending to or receiving from a nil channel blocks forever. In select, nil channel cases are never chosen—useful for dynamically disabling cases by setting channels to nil after they're done.

func merge(ch1, ch2 <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for ch1 != nil || ch2 != nil {
            select {
            case v, ok := <-ch1:
                if !ok {
                    ch1 = nil // Disable this case
                    continue
                }
                out <- v
            case v, ok := <-ch2:
                if !ok {
                    ch2 = nil // Disable this case
                    continue
                }
                out <- v
            }
        }
    }()
    return out
}

Channel Axioms

Four fundamental rules: (1) Send to a nil channel blocks forever, (2) Receive from a nil channel blocks forever, (3) Send to a closed channel panics, (4) Receive from a closed channel returns the zero value immediately (with ok=false once the buffer is drained).

┌────────────────────────────────────────────────────────────────┐
│                        CHANNEL AXIOMS                          │
├────────────────┬───────────────────────────────────────────────┤
│   Operation    │                Channel State                  │
│                ├─────────────┬─────────────┬───────────────────┤
│                │     nil     │    open     │      closed       │
├────────────────┼─────────────┼─────────────┼───────────────────┤
│   Send         │    Block    │ Proceed/Blk │       PANIC       │
│   Receive      │    Block    │ Proceed/Blk │ Zero value, false │
│   Close        │    PANIC    │   Succeed   │       PANIC       │
└────────────────┴─────────────┴─────────────┴───────────────────┘
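Axioms 3 and 4 are easy to verify directly. A small sketch (the closedBehavior helper is illustrative): a closed buffered channel still delivers its remaining values, then returns the zero value with ok == false:

```go
package main

import "fmt"

// closedBehavior demonstrates receives on a closed channel: buffered values
// are drained first, then every further receive returns the zero value with
// ok == false. (Sending after close would panic, so we don't.)
func closedBehavior() (first, last int, ok bool) {
	ch := make(chan int, 1)
	ch <- 7
	close(ch)
	first = <-ch    // buffered value still delivered: 7
	last, ok = <-ch // channel drained: zero value, ok == false
	return
}

func main() {
	f, l, ok := closedBehavior()
	fmt.Println(f, l, ok) // 7 0 false
}
```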

Channel Patterns

Common patterns include: done channels for signaling, result channels for returning values, semaphores for limiting concurrency, and signal channels (chan struct{}) for zero-allocation notifications.

// Done channel pattern - signaling completion
done := make(chan struct{})
go func() {
    defer close(done)
    doWork()
}()
<-done // Wait for completion

// Result channel - returning values from goroutine
type Result struct {
    Value int
    Err   error
}

resultCh := make(chan Result, 1)
go func() {
    v, err := expensiveOperation()
    resultCh <- Result{v, err}
}()
result := <-resultCh

Fan-in Pattern

Fan-in merges multiple input channels into a single output channel. Used to aggregate results from parallel workers. The merge function typically spawns goroutines to forward each input to the shared output.

     ┌──────────┐
────►│ worker 1 │─────┐
     └──────────┘     │      ┌─────────────┐
     ┌──────────┐     ├─────►│   merged    │────►
────►│ worker 2 │─────┤      │   output    │
     └──────────┘     │      └─────────────┘
     ┌──────────┐     │
────►│ worker 3 │─────┘
     └──────────┘
func fanIn(channels ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup
    for _, ch := range channels {
        wg.Add(1)
        go func(c <-chan int) {
            defer wg.Done()
            for v := range c {
                out <- v
            }
        }(ch)
    }
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

Fan-out Pattern

Fan-out distributes work from one channel to multiple workers for parallel processing. Workers read from a shared input channel; each item is processed by exactly one worker.

                       ┌──────────┐
                  ┌───►│ worker 1 │────►
┌─────────────┐   │    └──────────┘
│    input    │───┼───►┌──────────┐
│   channel   │   │    │ worker 2 │────►
└─────────────┘   │    └──────────┘
                  └───►┌──────────┐
                       │ worker 3 │────►
                       └──────────┘
func fanOut(input <-chan int, workers int) []<-chan int {
    outputs := make([]<-chan int, workers)
    for i := 0; i < workers; i++ {
        out := make(chan int)
        outputs[i] = out
        go func() {
            defer close(out)
            for v := range input { // All workers share input
                out <- process(v)
            }
        }()
    }
    return outputs
}

Pipeline Pattern

Pipelines chain processing stages via channels. Each stage receives from upstream, transforms data, sends downstream. Enables composable, concurrent data processing with natural backpressure.

┌─────────┐    ┌──────────┐    ┌──────────┐    ┌─────────┐
│ generate│───►│  square  │───►│  filter  │───►│  print  │
│  1,2,3  │    │   x*x    │    │  x > 5   │    │         │
└─────────┘    └──────────┘    └──────────┘    └─────────┘
func generate(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

// Usage: pipeline
for v := range square(square(generate(1, 2, 3))) {
    fmt.Println(v) // 1, 16, 81
}

Cancellation with Channels

Use a done channel (typically chan struct{}) to signal cancellation to goroutines. Workers select on both work and done channels. Closing done broadcasts to all listeners. Prefer context.Context for new code.

func worker(done <-chan struct{}, jobs <-chan int) {
    for {
        select {
        case <-done:
            fmt.Println("worker: cancelled")
            return
        case job, ok := <-jobs:
            if !ok {
                return
            }
            fmt.Println("processing", job)
        }
    }
}

func main() {
    done := make(chan struct{})
    jobs := make(chan int)
    go worker(done, jobs)
    jobs <- 1
    jobs <- 2
    close(done) // Signal cancellation to all workers
    time.Sleep(100 * time.Millisecond)
}

CONTEXT PACKAGE

context.Context Interface

context.Context carries deadlines, cancellation signals, and request-scoped values across API boundaries. It's the standard way to propagate cancellation in Go. Always pass as first parameter; never store in structs.

type Context interface {
    Deadline() (deadline time.Time, ok bool) // When work should be cancelled
    Done() <-chan struct{}                   // Closed when work should stop
    Err() error                              // Why context was cancelled
    Value(key any) any                       // Request-scoped data
}

// Usage
func doWork(ctx context.Context) error {
    select {
    case <-ctx.Done():
        return ctx.Err() // Canceled or DeadlineExceeded
    case <-time.After(time.Second):
        return nil
    }
}

context.Background

context.Background() returns an empty, never-cancelled context. Use as the root context in main, init, tests, and at the top of request handling. It's the starting point from which other contexts derive.

func main() {
    ctx := context.Background() // Root context

    // Derive child contexts from it
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()

    if err := doWork(ctx); err != nil {
        log.Fatal(err)
    }
}

context.TODO

context.TODO() is a placeholder when you're unsure which context to use or when refactoring. Functionally identical to Background() but signals "this needs proper context." Static analysis tools can flag TODO usages.

// Use when refactoring code to use contexts
func legacyFunction() {
    ctx := context.TODO() // "I know I need a context, will fix later"
    newAPIThatNeedsContext(ctx)
}

// Don't use in production - prefer receiving context as parameter
func properFunction(ctx context.Context) {
    newAPIThatNeedsContext(ctx) // Pass through
}

context.WithCancel

WithCancel returns a derived context and cancel function. Calling cancel cancels the child (and its descendants), not the parent. Always defer cancel() to prevent leaks, even if context expires naturally.

func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel() // Clean up resources

    go func() {
        <-ctx.Done()
        fmt.Println("goroutine:", ctx.Err())
    }()

    time.Sleep(100 * time.Millisecond)
    cancel() // Signal cancellation
    time.Sleep(100 * time.Millisecond)
}
// Output: goroutine: context canceled

context.WithDeadline

WithDeadline creates a context cancelled at a specific time. If the parent has an earlier deadline, that takes precedence. Returns cancel function for manual early cancellation.

func main() {
    deadline := time.Now().Add(2 * time.Second)
    ctx, cancel := context.WithDeadline(context.Background(), deadline)
    defer cancel()

    select {
    case <-time.After(3 * time.Second):
        fmt.Println("finished work")
    case <-ctx.Done():
        fmt.Println("deadline exceeded:", ctx.Err())
    }
}
// Output: deadline exceeded: context deadline exceeded

context.WithTimeout

WithTimeout(parent, duration) is shorthand for WithDeadline(parent, time.Now().Add(duration)). Use for operations that should complete within a relative time period. Most common context type for RPC calls.

func fetchData(ctx context.Context, url string) ([]byte, error) {
    ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
    defer cancel()

    req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
    if err != nil {
        return nil, err
    }
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err // Includes timeout error
    }
    defer resp.Body.Close()
    return io.ReadAll(resp.Body)
}

context.WithValue

WithValue attaches request-scoped data to context. Keys should be unexported types to avoid collisions. Use sparingly—for request ID, auth tokens, tracing. Never for function parameters or optional values.

type contextKey string

const (
    requestIDKey contextKey = "requestID"
    userKey      contextKey = "user"
)

func middleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := context.WithValue(r.Context(), requestIDKey, uuid.New().String())
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

func handler(w http.ResponseWriter, r *http.Request) {
    reqID := r.Context().Value(requestIDKey).(string)
    log.Printf("[%s] handling request", reqID)
}

Context Cancellation Propagation

Cancellation flows downward through the context tree. When a parent is cancelled, all children are automatically cancelled. Children cannot cancel parents. This enables clean shutdown of entire request subtrees.

┌─────────────────────────────────────────────────────────────┐
│                 CANCELLATION PROPAGATION                    │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│                  ┌─────────────┐                            │
│                  │ Background  │                            │
│                  └──────┬──────┘                            │
│                         │                                   │
│                  ┌──────▼──────┐                            │
│                  │   Request   │◄── cancel() called here    │
│                  │   Context   │                            │
│                  └──────┬──────┘                            │
│            ┌────────────┼────────────┐                      │
│            ▼            ▼            ▼                      │
│       ┌─────────┐  ┌─────────┐  ┌─────────┐                 │
│       │ DB ctx  │  │ API ctx │  │Cache ctx│  All cancelled  │
│       └────┬────┘  └────┬────┘  └────┬────┘  automatically  │
│            ▼            ▼            ▼                      │
│         ✗ Done       ✗ Done       ✗ Done                    │
└─────────────────────────────────────────────────────────────┘

Context Best Practices

Pass context as first parameter named ctx. Never store in structs. Don't pass nil—use context.TODO(). Create request-scoped contexts for each request. Always call cancel functions (defer). Don't use context for passing optional parameters.

// ✅ Good: context as first param
func Process(ctx context.Context, data []byte) error

// ❌ Bad: context in struct
type Service struct {
    ctx context.Context // Don't do this
}

// ✅ Good: per-request context
func (s *Server) Handle(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context() // Request-scoped
    ctx, cancel := context.WithTimeout(ctx, 10*time.Second)
    defer cancel() // Always clean up
    result, err := s.process(ctx)
}

Context in HTTP Handlers

HTTP requests carry context via r.Context(). Use http.NewRequestWithContext() for outgoing requests. Context is cancelled when client disconnects, enabling cleanup of abandoned requests.

func handler(w http.ResponseWriter, r *http.Request) {
    ctx := r.Context() // Cancelled if client disconnects

    resultCh := make(chan string, 1)
    go func() {
        resultCh <- expensiveOperation()
    }()

    select {
    case result := <-resultCh:
        fmt.Fprint(w, result)
    case <-ctx.Done():
        log.Println("client disconnected")
        return
    }
}

Context in Database Operations

Pass context to database operations for timeout control and cancellation. Standard database/sql methods have Context variants. When a context is cancelled, the driver interrupts the in-flight query where the database supports it.

func GetUser(ctx context.Context, db *sql.DB, id int) (*User, error) {
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()

    var user User
    err := db.QueryRowContext(ctx,
        "SELECT id, name, email FROM users WHERE id = $1", id,
    ).Scan(&user.ID, &user.Name, &user.Email)
    if errors.Is(err, context.DeadlineExceeded) {
        return nil, fmt.Errorf("database query timed out")
    }
    return &user, err
}

SYNCHRONIZATION PRIMITIVES

sync.Mutex

sync.Mutex provides mutual exclusion—only one goroutine can hold the lock. Use Lock() and Unlock() (always defer Unlock). Protects shared state from concurrent access. Never copy a mutex after first use.

type SafeCounter struct {
    mu    sync.Mutex
    count int
}

func (c *SafeCounter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *SafeCounter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

sync.RWMutex

RWMutex allows multiple concurrent readers OR one exclusive writer. Use RLock()/RUnlock() for reads, Lock()/Unlock() for writes. Improves performance when reads vastly outnumber writes.

type Cache struct {
    mu    sync.RWMutex
    items map[string]string
}

func (c *Cache) Get(key string) (string, bool) {
    c.mu.RLock() // Multiple readers allowed
    defer c.mu.RUnlock()
    val, ok := c.items[key]
    return val, ok
}

func (c *Cache) Set(key, value string) {
    c.mu.Lock() // Exclusive access
    defer c.mu.Unlock()
    c.items[key] = value
}

sync.Once

sync.Once ensures a function executes exactly once, even across goroutines. Used for lazy initialization, singleton patterns. The function runs synchronously—other callers block until it completes.

type DB struct {
    once sync.Once
    conn *sql.DB
}

func (d *DB) Connect() *sql.DB {
    d.once.Do(func() { // Runs exactly once, even with concurrent calls
        var err error
        d.conn, err = sql.Open("postgres", connectionString)
        if err != nil {
            log.Fatal(err)
        }
    })
    return d.conn
}

sync.Cond

sync.Cond enables goroutines to wait for or signal conditions. Use Wait() to release lock and block until Signal() (wake one) or Broadcast() (wake all). Complex to use correctly—often channels are simpler.

type Queue struct {
    cond  *sync.Cond
    items []int
}

func NewQueue() *Queue {
    return &Queue{cond: sync.NewCond(&sync.Mutex{})}
}

func (q *Queue) Put(item int) {
    q.cond.L.Lock()
    defer q.cond.L.Unlock()
    q.items = append(q.items, item)
    q.cond.Signal() // Wake one waiting consumer
}

func (q *Queue) Get() int {
    q.cond.L.Lock()
    defer q.cond.L.Unlock()
    for len(q.items) == 0 {
        q.cond.Wait() // Release lock, wait for signal
    }
    item := q.items[0]
    q.items = q.items[1:]
    return item
}

sync.Pool

sync.Pool caches temporary objects for reuse, reducing GC pressure. Objects may be removed any time without notification. Use for frequently allocated, expensive objects like buffers. Not for connection pools—use dedicated pooling.

var bufferPool = sync.Pool{
    New: func() any {
        return make([]byte, 0, 4096)
    },
}

func process(data []byte) {
    buf := bufferPool.Get().([]byte)
    buf = buf[:0] // Reset length before use (pool doesn't clear objects)

    // Use buffer... append copies data in
    // (a plain copy into a zero-length slice would copy nothing)
    buf = append(buf, data...)

    bufferPool.Put(buf) // Return to pool when done
}

sync.Map

sync.Map is a concurrent map safe for multiple goroutines without locking. Optimized for two cases: (1) write-once, read-many entries, (2) disjoint key sets per goroutine. For other cases, use regular map with RWMutex.

var cache sync.Map

func main() {
    // Store
    cache.Store("key1", "value1")

    // Load
    val, ok := cache.Load("key1")
    fmt.Println(val, ok) // value1 true

    // LoadOrStore - get existing or store new
    actual, loaded := cache.LoadOrStore("key2", "value2")
    fmt.Println(actual, loaded) // value2 false (new entry)

    // Delete
    cache.Delete("key1")

    // Range
    cache.Range(func(key, value any) bool {
        fmt.Println(key, value)
        return true // continue iteration
    })
}

atomic Package

sync/atomic provides low-level atomic memory operations. Faster than mutexes for simple operations. Use for counters, flags, pointers. Operations include Load, Store, Add, Swap, CompareAndSwap.

type Counter struct {
    value int64
}

func (c *Counter) Inc() {
    atomic.AddInt64(&c.value, 1)
}

func (c *Counter) Get() int64 {
    return atomic.LoadInt64(&c.value)
}

func main() {
    var c Counter
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Inc()
        }()
    }
    wg.Wait()
    fmt.Println(c.Get()) // 1000 (always correct)
}

atomic.Value

atomic.Value provides atomic load/store for arbitrary values. Once stored, type must remain consistent. Use for configuration hot-reloading, feature flags, or any read-heavy shared data.

type Config struct {
    Timeout time.Duration
    MaxConn int
}

var config atomic.Value

func init() {
    config.Store(&Config{Timeout: 30 * time.Second, MaxConn: 100})
}

func GetConfig() *Config {
    return config.Load().(*Config) // Type assertion
}

func UpdateConfig(c *Config) {
    config.Store(c) // Atomic replacement
}

// Safe concurrent access
func handler() {
    cfg := GetConfig() // Always gets consistent snapshot
    timeout := cfg.Timeout
}

Compare-and-Swap Operations

CAS atomically compares value to expected, swaps if equal. Returns whether swap happened. Foundation for lock-free algorithms. Use for implementing spinlocks, counters, and state machines.

func main() {
    var value int64 = 10

    // CompareAndSwapInt64(addr, old, new) bool
    swapped := atomic.CompareAndSwapInt64(&value, 10, 20)
    fmt.Println(swapped, value) // true 20

    swapped = atomic.CompareAndSwapInt64(&value, 10, 30)
    fmt.Println(swapped, value) // false 20 (expected 10, was 20)
}

// Lock-free increment with CAS
func increment(addr *int64) {
    for {
        old := atomic.LoadInt64(addr)
        if atomic.CompareAndSwapInt64(addr, old, old+1) {
            return // Success
        }
        // CAS failed, retry with new value
    }
}

Memory Ordering

Go's memory model defines when writes in one goroutine are visible to reads in another. Synchronization primitives (channels, mutexes, atomics) establish happens-before relationships. Without synchronization, compilers/CPUs may reorder operations.

┌─────────────────────────────────────────────────────────────┐
│              HAPPENS-BEFORE RELATIONSHIPS                   │
├─────────────────────────────────────────────────────────────┤
│                                                             │
│  1. Within a goroutine: statement order                     │
│  2. Send on channel → corresponding receive                 │
│  3. Close of channel → receive returning zero value         │
│  4. Unlock of mutex → subsequent lock                       │
│  5. Once.Do(f) call → any once.Do(f) return                 │
│                                                             │
│  ❌ Without sync, no guarantees:                            │
│     Goroutine 1:        Goroutine 2:                        │
│     a = 1               print(b)  // may see 0!             │
│     b = 2               print(a)  // may see 0!             │
│                                                             │
└─────────────────────────────────────────────────────────────┘
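Rule 2 above is what makes channel-based signaling safe. A small sketch (the write/read helpers are illustrative): because the write to data happens before the send, and the send happens before the receive completes, the reader is guaranteed to observe the write once the receive returns:

```go
package main

import "fmt"

var data int // Shared variable, written without a lock

// write stores to data and then signals on ready. The write to data is
// ordered before the send (statement order), and the send happens before
// the corresponding receive completes (channel happens-before rule).
func write(ready chan<- struct{}) {
	data = 42
	ready <- struct{}{}
}

// read waits for the signal; after the receive, the write to data is
// guaranteed to be visible. No mutex or atomic is needed here.
func read() int {
	ready := make(chan struct{})
	go write(ready)
	<-ready
	return data
}

func main() {
	fmt.Println(read()) // 42
}
```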

Data Races

A data race occurs when two goroutines access the same variable concurrently, and at least one access is a write. Races cause undefined behavior—results may vary between runs, be impossible to reproduce, or cause crashes.

// ❌ DATA RACE
var counter int
for i := 0; i < 1000; i++ {
    go func() {
        counter++ // Read-modify-write, not atomic
    }()
}

// ✅ FIXED: with mutex
var mu sync.Mutex
for i := 0; i < 1000; i++ {
    go func() {
        mu.Lock()
        counter++
        mu.Unlock()
    }()
}

// ✅ FIXED: with atomic
var atomicCounter int64
for i := 0; i < 1000; i++ {
    go func() {
        atomic.AddInt64(&atomicCounter, 1)
    }()
}

Race Detector (go run -race)

The race detector instruments code to detect data races at runtime. Adds ~10x slowdown and 5-10x memory overhead. Run tests with -race in CI. Zero race warnings should be the goal.

# Run with race detector
go run -race main.go
go test -race ./...
go build -race -o app

# Output example:
==================
WARNING: DATA RACE
Write at 0x00c00001e0f8 by goroutine 7:
  main.main.func1()
      /path/main.go:12 +0x38

Previous read at 0x00c00001e0f8 by goroutine 6:
  main.main.func1()
      /path/main.go:12 +0x30

Goroutine 7 (running) created at:
  main.main()
      /path/main.go:11 +0x68
==================

CONCURRENCY PATTERNS

Worker Pools

A worker pool runs a fixed number of workers that process jobs from a shared queue. Workers read from a jobs channel, process each job, and optionally write results. Controlling the pool size bounds resource usage and lets you tune throughput.

func workerPool(numWorkers int, jobs <-chan int, results chan<- int) {
    var wg sync.WaitGroup
    for i := 0; i < numWorkers; i++ {
        wg.Add(1)
        go func(id int) {
            defer wg.Done()
            for job := range jobs {
                results <- process(job)
            }
        }(i)
    }
    go func() {
        wg.Wait()
        close(results)
    }()
}

func main() {
    jobs := make(chan int, 100)
    results := make(chan int, 100)

    workerPool(5, jobs, results)

    for i := 0; i < 50; i++ {
        jobs <- i
    }
    close(jobs)

    for r := range results {
        fmt.Println(r)
    }
}

Rate Limiting

Rate limiting controls how often operations can occur. Use time.Ticker for steady rate, token buckets for bursts. Essential for API clients, preventing resource exhaustion, and fair resource sharing.

// Simple rate limiter with ticker
func rateLimited(requests <-chan int, rps int) {
    limiter := time.NewTicker(time.Second / time.Duration(rps))
    defer limiter.Stop()
    for req := range requests {
        <-limiter.C // Wait for next tick
        process(req)
    }
}

// Bursty rate limiter with buffered channel
func burstyLimiter(requests <-chan int, rps, burst int) {
    tokens := make(chan time.Time, burst)

    // Fill initial burst
    for i := 0; i < burst; i++ {
        tokens <- time.Now()
    }

    // Refill at rps rate
    go func() {
        for t := range time.Tick(time.Second / time.Duration(rps)) {
            tokens <- t
        }
    }()

    for req := range requests {
        <-tokens // Take a token
        process(req)
    }
}

Timeout Patterns

Timeouts prevent operations from blocking forever. Use context.WithTimeout, time.After in select, or deadline-aware functions. Always provide escape hatches for blocking operations.

// Using context
func withTimeout(ctx context.Context) error {
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()

    select {
    case result := <-doSlowWork():
        return processResult(result)
    case <-ctx.Done():
        return ctx.Err()
    }
}

// Using time.After
func withTimeAfter(work <-chan int) (int, error) {
    select {
    case result := <-work:
        return result, nil
    case <-time.After(3 * time.Second):
        return 0, errors.New("timeout")
    }
}

Cancellation Patterns

Cancellation signals goroutines to stop work early. Use context or done channels. Check for cancellation at regular intervals and at the start of loop iterations. Propagate cancellation to child operations.

func longOperation(ctx context.Context) error {
    for i := 0; i < 1000; i++ {
        // Check cancellation at start of each iteration
        select {
        case <-ctx.Done():
            return ctx.Err()
        default:
        }

        // Do work chunk
        if err := doWorkChunk(ctx, i); err != nil {
            return err
        }
    }
    return nil
}

// Propagate to child operations
func parentOp(ctx context.Context) error {
    ctx, cancel := context.WithCancel(ctx)
    defer cancel() // Cancels children if we return early

    errCh := make(chan error, 2)
    go func() { errCh <- childOp1(ctx) }()
    go func() { errCh <- childOp2(ctx) }()

    for i := 0; i < 2; i++ {
        if err := <-errCh; err != nil {
            return err // cancel() runs, stops other child
        }
    }
    return nil
}

Error Handling in Concurrent Code

Return errors through channels or use error groups. Don't let errors go unobserved. Consider whether to fail-fast (cancel others on first error) or collect all errors. Always know how each goroutine reports failures.

type Result struct {
    Value int
    Err   error
}

func concurrentWithErrors(items []int) ([]int, error) {
    results := make(chan Result, len(items))

    for _, item := range items {
        go func(v int) {
            val, err := process(v)
            results <- Result{val, err}
        }(item)
    }

    var values []int
    var errs []error
    for range items {
        r := <-results
        if r.Err != nil {
            errs = append(errs, r.Err)
        } else {
            values = append(values, r.Value)
        }
    }

    if len(errs) > 0 {
        return values, fmt.Errorf("failed: %v", errs)
    }
    return values, nil
}

Error Groups (golang.org/x/sync/errgroup)

errgroup manages goroutines with error handling and context cancellation. First error cancels the group's context. Wait() returns the first error. Perfect for multiple tasks where any failure should stop all work.

import "golang.org/x/sync/errgroup"

func fetchAll(ctx context.Context, urls []string) ([]string, error) {
    g, ctx := errgroup.WithContext(ctx)
    results := make([]string, len(urls))

    for i, url := range urls {
        i, url := i, url // Capture for goroutine (unnecessary since Go 1.22)
        g.Go(func() error {
            body, err := fetch(ctx, url)
            if err != nil {
                return err // First error cancels ctx
            }
            results[i] = body
            return nil
        })
    }

    if err := g.Wait(); err != nil {
        return nil, err // Returns first error
    }
    return results, nil
}

Semaphore Pattern

Semaphores limit concurrent access to a resource. Implement with buffered channels—capacity equals max concurrency. Acquire by sending, release by receiving. Useful for limiting connections, file handles, or CPU-intensive tasks.

type Semaphore chan struct{}

func NewSemaphore(max int) Semaphore {
    return make(chan struct{}, max)
}

func (s Semaphore) Acquire() { s <- struct{}{} }
func (s Semaphore) Release() { <-s }

func main() {
    sem := NewSemaphore(3) // Max 3 concurrent
    for i := 0; i < 10; i++ {
        go func(id int) {
            sem.Acquire()
            defer sem.Release()
            fmt.Printf("Worker %d running\n", id)
            time.Sleep(time.Second)
        }(i)
    }
    // Note: real code should wait for the workers (e.g. with a WaitGroup);
    // here main would exit immediately.
}

Or-channel Pattern

Or-channel combines multiple done channels into one that closes when ANY input closes. Useful for waiting on whichever of several events happens first—timeouts, cancellation, or completion.

func or(channels ...<-chan struct{}) <-chan struct{} {
    switch len(channels) {
    case 0:
        return nil
    case 1:
        return channels[0]
    }

    orDone := make(chan struct{})
    go func() {
        defer close(orDone)
        switch len(channels) {
        case 2:
            select {
            case <-channels[0]:
            case <-channels[1]:
            }
        default:
            select {
            case <-channels[0]:
            case <-channels[1]:
            case <-channels[2]:
            case <-or(append(channels[3:], orDone)...):
            }
        }
    }()
    return orDone
}

// Usage
done := or(cancel1, cancel2, timeout)
<-done // Unblocks when any one closes

Or-done-channel Pattern

Or-done wraps a channel read to also respect a done signal. Prevents goroutine leaks when the source channel never closes. Returns values until either the source or done closes.

func orDone(done <-chan struct{}, c <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for {
            select {
            case <-done:
                return
            case v, ok := <-c:
                if !ok {
                    return
                }
                select {
                case out <- v:
                case <-done:
                    return
                }
            }
        }
    }()
    return out
}

// Usage
done := make(chan struct{})
for v := range orDone(done, someChannel) {
    if shouldStop(v) {
        close(done) // Stop reading, even if source has more
        break
    }
    process(v)
}

Tee-channel Pattern

Tee splits one input channel into two output channels. Each output receives every value from the input. Both outputs must be read to prevent blocking—useful for sending to multiple consumers.

func tee(done <-chan struct{}, in <-chan int) (<-chan int, <-chan int) {
    out1, out2 := make(chan int), make(chan int)
    go func() {
        defer close(out1)
        defer close(out2)
        for val := range orDone(done, in) {
            // Use local vars for select case references
            o1, o2 := out1, out2
            for i := 0; i < 2; i++ { // Send to both
                select {
                case <-done:
                    return
                case o1 <- val:
                    o1 = nil // Disable after send
                case o2 <- val:
                    o2 = nil
                }
            }
        }
    }()
    return out1, out2
}

Bridge-channel Pattern

Bridge joins a channel of channels into a single flat output channel. Useful when stages produce channels of results—bridge flattens them into one stream for downstream consumption.

func bridge(done <-chan struct{}, chanStream <-chan <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for {
            var stream <-chan int
            select {
            case maybeStream, ok := <-chanStream:
                if !ok {
                    return
                }
                stream = maybeStream
            case <-done:
                return
            }
            for val := range orDone(done, stream) {
                select {
                case out <- val:
                case <-done:
                    return
                }
            }
        }
    }()
    return out
}

// Usage: flatten [[1,2], [3,4]] into [1,2,3,4]
for v := range bridge(done, chanOfChans) {
    fmt.Println(v)
}

Queuing Patterns

Queuing buffers work between producer and consumer stages. Use buffered channels for simple queues. Consider back-pressure (blocking when full) vs. dropping (non-blocking send with default). Match buffer size to expected burst size.

// Simple buffered queue
queue := make(chan Job, 100)

// With back-pressure
func enqueue(queue chan<- Job, job Job, timeout time.Duration) error {
    select {
    case queue <- job:
        return nil
    case <-time.After(timeout):
        return errors.New("queue full")
    }
}

// Drop on full (non-blocking)
func tryEnqueue(queue chan<- Job, job Job) bool {
    select {
    case queue <- job:
        return true
    default:
        return false // Dropped
    }
}

Bounded Parallelism

Bounded parallelism limits concurrent work to prevent resource exhaustion. Combine worker pools with input channels. Set bounds based on available resources (CPU cores, connections, memory).

func boundedParallel(ctx context.Context, items []int, maxWorkers int) []int {
    input := make(chan int)
    results := make(chan int, len(items))

    // Start bounded workers
    var wg sync.WaitGroup
    for i := 0; i < maxWorkers; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for item := range input {
                select {
                case <-ctx.Done():
                    return
                case results <- process(item):
                }
            }
        }()
    }

    // Feed work
    go func() {
        for _, item := range items {
            input <- item
        }
        close(input)
        wg.Wait()
        close(results)
    }()

    return collect(results)
}

Heartbeat Pattern

Heartbeats signal that a goroutine is still alive and working. Useful for detecting stuck operations, implementing timeouts, and debugging. Send heartbeats at work intervals or periodically.

func worker(done <-chan struct{}) (<-chan struct{}, <-chan int) {
    heartbeat := make(chan struct{}, 1)
    results := make(chan int)

    go func() {
        defer close(heartbeat)
        defer close(results)
        for i := 0; i < 10; i++ {
            select {
            case <-done:
                return
            case heartbeat <- struct{}{}: // Signal alive
            default: // Don't block if nobody listening
            }

            // Do work
            time.Sleep(100 * time.Millisecond)

            select {
            case <-done:
                return
            case results <- i:
            }
        }
    }()
    return heartbeat, results
}

Publish-Subscribe Pattern

Pub-sub broadcasts messages from publishers to multiple subscribers. A broker manages subscriptions and message distribution. Use for event systems, notifications, and decoupled component communication.

type Broker struct {
    mu   sync.RWMutex
    subs map[string][]chan string
}

func NewBroker() *Broker {
    return &Broker{subs: make(map[string][]chan string)}
}

func (b *Broker) Subscribe(topic string) <-chan string {
    ch := make(chan string, 16)
    b.mu.Lock()
    b.subs[topic] = append(b.subs[topic], ch)
    b.mu.Unlock()
    return ch
}

func (b *Broker) Publish(topic, msg string) {
    b.mu.RLock()
    defer b.mu.RUnlock()
    for _, ch := range b.subs[topic] {
        select {
        case ch <- msg:
        default: // Don't block if subscriber slow
        }
    }
}

// Usage
broker := NewBroker()
sub1 := broker.Subscribe("events")
sub2 := broker.Subscribe("events")
broker.Publish("events", "hello")