Go Testing Ecosystem: Unit Tests, Benchmarks, Fuzzing & Mocks
Go treats testing as a first-class citizen. This guide covers the full QA spectrum: from writing idiomatic table-driven unit tests and generating coverage reports to profiling memory allocations and discovering bugs via Fuzzing.
Testing Basics
testing package
The testing package is Go's built-in testing framework - no external dependencies needed. It provides types and functions for writing unit tests, benchmarks, and examples. Import it with import "testing" and the framework handles test discovery, execution, and reporting automatically.
Test function signature
Test functions must start with Test, followed by a capitalized name, and accept exactly one parameter *testing.T. The framework uses reflection to discover and run these functions automatically.
func TestAddition(t *testing.T) {
    result := Add(2, 3)
    if result != 5 {
        t.Error("Expected 5")
    }
}
t.Error, t.Errorf
These methods mark the test as failed but continue execution, allowing multiple failures to be reported in one run. t.Errorf supports printf-style formatting for detailed error messages.
func TestValues(t *testing.T) {
    if got := compute(); got != 10 {
        t.Errorf("got %d, want %d", got, 10) // continues running
    }
    // more checks can run...
}
t.Fatal, t.Fatalf
These methods mark the test as failed and immediately stop the current test function. Use when continuing makes no sense (e.g., nil pointer would cause panic).
func TestConnection(t *testing.T) {
    conn, err := Connect()
    if err != nil {
        t.Fatalf("failed to connect: %v", err) // stops here
    }
    conn.Query() // never reached if Connect failed, so no nil-pointer panic
}
t.Log, t.Logf
These methods output informational messages only when the test fails or when running with go test -v. Useful for debugging without cluttering normal output.
func TestProcess(t *testing.T) {
    t.Logf("Testing with input: %v", input) // shown with -v or on failure
    // test logic...
}
t.Skip
Skips the current test with an optional message - useful for conditional test execution based on environment, OS, or external dependencies. The test is marked as skipped, not failed.
func TestLinuxOnly(t *testing.T) {
    if runtime.GOOS != "linux" {
        t.Skip("Skipping: requires Linux")
    }
    // Linux-specific tests...
}
t.Helper
Marks a function as a test helper, so when failures occur, the stack trace points to the calling test, not the helper function. Essential for clean error reporting in reusable test utilities.
func assertEqual(t *testing.T, got, want int) {
    t.Helper() // error points to caller, not this line
    if got != want {
        t.Errorf("got %d, want %d", got, want)
    }
}
go test command
The primary command for running tests. It compiles test files, runs matching test functions, and reports results. Common flags: -v (verbose), -run (filter), -count (repeat), -timeout (limit).
┌─────────────────────────────────────────────────────────┐
│ go test # run tests in current package │
│ go test ./... # run tests in all subpackages │
│ go test -v # verbose output │
│ go test -run=Login # run tests matching "Login" │
│ go test -count=1 # disable test caching │
└─────────────────────────────────────────────────────────┘
Test file naming (*_test.go)
Test files must end with _test.go - this naming convention tells the Go toolchain to exclude them from production builds while including them during testing. They live alongside the code they test.
mypackage/
├── user.go        # production code
├── user_test.go   # tests for user.go
├── auth.go        # production code
└── auth_test.go   # tests for auth.go
Table-driven tests
The idiomatic Go pattern for testing multiple scenarios. Define test cases as a slice of structs, then loop through them. Reduces code duplication and makes adding new cases trivial.
func TestAdd(t *testing.T) {
    tests := []struct {
        name     string
        a, b     int
        expected int
    }{
        {"positive", 2, 3, 5},
        {"negative", -1, -2, -3},
        {"zero", 0, 0, 0},
    }
    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            if got := Add(tt.a, tt.b); got != tt.expected {
                t.Errorf("Add(%d,%d) = %d, want %d", tt.a, tt.b, got, tt.expected)
            }
        })
    }
}
Test coverage (go test -cover)
Shows the percentage of code executed during tests. Quick way to identify untested code paths. Aim for high coverage but remember: 100% coverage doesn't mean bug-free code.
$ go test -cover
PASS
coverage: 78.5% of statements
ok      mypackage   0.003s
Coverage reports (go test -coverprofile)
Generates a detailed coverage file that can be visualized in browser or terminal. Shows exactly which lines were executed (green) and which were missed (red).
# Generate profile and view in browser
go test -coverprofile=coverage.out
go tool cover -html=coverage.out

# View per-function summary in terminal
go tool cover -func=coverage.out
Advanced Testing
Subtests (t.Run)
Creates nested test cases with their own names, allowing selective execution and better organization. Each subtest gets its own *testing.T and can fail independently. Essential for table-driven tests.
func TestMath(t *testing.T) {
    t.Run("Addition", func(t *testing.T) {
        if Add(1, 2) != 3 {
            t.Error("failed")
        }
    })
    t.Run("Subtraction", func(t *testing.T) {
        if Sub(5, 3) != 2 {
            t.Error("failed")
        }
    })
}
// Run a specific subtest: go test -run=TestMath/Addition
Parallel tests (t.Parallel)
Marks a test to run concurrently with other parallel tests. Call at the beginning of test/subtest. Speeds up test suites but requires tests to be independent (no shared mutable state).
func TestParallel(t *testing.T) {
    tests := []struct{ name string }{{"A"}, {"B"}, {"C"}}
    for _, tt := range tests {
        tt := tt // capture range variable (no longer needed in Go 1.22+)
        t.Run(tt.name, func(t *testing.T) {
            t.Parallel() // runs concurrently
            // test logic...
        })
    }
}
Test setup and teardown
Go uses explicit setup/teardown via deferred cleanup or t.Cleanup() (Go 1.14+). No magic beforeEach/afterEach - cleanup functions run after test completes, even on failure.
func TestWithCleanup(t *testing.T) {
    // Setup
    db := createTestDB()
    t.Cleanup(func() {
        db.Close() // runs after test completes
    })

    // Test logic using db...
}
TestMain function
Provides control over test execution for a package - it runs instead of the tests directly and wraps them. Use for global setup/teardown like starting databases or setting environment variables. Must call m.Run(); pass its result to os.Exit (since Go 1.15 the framework calls os.Exit automatically if TestMain returns, but the explicit call remains idiomatic).
func TestMain(m *testing.M) {
    // Global setup
    db := setupTestDatabase()

    code := m.Run() // runs all tests

    // Global teardown
    db.Cleanup()
    os.Exit(code)
}
Test fixtures
Static test data files stored in a testdata directory (special name ignored by Go tooling). Used for loading sample inputs, expected outputs, or configuration needed during tests.
mypackage/
├── parser.go
├── parser_test.go
└── testdata/          # ignored by go build
    ├── valid.json
    ├── invalid.json
    └── expected.txt

// In test: data, _ := os.ReadFile("testdata/valid.json")
Golden files
A testing pattern where expected output is stored in files. Tests compare actual output against these "golden" files. Often include an -update flag to regenerate them when output intentionally changes.
var update = flag.Bool("update", false, "update golden files")

func TestOutput(t *testing.T) {
    got := GenerateOutput()
    golden := filepath.Join("testdata", "output.golden")
    if *update {
        os.WriteFile(golden, got, 0644)
    }
    want, _ := os.ReadFile(golden)
    if !bytes.Equal(got, want) {
        t.Errorf("output mismatch")
    }
}
// Update: go test -update
Testing internal packages
Use package foo_test for black-box testing (only exported API) or package foo for white-box testing (access to unexported functions). For accessing internals from _test package, create an export_test.go file.
// export_test.go (in package foo, not foo_test)
package foo

// Export unexported identifier for testing
var InternalFunc = internalFunc

// foo_test.go
package foo_test

func TestInternal(t *testing.T) {
    foo.InternalFunc() // now accessible
}
Build tags for tests
Conditional compilation for tests using build constraints. Useful for integration tests, OS-specific tests, or tests requiring special environments. Place at file top before package declaration.
//go:build integration
// +build integration

package mypackage

func TestDatabaseIntegration(t *testing.T) {
    // Only runs with: go test -tags=integration
}
Short tests (-short flag)
Convention for marking and skipping slow tests during quick iterations. Check testing.Short() to skip long-running tests. Developers run -short locally, CI runs full suite.
func TestSlowOperation(t *testing.T) {
    if testing.Short() {
        t.Skip("skipping slow test in short mode")
    }
    // Long-running test...
}
// Quick run: go test -short
// Full run:  go test
Test caching
Go caches successful test results based on inputs (source code, env, flags). Cached results show (cached) in output. Disable with -count=1 when testing external dependencies.
$ go test
ok  mypackage 0.005s

$ go test
ok  mypackage (cached)   <- result served from cache, no re-run

$ go test -count=1       <- forces re-run
ok  mypackage 0.005s
Benchmarking
Benchmark function signature
Benchmark functions start with Benchmark, take *testing.B, and run the code b.N times. The framework automatically determines b.N to get statistically significant results.
func BenchmarkConcat(b *testing.B) {
    for i := 0; i < b.N; i++ {
        _ = "hello" + " " + "world"
    }
}
// Run: go test -bench=.
b.N loop
The benchmark framework adjusts b.N automatically (starting small, increasing until stable timing). Your code must loop exactly b.N times - the framework measures total time and divides by b.N.
func BenchmarkSort(b *testing.B) {
    for i := 0; i < b.N; i++ { // b.N: 1, 100, 10000, ...
        data := []int{5, 2, 8, 1, 9}
        sort.Ints(data)
    }
}
// Output: BenchmarkSort-8   5000000   234 ns/op
b.ResetTimer
Resets the benchmark timer, excluding setup time from measurements. Call after expensive initialization that shouldn't count toward the benchmark result.
func BenchmarkProcess(b *testing.B) {
    // Expensive setup
    data := loadLargeDataset()
    b.ResetTimer() // start timing from here

    for i := 0; i < b.N; i++ {
        process(data)
    }
}
b.StopTimer, b.StartTimer
Fine-grained control to pause/resume timing within the benchmark loop. Useful when per-iteration setup shouldn't be measured, but use sparingly as it adds overhead.
func BenchmarkWithSetup(b *testing.B) {
    for i := 0; i < b.N; i++ {
        b.StopTimer()
        data := generateTestData() // not measured
        b.StartTimer()
        process(data) // measured
    }
}
Benchmark flags
Control benchmark execution via command-line flags. Essential ones: -bench (pattern), -benchtime (duration/count), -benchmem (memory stats), -cpu (GOMAXPROCS values).
┌────────────────────────────────────────────────────────────┐
│ go test -bench=.                   # run all benchmarks    │
│ go test -bench=Sort                # match pattern         │
│ go test -bench=. -benchtime=5s     # run for 5 seconds     │
│ go test -bench=. -benchtime=1000x  # exactly 1000 iters    │
│ go test -bench=. -benchmem         # include memory stats  │
│ go test -bench=. -cpu=1,2,4        # test different CPUs   │
└────────────────────────────────────────────────────────────┘
Benchmark comparison
Compare before/after performance by saving results to files. Use the benchstat tool for statistical analysis including confidence intervals and significance testing.
# Before optimization
git checkout main
go test -bench=. -count=10 > old.txt

# After optimization
git checkout feature
go test -bench=. -count=10 > new.txt

# Compare
benchstat old.txt new.txt
Memory allocation benchmarks (b.ReportAllocs)
Call b.ReportAllocs() or use -benchmem flag to include allocation statistics. Shows allocations per operation - critical for optimizing hot paths and reducing GC pressure.
func BenchmarkBuffer(b *testing.B) {
    b.ReportAllocs()
    for i := 0; i < b.N; i++ {
        var buf bytes.Buffer
        buf.WriteString("hello")
    }
}
// Output: 64 B/op    1 allocs/op
benchstat tool
Official tool for statistically comparing benchmark results. Shows percentage change, confidence intervals, and indicates whether changes are statistically significant.
$ benchstat old.txt new.txt
name old time/op new time/op delta
Parse-8 2.50µs ±2% 1.80µs ±1% -28.00% (p=0.000 n=10+10)
name old alloc/op new alloc/op delta
Parse-8 4.00kB ±0% 2.00kB ±0% -50.00% (p=0.000 n=10+10)
CPU profiling in benchmarks
Generate CPU profiles during benchmarks for detailed analysis with pprof. Identifies which functions consume the most CPU time.
# Generate profile
go test -bench=. -cpuprofile=cpu.out

# Analyze interactively
go tool pprof cpu.out
(pprof) top10
(pprof) web   # visualize in browser

# Or serve directly in the browser
go tool pprof -http=:8080 cpu.out
Benchmark optimization
Pattern for comparing implementations. Create multiple benchmark functions or use sub-benchmarks. Always verify correctness before optimizing - fast but wrong is useless.
func BenchmarkConcat(b *testing.B) {
    // Use variables: literal "a" + "b" + "c" would be constant-folded
    // at compile time, so the benchmark would measure nothing.
    parts := []string{"a", "b", "c"}
    b.Run("Plus", func(b *testing.B) {
        for i := 0; i < b.N; i++ {
            _ = parts[0] + parts[1] + parts[2]
        }
    })
    b.Run("Builder", func(b *testing.B) {
        for i := 0; i < b.N; i++ {
            var sb strings.Builder
            sb.WriteString(parts[0])
            sb.WriteString(parts[1])
            sb.WriteString(parts[2])
            _ = sb.String()
        }
    })
}
Mocking and Stubbing
Interface-based mocking
Go's primary mocking strategy: define interfaces for dependencies, then create mock implementations. The type system ensures mocks satisfy the interface contract.
type UserStore interface {
    GetUser(id int) (*User, error)
}

type MockUserStore struct {
    users map[int]*User
}

func (m *MockUserStore) GetUser(id int) (*User, error) {
    return m.users[id], nil
}

func TestHandler(t *testing.T) {
    mock := &MockUserStore{users: map[int]*User{1: {Name: "Alice"}}}
    handler := NewHandler(mock) // inject mock
    // test handler...
}
Manual mocks
Hand-written mock implementations with configurable behavior. Simple but requires maintenance. Good for simple interfaces; consider generators for complex ones.
type MockMailer struct {
    SendFunc func(to, body string) error
    Calls    []string
}

func (m *MockMailer) Send(to, body string) error {
    m.Calls = append(m.Calls, to)
    if m.SendFunc != nil {
        return m.SendFunc(to, body)
    }
    return nil
}
Mock generation (mockgen, gomock)
gomock, originally developed at Google and now maintained by Uber (go.uber.org/mock), is the de facto standard mock framework. mockgen generates mock implementations from interfaces. Provides expectation setting, call verification, and argument matching.
# Install
go install go.uber.org/mock/mockgen@latest

# Generate mock from interface
mockgen -source=user.go -destination=mock_user.go -package=mocks
func TestWithGoMock(t *testing.T) {
    ctrl := gomock.NewController(t)
    mock := mocks.NewMockUserStore(ctrl)
    mock.EXPECT().GetUser(1).Return(&User{Name: "Bob"}, nil)

    result, _ := mock.GetUser(1)
    // assertions...
}
testify/mock
Popular third-party mocking framework from Stretchr. More flexible than gomock with built-in assertions. Mocks are struct-based with method recording.
type MockDB struct {
    mock.Mock
}

func (m *MockDB) Get(id int) string {
    args := m.Called(id)
    return args.String(0)
}

func TestService(t *testing.T) {
    m := new(MockDB)
    m.On("Get", 1).Return("data")

    result := m.Get(1)

    m.AssertExpectations(t)
    assert.Equal(t, "data", result)
}
HTTP mocking (httptest package)
Standard library package for testing HTTP clients and servers. Create fake servers or record responses without actual network calls. Essential for testing HTTP interactions.
┌──────────────────────────────────────────────────┐
│                 httptest Package                 │
├──────────────────────────────────────────────────┤
│ httptest.NewServer   → fake HTTP server          │
│ httptest.NewRecorder → capture responses         │
│ httptest.NewRequest  → create test requests      │
└──────────────────────────────────────────────────┘
httptest.Server
Creates a real HTTP server on localhost for testing clients. Use NewServer for HTTP or NewTLSServer for HTTPS. Always call Close() when done.
func TestHTTPClient(t *testing.T) {
    server := httptest.NewServer(http.HandlerFunc(
        func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(200)
            w.Write([]byte(`{"status":"ok"}`))
        }))
    defer server.Close()

    // Use server.URL as the base URL for the client under test
    resp, _ := http.Get(server.URL + "/api")
    // assert response...
}
httptest.ResponseRecorder
Captures HTTP responses without a network. Implements http.ResponseWriter so you can pass it directly to handlers. Access Code, Body, and Header() after handler returns.
func TestHandler(t *testing.T) {
    req := httptest.NewRequest("GET", "/users/1", nil)
    rec := httptest.NewRecorder()

    myHandler(rec, req)

    if rec.Code != 200 {
        t.Errorf("status = %d, want 200", rec.Code)
    }
    if !strings.Contains(rec.Body.String(), "user") {
        t.Error("body missing user data")
    }
}
Dependency injection for testing
Design pattern where dependencies are passed in rather than created internally. Makes code testable by allowing mock injection. Prefer constructor injection over global variables.
// ❌ Hard to test - creates own dependency
type Service struct{}

func (s *Service) Process() {
    db := sql.Open(...) // can't mock
}

// ✅ Testable - dependency injected
type Service struct {
    store Store // interface
}

func NewService(s Store) *Service {
    return &Service{store: s}
}

// In tests: NewService(mockStore)
Fuzzing (Go 1.18+)
Fuzz testing basics
Automated testing that generates random inputs to find edge cases, crashes, and bugs humans wouldn't think of. Go's fuzzer mutates seed inputs and tracks code coverage to explore execution paths.
┌─────────────────────────────────────────────────────────────┐
│                        FUZZING FLOW                         │
├─────────────────────────────────────────────────────────────┤
│ Seed Corpus → Mutation Engine → Target Function             │
│      ↑                ↓                ↓                    │
│      └──── Coverage Feedback ←── Crash/New Path?            │
└─────────────────────────────────────────────────────────────┘
Fuzz function signature
Fuzz functions start with Fuzz, take *testing.F, add seed corpus, then call f.Fuzz with a function that receives *testing.T and fuzzed arguments. Supports string, []byte, int, bool, float, etc.
func FuzzParseJSON(f *testing.F) {
    // Add seed corpus
    f.Add(`{"name":"test"}`)
    f.Add(`[]`)

    // Fuzz target
    f.Fuzz(func(t *testing.T, data string) {
        var v interface{}
        json.Unmarshal([]byte(data), &v)
        // If this panics, the fuzzer found a bug!
    })
}
f.Add (seed corpus)
Provides initial inputs for the fuzzer to mutate. Good seeds cover known edge cases and typical inputs. Types must match f.Fuzz function parameters exactly.
func FuzzURL(f *testing.F) {
    f.Add("https://example.com")
    f.Add("http://localhost:8080/path?q=1")
    f.Add("")                 // empty
    f.Add("not-a-url")        // invalid
    f.Add("http://[::1]:80/") // IPv6
    f.Fuzz(func(t *testing.T, input string) {
        url.Parse(input)
    })
}
f.Fuzz
The core fuzzing function - receives generated inputs and tests them. Must be the last call in the fuzz function. The fuzzer runs this repeatedly with mutated inputs until stopped or crash found.
func FuzzDecode(f *testing.F) {
    f.Add([]byte{0x00, 0x01, 0x02})
    f.Fuzz(func(t *testing.T, data []byte) {
        decoded, err := Decode(data)
        if err != nil {
            return // invalid input is fine
        }
        // Check roundtrip
        if !bytes.Equal(Encode(decoded), data) {
            t.Error("roundtrip failed")
        }
    })
}
Corpus directory
Fuzzer stores interesting inputs in testdata/fuzz/<FuzzTestName>/. Interesting = new code coverage. Commit this directory to preserve found edge cases as regression tests.
mypackage/
├── parser.go
├── parser_test.go
└── testdata/
└── fuzz/
└── FuzzParse/
├── 3a4b5c... # auto-generated
├── 7f8e9d... # found edge case
└── corpus.txt # manually added
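Seed files in the corpus directory use a simple text format: a `go test fuzz v1` version header followed by one Go-syntax value per `f.Fuzz` parameter. A manually added seed for a hypothetical FuzzParse target taking a single string could be written like this (the file name and seed value are illustrative; any file name works):

```shell
# Create a manual seed in the Go corpus file format
mkdir -p testdata/fuzz/FuzzParse
cat > testdata/fuzz/FuzzParse/manual_seed <<'EOF'
go test fuzz v1
string("{\"name\":\"test\"}")
EOF

# Inspect it
cat testdata/fuzz/FuzzParse/manual_seed
```

`go test` (with or without -fuzz) picks up every file in this directory, so hand-written seeds and fuzzer-discovered inputs are replayed the same way.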
go test -fuzz
Runs the fuzzer continuously until interrupted or a crash is found. -fuzz takes a regular expression, but it must match exactly one fuzz target in the package. Set -fuzztime to limit duration or iteration count.
# Run fuzzer for a specific target
go test -fuzz=FuzzParse

# Run for 30 seconds
go test -fuzz=FuzzParse -fuzztime=30s

# Run for 10000 iterations
go test -fuzz=FuzzParse -fuzztime=10000x
Fuzzing best practices
Keep fuzz targets fast and focused. Test one thing per fuzz function. Avoid network/disk I/O. Use t.Skip for known-invalid inputs. Compare against reference implementations when possible.
func FuzzJSON(f *testing.F) {
    f.Add(`{}`)
    f.Fuzz(func(t *testing.T, data string) {
        // Fast: no I/O. Focused: just parsing.
        // Compare implementations:
        var std, custom interface{}
        errStd := json.Unmarshal([]byte(data), &std)
        errCustom := CustomUnmarshal([]byte(data), &custom)
        if (errStd == nil) != (errCustom == nil) {
            t.Errorf("behavior mismatch")
        }
    })
}
Crash inputs
When fuzzer finds a crash, it writes the input to testdata/fuzz/<Name>/. These become regression tests - go test (without -fuzz) runs them. Fix the bug, keep the input.
$ go test -fuzz=FuzzParse
--- FAIL: FuzzParse (0.5s)
    Failing input written to testdata/fuzz/FuzzParse/8a3b...

$ cat testdata/fuzz/FuzzParse/8a3b...
go test fuzz v1
string("{{{{")

# After fixing the bug, this input becomes a regression test
$ go test   # runs corpus including crash input
PASS
Example Tests
Example function naming
Example functions serve as documentation AND tests. Must start with Example, followed by optional function/type/method name. Appear in go doc output and pkg.go.dev.
func Example() { // Package-level example
    fmt.Println("Hello")
}

func ExampleUser() { // Example for User type
    u := NewUser("Alice")
    _ = u
}

func ExampleUser_Name() { // Example for User.Name method
    u := User{name: "Bob"}
    fmt.Println(u.Name())
}

func Example_suffix() { // Additional package example
    fmt.Println("Another example")
}
Output comments
Magic comment that turns examples into tests. The test framework compares actual stdout with the // Output: comment. If they match, example passes.
func ExampleReverse() {
    fmt.Println(Reverse("hello"))
    fmt.Println(Reverse("world"))
    // Output:
    // olleh
    // dlrow
}
// This is both documentation AND an executable test!
Unordered output
Use // Unordered output: when output order is non-deterministic (maps, goroutines). Framework checks that all lines appear, regardless of order.
func ExamplePrintMap() {
    m := map[string]int{"a": 1, "b": 2, "c": 3}
    for k, v := range m {
        fmt.Printf("%s=%d\n", k, v)
    }
    // Unordered output:
    // a=1
    // b=2
    // c=3
}
Examples in documentation
Examples appear in generated documentation (go doc, pkg.go.dev) as runnable code. They're the best way to show how to use your API - tested documentation that can't go stale.
// Package strings provides string utilities.
package strings

// Reverse returns the reverse of s.
//
// Example usage appears in documentation automatically.
func Reverse(s string) string {
    // ...
}

func ExampleReverse() {
    fmt.Println(Reverse("hello"))
    // Output: olleh
}
┌─────────────────────────────────────────────────────────┐
│ $ go doc strings.Reverse                                │
│                                                         │
│ func Reverse(s string) string                           │
│     Reverse returns the reverse of s.                   │
│                                                         │
│ Example:                                                │
│     fmt.Println(Reverse("hello"))                       │
│     // Output: olleh                                    │
└─────────────────────────────────────────────────────────┘