48 min read

GoLang Data Layer: SQL, NoSQL, ORMs & Distributed Messaging

The complete guide to persistence and async communication. We navigate the 'ORM vs Raw SQL' debate, explore type-safe database access with sqlc, and design event-driven architectures using RabbitMQ and Kafka.

Database Drivers

database/sql package

The database/sql package is Go's standard library interface for SQL databases, providing a generic, driver-agnostic API for database operations. It handles connection pooling, prepared statements, and transactions automatically while requiring specific drivers (like pq or mysql) to be imported for actual database connectivity.

import (
    "database/sql"

    _ "github.com/lib/pq" // Driver registers itself via init()
)

sql.DB

sql.DB is not a single database connection but a connection pool manager that handles opening/closing connections, managing idle connections, and is safe for concurrent use across goroutines. You should create one sql.DB instance per database and share it throughout your application.

db, err := sql.Open("postgres", connStr)
if err != nil {
    log.Fatal(err)
}
defer db.Close()

// Configure pool
db.SetMaxOpenConns(25)
db.SetMaxIdleConns(5)
db.SetConnMaxLifetime(5 * time.Minute)

Connection Pooling

Go's database/sql automatically manages a pool of connections, reusing idle connections and creating new ones as needed up to configured limits. This eliminates the overhead of establishing connections for each query and handles connection lifecycle automatically.

┌───────────────────────────────────────────────┐
│                    sql.DB                     │
│   ┌───────────────────────────────────────┐   │
│   │            Connection Pool            │   │
│   │   ┌─────┐ ┌─────┐ ┌─────┐ ┌─────┐     │   │
│   │   │Conn1│ │Conn2│ │Conn3│ │Conn4│ ... │   │
│   │   │idle │ │busy │ │idle │ │busy │     │   │
│   │   └─────┘ └─────┘ └─────┘ └─────┘     │   │
│   └───────────────────────────────────────┘   │
│     MaxOpenConns: 25  |  MaxIdleConns: 5      │
└───────────────────────────────────────────────┘
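To check whether the configured limits match real usage, the pool exposes runtime counters via db.Stats(). A minimal monitoring sketch (the monitorPool helper, the 30-second interval, and the log format are illustrative choices, not part of the standard library):

// Periodically log pool statistics; useful when tuning MaxOpenConns/MaxIdleConns.
func monitorPool(ctx context.Context, db *sql.DB) {
    ticker := time.NewTicker(30 * time.Second)
    defer ticker.Stop()
    for {
        select {
        case <-ctx.Done():
            return
        case <-ticker.C:
            s := db.Stats()
            log.Printf("open=%d inUse=%d idle=%d waitCount=%d waitDuration=%s",
                s.OpenConnections, s.InUse, s.Idle, s.WaitCount, s.WaitDuration)
        }
    }
}

A consistently high WaitCount suggests the pool is too small for the workload (or that queries hold connections too long).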

sql.Open

sql.Open creates a database handle but doesn't actually establish a connection—it validates the driver name and DSN format only. The actual connection is established lazily on first use, which is why you should always call Ping() afterward to verify connectivity.

// sql.Open doesn't connect - it only validates arguments
db, err := sql.Open("postgres", "host=localhost user=app dbname=mydb sslmode=disable")
if err != nil {
    log.Fatal(err) // Driver not found or invalid DSN format
}

// This actually connects
if err := db.Ping(); err != nil {
    log.Fatal(err) // Connection failed
}

Ping method

Ping() and PingContext() verify that the database connection is alive and reachable, establishing a connection if none exists in the pool. Use it after sql.Open() to validate connectivity and in health checks for monitoring.

func healthCheck(db *sql.DB) http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        ctx, cancel := context.WithTimeout(r.Context(), 1*time.Second)
        defer cancel()
        if err := db.PingContext(ctx); err != nil {
            http.Error(w, "Database unreachable", http.StatusServiceUnavailable)
            return
        }
        w.Write([]byte("OK"))
    }
}

Query vs QueryRow vs QueryContext

Query returns multiple rows (*sql.Rows) that must be iterated and closed; QueryRow returns a single row and defers errors until Scan; QueryContext variants accept a context for cancellation and timeouts. Always use Context variants in production for proper request lifecycle management.

// Query - multiple rows (MUST close rows)
rows, err := db.QueryContext(ctx, "SELECT id, name FROM users WHERE active = $1", true)
if err != nil {
    return err
}
defer rows.Close()
for rows.Next() {
    var id int
    var name string
    rows.Scan(&id, &name)
}

// QueryRow - single row (errors deferred to Scan)
var name string
err := db.QueryRowContext(ctx, "SELECT name FROM users WHERE id = $1", 1).Scan(&name)
if err == sql.ErrNoRows { /* not found */ }

Exec vs ExecContext

Exec and ExecContext execute statements that don't return rows (INSERT, UPDATE, DELETE, DDL) and return a sql.Result with affected rows and last insert ID. Always prefer ExecContext to support cancellation and timeouts.

result, err := db.ExecContext(ctx, "UPDATE users SET last_login = $1 WHERE id = $2", time.Now(), userID)
if err != nil {
    return err
}
rowsAffected, _ := result.RowsAffected()
fmt.Printf("Updated %d rows\n", rowsAffected)

// For inserts (driver-dependent)
lastID, _ := result.LastInsertId() // Works with MySQL, not PostgreSQL

Prepared Statements

Prepared statements (sql.Stmt) parse SQL once and execute multiple times with different parameters, improving performance for repeated queries and protecting against SQL injection. They're connection-bound internally but database/sql handles re-preparation on different connections transparently.

stmt, err := db.PrepareContext(ctx, "INSERT INTO events (user_id, action) VALUES ($1, $2)")
if err != nil {
    return err
}
defer stmt.Close()

// Execute multiple times efficiently
for _, event := range events {
    _, err := stmt.ExecContext(ctx, event.UserID, event.Action)
    if err != nil {
        return err
    }
}

Transactions (sql.Tx)

sql.Tx represents a database transaction that groups multiple operations into an atomic unit—either all succeed (Commit) or all fail (Rollback). All queries within a transaction use the same connection, and you should use defer with Rollback to ensure cleanup on errors.

func transferFunds(ctx context.Context, db *sql.DB, from, to int, amount float64) error {
    tx, err := db.BeginTx(ctx, nil)
    if err != nil {
        return err
    }
    defer tx.Rollback() // No-op if committed

    _, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance - $1 WHERE id = $2", amount, from)
    if err != nil {
        return err
    }
    _, err = tx.ExecContext(ctx, "UPDATE accounts SET balance = balance + $1 WHERE id = $2", amount, to)
    if err != nil {
        return err
    }
    return tx.Commit()
}

Transaction Isolation Levels

Go supports standard SQL isolation levels via sql.TxOptions, controlling visibility of concurrent transactions' changes. Higher isolation prevents more anomalies but reduces concurrency; choose based on your consistency requirements.

tx, err := db.BeginTx(ctx, &sql.TxOptions{
    Isolation: sql.LevelSerializable, // Strongest isolation
    ReadOnly:  false,
})

/*
┌──────────────────────┬─────────────┬────────────────┬───────────────┐
│ Isolation Level      │ Dirty Reads │ Non-Repeatable │ Phantom Reads │
├──────────────────────┼─────────────┼────────────────┼───────────────┤
│ LevelReadUncommitted │ Yes         │ Yes            │ Yes           │
│ LevelReadCommitted   │ No          │ Yes            │ Yes           │
│ LevelRepeatableRead  │ No          │ No             │ Yes           │
│ LevelSerializable    │ No          │ No             │ No            │
└──────────────────────┴─────────────┴────────────────┴───────────────┘
*/

Scan method

Scan copies column values from a row into Go variables, requiring pointer arguments in the exact order of SELECT columns. It performs type conversion automatically but fails if types are incompatible or column count mismatches.

rows, _ := db.QueryContext(ctx, "SELECT id, name, email, age FROM users")
defer rows.Close()
for rows.Next() {
    var (
        id    int64
        name  string
        email string
        age   int
    )
    // Order must match SELECT columns exactly
    if err := rows.Scan(&id, &name, &email, &age); err != nil {
        return err
    }
    fmt.Printf("User: %s (%s)\n", name, email)
}
// Always check for iteration errors
if err := rows.Err(); err != nil {
    return err
}

Null types (sql.NullString, sql.NullInt64, etc.)

Go's zero values differ from SQL NULL, so database/sql provides nullable wrapper types that track both the value and validity. Use these when columns allow NULL, or consider sql.Null[T] (Go 1.22+) for generic nullable types.

var user struct {
    ID       int64
    Name     string
    Nickname sql.NullString // Nullable
    Age      sql.NullInt64  // Nullable
}
err := row.Scan(&user.ID, &user.Name, &user.Nickname, &user.Age)
if user.Nickname.Valid {
    fmt.Println("Nickname:", user.Nickname.String)
} else {
    fmt.Println("No nickname set")
}

// Go 1.22+ generic version
var score sql.Null[float64]

Context with database operations

Context integration allows cancellation, timeouts, and deadline propagation for all database operations, essential for preventing resource leaks when requests are cancelled. Always use *Context methods in HTTP handlers and pass the request context down.

func GetUser(ctx context.Context, db *sql.DB, id int) (*User, error) {
    // Inherit timeout from parent (e.g., HTTP request)
    ctx, cancel := context.WithTimeout(ctx, 3*time.Second)
    defer cancel()

    var user User
    err := db.QueryRowContext(ctx, "SELECT * FROM users WHERE id = $1", id).
        Scan(&user.ID, &user.Name, &user.Email)
    if errors.Is(ctx.Err(), context.DeadlineExceeded) {
        return nil, fmt.Errorf("database query timed out")
    }
    return &user, err
}

SQL Databases

PostgreSQL driver (lib/pq, pgx)

lib/pq is the mature, pure-Go PostgreSQL driver, while pgx is a newer, higher-performance alternative with additional features like connection pooling, COPY protocol, and native type support. For new projects, pgx is recommended; it also provides a database/sql compatible interface.

// lib/pq (standard database/sql)
import _ "github.com/lib/pq"

db, _ := sql.Open("postgres", "postgres://user:pass@localhost/db?sslmode=disable")

// pgx native (more features, better performance)
import "github.com/jackc/pgx/v5/pgxpool"

pool, _ := pgxpool.New(ctx, "postgres://user:pass@localhost/db")
defer pool.Close()
var name string
pool.QueryRow(ctx, "SELECT name FROM users WHERE id = $1", 1).Scan(&name)

// pgx with database/sql compatibility
import _ "github.com/jackc/pgx/v5/stdlib"

db, _ := sql.Open("pgx", connString)

MySQL driver (go-sql-driver/mysql)

go-sql-driver/mysql is the de facto standard MySQL driver for Go, supporting all MySQL features including prepared statements, transactions, TLS, and custom types. It uses ? placeholders instead of $1 and exposes connection parameters through the DSN.

import (
    "database/sql"

    _ "github.com/go-sql-driver/mysql"
)

// DSN format: user:password@protocol(address)/dbname?param=value
db, err := sql.Open("mysql", "user:password@tcp(127.0.0.1:3306)/mydb?parseTime=true")
if err != nil {
    log.Fatal(err)
}

// MySQL uses ? placeholders
rows, err := db.Query("SELECT * FROM users WHERE status = ? AND age > ?", "active", 18)

// Important params: parseTime=true (converts DATE/DATETIME to time.Time)
//                   charset=utf8mb4
//                   loc=Local (timezone)

SQLite driver (mattn/go-sqlite3)

mattn/go-sqlite3 is a CGO-based SQLite driver, requiring a C compiler for installation but providing full SQLite functionality including in-memory databases and file-based storage. Great for testing, embedded applications, and prototyping.

import (
    "database/sql"

    _ "github.com/mattn/go-sqlite3"
)

// File-based database
db, _ := sql.Open("sqlite3", "./myapp.db")

// In-memory database (perfect for testing)
db, _ := sql.Open("sqlite3", ":memory:")

// With options
db, _ := sql.Open("sqlite3", "file:mydb.db?cache=shared&mode=rwc")

// Pure Go alternative (no CGO required)
import _ "modernc.org/sqlite"

db, _ := sql.Open("sqlite", "./myapp.db")

Microsoft SQL Server driver

The microsoft/go-mssqldb driver provides full SQL Server support including Windows authentication, TLS, and Azure SQL compatibility. It's the official Microsoft-maintained driver and uses @p1, @p2 style parameter placeholders.

import (
    "database/sql"

    _ "github.com/microsoft/go-mssqldb"
)

// SQL Server connection string
connString := "sqlserver://user:password@localhost:1433?database=mydb"
// Or ADO-style
connString = "server=localhost;user id=sa;password=Pass123;database=mydb"

db, _ := sql.Open("sqlserver", connString)

// Uses @p1, @p2 placeholders (or @named)
db.Query("SELECT * FROM users WHERE id = @p1 AND status = @p2", 1, "active")

Connection string format

Connection string (DSN) formats vary by driver but typically include host, port, user, password, and database name, plus optional parameters. Use environment variables for credentials and consider structured builders for complex configurations.

┌────────────┬──────────────────────────────────────────────────────────────────┐
│ PostgreSQL │ postgres://user:pass@host:5432/dbname?sslmode=disable            │
│ MySQL      │ user:pass@tcp(host:3306)/dbname?parseTime=true&charset=utf8mb4   │
│ SQL Server │ sqlserver://user:pass@host:1433?database=dbname&encrypt=disable  │
│ SQLite     │ file:path/to/db.sqlite?cache=shared&mode=rwc                     │
└────────────┴──────────────────────────────────────────────────────────────────┘
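In practice the DSN is usually assembled from environment variables rather than hard-coded. A small sketch for PostgreSQL (the DB_* variable names and sslmode value are arbitrary choices for illustration):

import (
    "fmt"
    "net/url"
    "os"
)

// Build a PostgreSQL DSN from environment variables.
func postgresDSN() string {
    user := os.Getenv("DB_USER")
    pass := url.QueryEscape(os.Getenv("DB_PASSWORD")) // escape special characters in the password
    host := os.Getenv("DB_HOST")
    name := os.Getenv("DB_NAME")
    return fmt.Sprintf("postgres://%s:%s@%s:5432/%s?sslmode=require", user, pass, host, name)
}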

Database migrations

Migrations manage database schema changes as versioned files, enabling reproducible deployments and rollbacks. Popular Go tools include golang-migrate/migrate, goose, and atlas, supporting both SQL files and Go-based migrations.

// Using golang-migrate/migrate
import "github.com/golang-migrate/migrate/v4"

m, _ := migrate.New(
    "file://migrations", // Migration files location
    "postgres://localhost/mydb")

m.Up()       // Apply all pending migrations
m.Down()     // Roll back all applied migrations
m.Steps(2)   // Apply N steps (negative N rolls back)
m.Migrate(5) // Migrate to a specific version

// Migration file naming: 000001_create_users.up.sql
//                        000001_create_users.down.sql
-- 000001_create_users.up.sql
CREATE TABLE users (
    id SERIAL PRIMARY KEY,
    email VARCHAR(255) UNIQUE NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);

-- 000001_create_users.down.sql
DROP TABLE users;

Query builders

Query builders provide programmatic SQL construction with type safety and protection against SQL injection, sitting between raw SQL and full ORMs. Popular options include squirrel, goqu, and sqlf.

import sq "github.com/Masterminds/squirrel"

// SELECT with conditions
sql, args, _ := sq.Select("id", "name", "email").
    From("users").
    Where(sq.Eq{"status": "active"}).
    Where(sq.Gt{"age": 18}).
    OrderBy("name ASC").
    Limit(10).
    PlaceholderFormat(sq.Dollar). // $1, $2 for PostgreSQL
    ToSql()
// Result: SELECT id, name, email FROM users WHERE status = $1 AND age > $2 ORDER BY name ASC LIMIT 10
// args:   ["active", 18]

// INSERT
sq.Insert("users").Columns("name", "email").Values("John", "john@example.com").ToSql()

ORM Libraries

GORM

GORM is Go's most popular ORM, providing ActiveRecord-style database operations with auto-migrations, associations, hooks, and query building. It abstracts SQL complexity but can generate inefficient queries if misused—always monitor generated SQL in production.

import (
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})

// Auto-migrate schema
db.AutoMigrate(&User{})

// CRUD operations
db.Create(&User{Name: "John", Email: "john@example.com"})
db.First(&user, 1)                        // Find by primary key
db.Where("email = ?", email).First(&user) // Find with condition
db.Model(&user).Update("Name", "Jane")    // Update
db.Delete(&user)                          // Delete

GORM models

GORM models are Go structs with field tags controlling column names, types, constraints, and indexes. Embed gorm.Model for standard ID, CreatedAt, UpdatedAt, and DeletedAt fields, or define custom primary keys.

type User struct {
    gorm.Model          // ID, CreatedAt, UpdatedAt, DeletedAt
    Name    string  `gorm:"size:100;not null"`
    Email   string  `gorm:"uniqueIndex;size:255"`
    Age     int     `gorm:"default:18"`
    Profile Profile `gorm:"foreignKey:UserID"` // Has one
    Orders  []Order `gorm:"foreignKey:UserID"` // Has many
}

type Profile struct {
    ID     uint
    UserID uint   `gorm:"uniqueIndex"`
    Bio    string `gorm:"type:text"`
}

// Custom table name
func (User) TableName() string { return "app_users" }

GORM associations

GORM supports Belongs To, Has One, Has Many, and Many-to-Many relationships with automatic foreign key inference and eager loading via Preload. Association mode provides methods for adding, removing, and replacing related records.

// Define associations
type User struct {
    gorm.Model
    Profile   Profile    // Has One
    Orders    []Order    // Has Many
    Languages []Language `gorm:"many2many:user_languages;"` // Many2Many
}

// Eager loading
db.Preload("Profile").Preload("Orders").Find(&users)

// Nested preloading
db.Preload("Orders.Items").Find(&users)

// Association mode
db.Model(&user).Association("Languages").Append(&Language{Name: "Go"})
db.Model(&user).Association("Languages").Delete(&language)
db.Model(&user).Association("Languages").Count()

GORM hooks

Hooks are callback methods on models that execute before/after CRUD operations, useful for validation, data transformation, and side effects. Available hooks include BeforeCreate, AfterCreate, BeforeUpdate, AfterUpdate, BeforeDelete, and AfterDelete.

type User struct {
    gorm.Model
    Name     string
    Email    string
    Password string `gorm:"-:all"` // Ignored in DB
    PassHash string
}

func (u *User) BeforeCreate(tx *gorm.DB) error {
    if u.Email == "" {
        return errors.New("email is required")
    }
    hash, _ := bcrypt.GenerateFromPassword([]byte(u.Password), 14)
    u.PassHash = string(hash)
    return nil
}

func (u *User) AfterCreate(tx *gorm.DB) error {
    // Send welcome email, emit event, etc.
    log.Printf("User created: %d", u.ID)
    return nil
}

GORM transactions

GORM transactions use db.Transaction() with an automatic rollback on error/panic, or manual control via Begin(), Commit(), and Rollback(). Nested transactions are supported via savepoints.

// Automatic transaction (recommended)
err := db.Transaction(func(tx *gorm.DB) error {
    if err := tx.Create(&order).Error; err != nil {
        return err // Rollback
    }
    if err := tx.Create(&orderItems).Error; err != nil {
        return err // Rollback
    }
    return nil // Commit
})

// Manual transaction
tx := db.Begin()
defer func() {
    if r := recover(); r != nil {
        tx.Rollback()
    }
}()
tx.Create(&order)
tx.Commit()

GORM migrations

GORM's AutoMigrate creates/modifies tables based on model structs, adding missing columns and indexes but never deleting columns or data. For production, use a dedicated migration tool like atlas or golang-migrate instead.

// Auto-migrate creates tables, missing columns, indexes
db.AutoMigrate(&User{}, &Product{}, &Order{})

// Migrator interface for manual control
db.Migrator().CreateTable(&User{})
db.Migrator().HasTable(&User{})
db.Migrator().AddColumn(&User{}, "Age")
db.Migrator().DropColumn(&User{}, "Age")
db.Migrator().CreateIndex(&User{}, "Email")

// ⚠️ AutoMigrate limitations:
// - Won't delete columns (safety)
// - No versioning or rollback support
// - Not suitable for production migrations

Soft deletes

Soft deletes mark records as deleted (via DeletedAt timestamp) instead of removing them, preserving data for auditing and recovery. GORM automatically excludes soft-deleted records from queries; use Unscoped() to include them.

type User struct {
    gorm.Model // Includes DeletedAt gorm.DeletedAt
    Name string
}

db.Delete(&user) // Sets DeletedAt = now(), doesn't remove row

// Normal queries exclude soft-deleted
db.Find(&users) // WHERE deleted_at IS NULL

// Include soft-deleted
db.Unscoped().Find(&users)

// Permanently delete
db.Unscoped().Delete(&user)

// Find only soft-deleted
db.Unscoped().Where("deleted_at IS NOT NULL").Find(&users)

ent

ent (by Facebook/Meta) is a type-safe entity framework using code generation from schema definitions, providing compile-time query validation and graph traversal. It excels at complex data models with relationships and generates fully typed Go code.

// Define schema: ent/schema/user.go
func (User) Fields() []ent.Field {
    return []ent.Field{
        field.String("name"),
        field.String("email").Unique(),
        field.Time("created_at").Default(time.Now),
    }
}

func (User) Edges() []ent.Edge {
    return []ent.Edge{
        edge.To("posts", Post.Type),
    }
}

// Generated code usage
client.User.Create().SetName("John").SetEmail("john@example.com").Save(ctx)
client.User.Query().Where(user.NameEQ("John")).WithPosts().All(ctx)

sqlx

sqlx extends database/sql with struct scanning, named parameters, and embedded struct support without being a full ORM. It's ideal when you want raw SQL control with quality-of-life improvements.

import "github.com/jmoiron/sqlx"

db, _ := sqlx.Connect("postgres", dsn)

// Struct scanning (maps columns to struct fields)
type User struct {
    ID    int    `db:"id"`
    Name  string `db:"name"`
    Email string `db:"email"`
}

var user User
db.Get(&user, "SELECT * FROM users WHERE id = $1", 1)

var users []User
db.Select(&users, "SELECT * FROM users WHERE status = $1", "active")

// Named parameters
db.NamedExec("INSERT INTO users (name, email) VALUES (:name, :email)", user)

sqlc (type-safe SQL)

sqlc generates type-safe Go code from SQL queries, eliminating runtime errors from SQL typos or type mismatches. You write SQL, annotate it with query names, and sqlc generates interfaces with proper types.

-- queries.sql

-- name: GetUser :one
SELECT id, name, email FROM users WHERE id = $1;

-- name: ListUsers :many
SELECT * FROM users WHERE status = $1 ORDER BY name;

-- name: CreateUser :one
INSERT INTO users (name, email) VALUES ($1, $2) RETURNING *;
// Generated code usage
queries := db.New(conn)

user, err := queries.GetUser(ctx, 1)           // Returns User, error
users, err := queries.ListUsers(ctx, "active") // Returns []User, error
newUser, err := queries.CreateUser(ctx, db.CreateUserParams{
    Name:  "John",
    Email: "john@example.com",
})

sqlboiler

sqlboiler generates type-safe ORM code by inspecting your existing database schema, creating models that exactly match your tables. It's the "database-first" approach versus GORM's "code-first" approach.

# Generate models from existing database
sqlboiler psql
# Generates models in models/ directory
import "myapp/models"

// Type-safe queries generated from DB schema
user, err := models.Users(
    models.UserWhere.Email.EQ("john@example.com"),
    qm.Load(models.UserRels.Posts),
).One(ctx, db)

// Relationships are typed
for _, post := range user.R.Posts {
    fmt.Println(post.Title)
}

// Insert/Update with exact column types
newUser := &models.User{Name: "John", Email: "john@example.com"}
newUser.Insert(ctx, db, boil.Infer())

NoSQL Databases

MongoDB (mongo-driver)

The official mongo-go-driver provides full MongoDB support with BSON marshaling, connection pooling, and aggregation pipelines. It uses strongly-typed operations and supports both simple CRUD and complex queries.

import (
    "go.mongodb.org/mongo-driver/bson"
    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
)

client, _ := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://localhost:27017"))
defer client.Disconnect(ctx)

coll := client.Database("mydb").Collection("users")

// Insert
coll.InsertOne(ctx, bson.D{{"name", "John"}, {"age", 30}})

// Find one
var result bson.M
coll.FindOne(ctx, bson.D{{"name", "John"}}).Decode(&result)

// Find many with filter
cursor, _ := coll.Find(ctx, bson.D{{"age", bson.D{{"$gt", 25}}}})
var users []User
cursor.All(ctx, &users)

Redis (go-redis, redigo)

go-redis is the modern, feature-rich Redis client supporting all Redis commands, clustering, sentinel, and pipelining. redigo is an older but still maintained alternative with a simpler API.

import "github.com/redis/go-redis/v9"

rdb := redis.NewClient(&redis.Options{
    Addr:     "localhost:6379",
    Password: "",
    DB:       0,
})

// Basic operations
rdb.Set(ctx, "key", "value", 10*time.Minute)
val, _ := rdb.Get(ctx, "key").Result()

// Data structures
rdb.HSet(ctx, "user:1", "name", "John", "email", "john@example.com")
rdb.LPush(ctx, "queue", "task1", "task2")
rdb.SAdd(ctx, "tags", "go", "redis")

// Pipelining
pipe := rdb.Pipeline()
pipe.Set(ctx, "a", "1", 0)
pipe.Get(ctx, "a")
cmds, _ := pipe.Exec(ctx)

Cassandra (gocql)

gocql is the Go driver for Apache Cassandra, supporting CQL queries, prepared statements, cluster awareness, and configurable consistency levels. It handles automatic node discovery and connection pooling for distributed deployments.

import "github.com/gocql/gocql"

cluster := gocql.NewCluster("127.0.0.1", "127.0.0.2")
cluster.Keyspace = "myapp"
cluster.Consistency = gocql.Quorum

session, _ := cluster.CreateSession()
defer session.Close()

// Insert
session.Query(`INSERT INTO users (id, name) VALUES (?, ?)`, gocql.TimeUUID(), "John").Exec()

// Select
var name string
session.Query(`SELECT name FROM users WHERE id = ?`, id).Scan(&name)

// Iterate
iter := session.Query(`SELECT * FROM users`).Iter()
for iter.Scan(&id, &name) { /* process */ }

CouchDB

CouchDB is accessed via HTTP/REST API in Go, using packages like go-kivik/kivik for a database-agnostic interface or direct HTTP calls. It's document-oriented with built-in replication and conflict resolution.

import "github.com/go-kivik/kivik/v4"

client, _ := kivik.New("couch", "http://localhost:5984/")
db := client.DB("mydb")

// Create document
rev, _ := db.Put(ctx, "doc-id", map[string]interface{}{
    "name": "John",
    "type": "user",
})

// Get document
row := db.Get(ctx, "doc-id")
var doc map[string]interface{}
row.ScanDoc(&doc)

// Query view
rows := db.Query(ctx, "_design/users", "_view/by_name", kivik.Params(map[string]interface{}{
    "key": "John",
}))

DynamoDB (AWS SDK)

AWS SDK for Go v2 provides a fully-featured DynamoDB client with support for all operations, expression builders, and automatic retry with exponential backoff. Use the attributevalue package for marshaling Go types to DynamoDB items.

import (
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

cfg, _ := config.LoadDefaultConfig(ctx)
client := dynamodb.NewFromConfig(cfg)

type User struct {
    PK   string `dynamodbav:"PK"`
    SK   string `dynamodbav:"SK"`
    Name string `dynamodbav:"name"`
}

// Put item
item, _ := attributevalue.MarshalMap(User{PK: "USER#1", SK: "PROFILE", Name: "John"})
client.PutItem(ctx, &dynamodb.PutItemInput{
    TableName: aws.String("users"),
    Item:      item,
})

// Query
client.Query(ctx, &dynamodb.QueryInput{
    TableName:              aws.String("users"),
    KeyConditionExpression: aws.String("PK = :pk"),
    ExpressionAttributeValues: map[string]types.AttributeValue{
        ":pk": &types.AttributeValueMemberS{Value: "USER#1"},
    },
})

Elasticsearch

olivere/elastic and the official elastic/go-elasticsearch provide Elasticsearch client libraries for Go, supporting search, indexing, aggregations, and cluster management. Choose the official client for ES 7+ compatibility.

import "github.com/elastic/go-elasticsearch/v8"

es, _ := elasticsearch.NewDefaultClient()

// Index document
es.Index("users",
    strings.NewReader(`{"name":"John","email":"john@example.com"}`),
    es.Index.WithDocumentID("1"))

// Search
query := `{"query": {"match": {"name": "john"}}}`
res, _ := es.Search(
    es.Search.WithIndex("users"),
    es.Search.WithBody(strings.NewReader(query)),
)

// Using typed client (v8)
import "github.com/elastic/go-elasticsearch/v8/typedapi/core/search"

es.Search().Index("users").Query(&types.Query{
    Match: map[string]types.MatchQuery{"name": {Query: "john"}},
}).Do(ctx)

Neo4j

Neo4j Go driver provides native Cypher query execution for graph database operations, supporting transactions, causal clustering, and automatic connection management. Use it for relationship-heavy data models.

import "github.com/neo4j/neo4j-go-driver/v5/neo4j"

driver, _ := neo4j.NewDriverWithContext(
    "neo4j://localhost:7687",
    neo4j.BasicAuth("neo4j", "password", ""))
defer driver.Close(ctx)

session := driver.NewSession(ctx, neo4j.SessionConfig{})
defer session.Close(ctx)

// Create nodes and relationship
result, _ := session.ExecuteWrite(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
    return tx.Run(ctx, `
        CREATE (a:Person {name: $name1})
        CREATE (b:Person {name: $name2})
        CREATE (a)-[:KNOWS]->(b)
        RETURN a, b`,
        map[string]any{"name1": "Alice", "name2": "Bob"})
})

// Query
session.ExecuteRead(ctx, func(tx neo4j.ManagedTransaction) (any, error) {
    return tx.Run(ctx, "MATCH (p:Person)-[:KNOWS]->(f) RETURN p.name, f.name", nil)
})

Caching

In-memory caching

In-memory caching stores data in application memory for ultra-fast access, ideal for frequently-read, rarely-changing data. Consider memory limits, cache invalidation, and the lack of persistence/sharing between instances.

// Simple map-based cache with sync.RWMutex
type Cache struct {
    mu    sync.RWMutex
    items map[string]cacheItem
}

type cacheItem struct {
    value      interface{}
    expiration int64
}

func (c *Cache) Get(key string) (interface{}, bool) {
    c.mu.RLock()
    defer c.mu.RUnlock()
    item, found := c.items[key]
    if !found || time.Now().UnixNano() > item.expiration {
        return nil, false
    }
    return item.value, true
}

// Or use sync.Map for simpler concurrent access
var cache sync.Map
cache.Store("key", "value")
val, ok := cache.Load("key")

go-cache

go-cache is a simple, thread-safe in-memory key-value cache with expiration, providing a drop-in caching solution with minimal configuration.

import "github.com/patrickmn/go-cache"

// Create cache with 5min default expiration, cleanup every 10min
c := cache.New(5*time.Minute, 10*time.Minute)

// Set with default expiration
c.Set("key", "value", cache.DefaultExpiration)

// Set with custom expiration
c.Set("session", sessionData, 30*time.Minute)

// Set with no expiration
c.Set("config", configData, cache.NoExpiration)

// Get
if val, found := c.Get("key"); found {
    str := val.(string)
}

// Delete
c.Delete("key")

// Increment/Decrement numeric values
c.Increment("counter", 1)

bigcache

bigcache is a fast, concurrent cache designed for gigabytes of data, avoiding GC overhead by storing data as byte slices and using minimal pointers. Ideal for high-throughput systems with large cache sizes.

import "github.com/allegro/bigcache/v3"

cache, _ := bigcache.New(ctx, bigcache.DefaultConfig(10*time.Minute))

// Only stores []byte - you handle serialization
cache.Set("user:1", []byte(`{"name":"John"}`))
data, err := cache.Get("user:1")
if err == bigcache.ErrEntryNotFound {
    // Cache miss
}

// Custom config for high throughput
config := bigcache.Config{
    Shards:             1024,             // Number of shards
    LifeWindow:         10 * time.Minute, // Entry lifetime
    MaxEntriesInWindow: 1000 * 10 * 60,   // Max entries in window
    MaxEntrySize:       500,              // Max entry size in bytes
    HardMaxCacheSize:   8192,             // Max cache size in MB
}

freecache

freecache is a zero-GC cache that preallocates memory and uses a ring buffer, avoiding Go's garbage collector entirely. Best for fixed-size caches where you know memory requirements upfront.

import "github.com/coocood/freecache"

// Allocate 100MB cache
cache := freecache.NewCache(100 * 1024 * 1024)

// Set with expiration in seconds
cache.Set([]byte("key"), []byte("value"), 300) // 5 minute TTL

// Get
value, err := cache.Get([]byte("key"))
if err == freecache.ErrNotFound {
    // Cache miss
}

// TTL
ttl, err := cache.TTL([]byte("key"))

// Stats
fmt.Printf("Hit rate: %.2f%%\n", cache.HitRate()*100)
fmt.Printf("Entry count: %d\n", cache.EntryCount())

groupcache

groupcache is a distributed caching library (by memcached author) that provides consistent hashing, deduplication of concurrent requests, and automatic hot cache population. It's designed for read-heavy workloads across multiple servers.

import "github.com/golang/groupcache"

// Create a peer pool (for distributed setup)
peers := groupcache.NewHTTPPool("http://localhost:8080")
peers.Set("http://peer1:8080", "http://peer2:8080")

// Create a cache group with getter function
userCache := groupcache.NewGroup("users", 64<<20, groupcache.GetterFunc(
    func(ctx context.Context, key string, dest groupcache.Sink) error {
        // Called on cache miss - fetch from source
        user, err := db.GetUser(ctx, key)
        if err != nil {
            return err
        }
        dest.SetBytes([]byte(user.JSON()))
        return nil
    },
))

// Get - automatically deduplicates concurrent requests for same key
var data []byte
userCache.Get(ctx, "user:123", groupcache.AllocatingByteSliceSink(&data))

Redis caching

Redis provides distributed caching with rich data structures, persistence options, and cluster support. Use it when you need shared cache across multiple app instances or require data structures beyond simple key-value.

import "github.com/redis/go-redis/v9"

rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// Cache with TTL
func GetUser(ctx context.Context, id string) (*User, error) {
    // Try cache first
    cached, err := rdb.Get(ctx, "user:"+id).Result()
    if err == nil {
        var user User
        json.Unmarshal([]byte(cached), &user)
        return &user, nil
    }

    // Cache miss - fetch from DB
    user, err := db.GetUser(ctx, id)
    if err != nil {
        return nil, err
    }

    // Cache result
    data, _ := json.Marshal(user)
    rdb.Set(ctx, "user:"+id, data, 15*time.Minute)
    return user, nil
}

Cache-aside pattern

Cache-aside (lazy loading) reads from cache first, fetches from database on miss, then populates cache. The application manages both cache and database, providing flexibility but requiring cache invalidation on writes.

Cache-Aside Pattern

READ:
  ┌─────┐   1. Get    ┌───────┐   3. Return   ┌─────┐
  │ App │ ──────────> │ Cache │ ────────────> │ App │
  └─────┘             └───────┘               └─────┘
                          │ 2. On miss            │ 4. Store
                          v                       v
                   ┌──────────────────────────────────┐
                   │             Database             │
                   └──────────────────────────────────┘

WRITE:
  App ──> Write to DB ──> Invalidate Cache
func GetProduct(ctx context.Context, id string) (*Product, error) {
    if p, ok := cache.Get(id); ok {
        return p.(*Product), nil // Cache hit
    }
    p, _ := db.GetProduct(ctx, id) // Cache miss - fetch
    cache.Set(id, p, time.Hour)    // Populate cache
    return p, nil
}

Write-through pattern

Write-through writes to both cache and database synchronously, ensuring cache consistency but adding write latency. The cache always contains fresh data for reads.

Write-Through Pattern

  ┌─────┐  1. Write  ┌───────┐  2. Write  ┌──────────┐
  │ App │ ─────────> │ Cache │ ─────────> │ Database │
  └─────┘            └───────┘            └──────────┘
     ^                   │                     │
     │      3. Ack       │       4. Ack        │
     └───────────────────┴─────────────────────┘
func UpdateProduct(ctx context.Context, p *Product) error {
    // Write to database first
    if err := db.UpdateProduct(ctx, p); err != nil {
        return err
    }
    // Then update cache (synchronously)
    cache.Set(p.ID, p, time.Hour)
    return nil
}

// Write-behind variant: write cache first, async DB write
func UpdateProductAsync(p *Product) {
    cache.Set(p.ID, p, time.Hour)
    go db.UpdateProduct(context.Background(), p) // Async, risk of data loss
}

Cache invalidation strategies

Cache invalidation removes stale data when source data changes. Common strategies include TTL-based expiration, explicit invalidation on writes, and event-driven invalidation via pub/sub.

// 1. TTL-based (simplest)
cache.Set("key", value, 5*time.Minute)

// 2. Explicit invalidation on write
func UpdateUser(ctx context.Context, user *User) error {
    if err := db.Update(ctx, user); err != nil {
        return err
    }
    cache.Delete("user:" + user.ID)
    cache.Delete("user:email:" + user.Email) // Related cache
    return nil
}

// 3. Event-driven invalidation
func handleUserUpdated(event UserUpdatedEvent) {
    cache.Delete("user:" + event.UserID)
}
redis.Subscribe(ctx, "user:updated", handleUserUpdated)

// 4. Cache tags for group invalidation
/*
   products:1 -> tagged with ["products", "category:5"]
   products:2 -> tagged with ["products", "category:5"]
   InvalidateTag("category:5") -> clears all products in category
*/
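The tag idea in the last comment can be sketched with Redis sets: each tag keeps the set of keys it covers, and invalidating a tag deletes those keys. This is a hand-rolled sketch using go-redis (the "tag:" prefix and function names are arbitrary, not a library API):

// Store a value and register it under one or more tags.
func cacheWithTags(ctx context.Context, rdb *redis.Client, key string, value []byte, tags ...string) error {
    if err := rdb.Set(ctx, key, value, time.Hour).Err(); err != nil {
        return err
    }
    for _, tag := range tags {
        if err := rdb.SAdd(ctx, "tag:"+tag, key).Err(); err != nil {
            return err
        }
    }
    return nil
}

// Delete every key registered under the tag, then the tag set itself.
func invalidateTag(ctx context.Context, rdb *redis.Client, tag string) error {
    keys, err := rdb.SMembers(ctx, "tag:"+tag).Result()
    if err != nil {
        return err
    }
    if len(keys) > 0 {
        if err := rdb.Del(ctx, keys...).Err(); err != nil {
            return err
        }
    }
    return rdb.Del(ctx, "tag:"+tag).Err()
}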

TTL and eviction policies

TTL (Time-To-Live) automatically expires entries after a duration, while eviction policies remove entries when memory limits are reached. Common policies include LRU (Least Recently Used), LFU (Least Frequently Used), and random eviction.

┌────────────┬─────────────────────────────────────────────┐
│ Policy     │ Description                                 │
├────────────┼─────────────────────────────────────────────┤
│ TTL        │ Remove after fixed time duration            │
│ LRU        │ Remove least recently accessed              │
│ LFU        │ Remove least frequently accessed            │
│ FIFO       │ Remove oldest entry (first in, first out)   │
│ Random     │ Remove random entry                         │
│ Size-based │ Remove when max size exceeded               │
└────────────┴─────────────────────────────────────────────┘
// Redis eviction policies (maxmemory-policy):
//   volatile-lru: LRU among keys with TTL
//   allkeys-lru:  LRU among all keys
//   volatile-ttl: Evict shortest TTL first
//   noeviction:   Return error when full

// bigcache with custom TTL
cache, _ := bigcache.New(ctx, bigcache.Config{
    LifeWindow:  10 * time.Minute, // TTL
    CleanWindow: 5 * time.Minute,  // Cleanup interval
})
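To make the LRU policy concrete, here is a minimal, hand-rolled sketch built on the standard library's container/list (not concurrency-safe; add a mutex like in the earlier map-based cache if shared across goroutines):

import "container/list"

// Minimal LRU sketch: most recently used entries sit at the front of the list;
// the back entry is evicted once capacity is exceeded.
type LRU struct {
    capacity int
    order    *list.List               // front = most recently used
    items    map[string]*list.Element // key -> element in order
}

type entry struct {
    key   string
    value any
}

func NewLRU(capacity int) *LRU {
    return &LRU{capacity: capacity, order: list.New(), items: make(map[string]*list.Element)}
}

func (c *LRU) Get(key string) (any, bool) {
    el, ok := c.items[key]
    if !ok {
        return nil, false
    }
    c.order.MoveToFront(el) // mark as most recently used
    return el.Value.(*entry).value, true
}

func (c *LRU) Put(key string, value any) {
    if el, ok := c.items[key]; ok {
        el.Value.(*entry).value = value
        c.order.MoveToFront(el)
        return
    }
    c.items[key] = c.order.PushFront(&entry{key, value})
    if c.order.Len() > c.capacity {
        oldest := c.order.Back() // least recently used
        c.order.Remove(oldest)
        delete(c.items, oldest.Value.(*entry).key)
    }
}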

Message Queues

RabbitMQ (amqp091-go)

amqp091-go is the maintained RabbitMQ/AMQP 0.9.1 client for Go, supporting exchanges, queues, bindings, and acknowledgments. Use it for reliable message delivery with complex routing patterns.

import amqp "github.com/rabbitmq/amqp091-go"

conn, _ := amqp.Dial("amqp://guest:guest@localhost:5672/")
defer conn.Close()

ch, _ := conn.Channel()
defer ch.Close()

// Declare queue
q, _ := ch.QueueDeclare("tasks", true, false, false, false, nil)

// Publish
ch.PublishWithContext(ctx, "", q.Name, false, false, amqp.Publishing{
    ContentType:  "application/json",
    Body:         []byte(`{"task":"process"}`),
    DeliveryMode: amqp.Persistent,
})

// Consume
msgs, _ := ch.Consume(q.Name, "", false, false, false, false, nil)
for msg := range msgs {
    process(msg.Body)
    msg.Ack(false) // Acknowledge
}

Kafka (sarama, franz-go)

sarama is the mature Go Kafka client (originally from Shopify, now maintained by IBM), while franz-go is a newer, high-performance alternative with better API ergonomics. Both support producers, consumers, consumer groups, and admin operations.

// sarama producer
import "github.com/IBM/sarama"

config := sarama.NewConfig()
config.Producer.Return.Successes = true
producer, _ := sarama.NewSyncProducer([]string{"localhost:9092"}, config)

msg := &sarama.ProducerMessage{
    Topic: "events",
    Key:   sarama.StringEncoder("user-123"),
    Value: sarama.StringEncoder(`{"action":"login"}`),
}
partition, offset, _ := producer.SendMessage(msg)

// sarama consumer group
consumer, _ := sarama.NewConsumerGroup([]string{"localhost:9092"}, "my-group", config)
consumer.Consume(ctx, []string{"events"}, &consumerHandler{})

// franz-go (simpler API)
import "github.com/twmb/franz-go/pkg/kgo"

client, _ := kgo.NewClient(kgo.SeedBrokers("localhost:9092"))
client.Produce(ctx, &kgo.Record{Topic: "events", Value: []byte("data")}, nil)

NATS

NATS is a lightweight, high-performance messaging system with pub/sub, request/reply, and queue groups. nats.go is the official client, supporting JetStream for persistence and exactly-once delivery.

import "github.com/nats-io/nats.go"

nc, _ := nats.Connect("nats://localhost:4222")
defer nc.Close()

// Simple pub/sub
nc.Publish("events", []byte("Hello"))
nc.Subscribe("events", func(m *nats.Msg) {
    fmt.Println(string(m.Data))
})

// Queue groups (load balancing)
nc.QueueSubscribe("tasks", "workers", handler)

// Request/Reply
msg, _ := nc.Request("api.users.get", []byte("1"), time.Second)

// JetStream (persistence)
js, _ := nc.JetStream()
js.Publish("orders", []byte(`{"id":1}`))
sub, _ := js.PullSubscribe("orders", "order-processor")
msgs, _ := sub.Fetch(10)

Redis Pub/Sub

Redis Pub/Sub provides simple fire-and-forget messaging where messages are delivered to all subscribers but lost if no subscribers exist. Use it for real-time notifications and broadcasts, not reliable message queuing.

import "github.com/redis/go-redis/v9"

rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// Publisher
err := rdb.Publish(ctx, "notifications", `{"type":"alert","msg":"Hello"}`).Err()

// Subscriber
pubsub := rdb.Subscribe(ctx, "notifications", "events")
defer pubsub.Close()

ch := pubsub.Channel()
for msg := range ch {
    fmt.Printf("Channel: %s, Message: %s\n", msg.Channel, msg.Payload)
}

// Pattern subscription
pubsub.PSubscribe(ctx, "user.*") // Matches user.created, user.deleted, etc.

// ⚠️ Note: No persistence - messages lost if no subscribers
// For reliable queuing, use Redis Streams instead
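To contrast with the note above, Redis Streams keep messages until they are acknowledged (or trimmed). A rough go-redis sketch of producing to a stream and consuming through a consumer group; the stream, group, and consumer names are made up for illustration:

// Producer: append an entry to the stream (persisted, unlike Pub/Sub)
rdb.XAdd(ctx, &redis.XAddArgs{
    Stream: "orders",
    Values: map[string]interface{}{"id": 1, "action": "created"},
})

// One-time setup: create a consumer group starting at new messages
rdb.XGroupCreateMkStream(ctx, "orders", "workers", "$")

// Consumer: read work assigned to this group, then acknowledge it
res, err := rdb.XReadGroup(ctx, &redis.XReadGroupArgs{
    Group:    "workers",
    Consumer: "worker-1",
    Streams:  []string{"orders", ">"},
    Count:    10,
    Block:    5 * time.Second,
}).Result()
if err == nil {
    for _, stream := range res {
        for _, msg := range stream.Messages {
            process(msg.Values)
            rdb.XAck(ctx, "orders", "workers", msg.ID) // remove from pending list
        }
    }
}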

AWS SQS

AWS SQS provides fully managed message queuing with automatic scaling, dead-letter queues, and at-least-once delivery. Use AWS SDK v2 for Go with proper polling strategies.

import (
    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/sqs"
)

cfg, _ := config.LoadDefaultConfig(ctx)
client := sqs.NewFromConfig(cfg)
queueURL := "https://sqs.region.amazonaws.com/account/queue"

// Send message
client.SendMessage(ctx, &sqs.SendMessageInput{
    QueueUrl:     &queueURL,
    MessageBody:  aws.String(`{"task":"process"}`),
    DelaySeconds: 0,
})

// Receive messages (long polling)
output, _ := client.ReceiveMessage(ctx, &sqs.ReceiveMessageInput{
    QueueUrl:            &queueURL,
    MaxNumberOfMessages: 10,
    WaitTimeSeconds:     20, // Long polling
})

for _, msg := range output.Messages {
    process(*msg.Body)
    client.DeleteMessage(ctx, &sqs.DeleteMessageInput{
        QueueUrl:      &queueURL,
        ReceiptHandle: msg.ReceiptHandle,
    })
}

Google Pub/Sub

Google Cloud Pub/Sub is a fully managed messaging service with global availability, push/pull delivery, and exactly-once processing. The Go client handles connection management and message acknowledgment.

import "cloud.google.com/go/pubsub"

client, _ := pubsub.NewClient(ctx, "project-id")

// Publisher
topic := client.Topic("my-topic")
result := topic.Publish(ctx, &pubsub.Message{
    Data:       []byte(`{"event":"user_created"}`),
    Attributes: map[string]string{"type": "user"},
})
id, _ := result.Get(ctx) // Block until ack'd

// Subscriber
sub := client.Subscription("my-subscription")
sub.Receive(ctx, func(ctx context.Context, msg *pubsub.Message) {
    fmt.Println(string(msg.Data))
    msg.Ack() // or msg.Nack() to requeue
})

// Subscriber with concurrency control
sub.ReceiveSettings.MaxOutstandingMessages = 100
sub.ReceiveSettings.NumGoroutines = 10

Message serialization

Messages must be serialized to bytes for transmission; JSON is human-readable but slow, Protocol Buffers and Avro are compact and fast but require schema management. Choose based on performance needs and interoperability requirements.

// JSON - simple, human-readable, slower
type Event struct {
    Type string `json:"type"`
    Data string `json:"data"`
}
data, _ := json.Marshal(Event{Type: "user.created", Data: "..."})

// Protocol Buffers - compact, fast, requires .proto files
// event.proto -> generates event.pb.go
data, _ := proto.Marshal(&pb.Event{Type: "user.created"})

// MessagePack - compact JSON alternative
import "github.com/vmihailenco/msgpack/v5"
data, _ := msgpack.Marshal(event)

// Performance comparison (approximate):
//   JSON:     1x speed, 1x size
//   MsgPack:  2x speed, 0.7x size
//   Protobuf: 4x speed, 0.5x size

Consumer groups

Consumer groups distribute message processing across multiple consumers, where each message is delivered to only one consumer in the group. This enables horizontal scaling of message processing.

Consumer Group "workers"

  Topic/Queue ──┬──> Consumer 1  (Partition 0, 1)
  [Messages]    ├──> Consumer 2  (Partition 2, 3)
                └──> Consumer 3  (Partition 4, 5)

  Each message goes to exactly ONE consumer in the group
// Kafka consumer group (sarama)
group, _ := sarama.NewConsumerGroup(brokers, "order-processors", config)
group.Consume(ctx, []string{"orders"}, handler)

// NATS queue group
nc.QueueSubscribe("orders", "processors", func(m *nats.Msg) {
    // Only one subscriber in "processors" group receives each message
})

// RabbitMQ competing consumers (same queue, multiple consumers)
ch.Consume("orders", "consumer-1", ...)
ch.Consume("orders", "consumer-2", ...)
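The handler passed to Consume (here and in the Kafka section above) must implement sarama's ConsumerGroupHandler interface. A minimal sketch that simply marks each message as processed:

// Minimal sarama ConsumerGroupHandler implementation.
type consumerHandler struct{}

func (consumerHandler) Setup(sarama.ConsumerGroupSession) error   { return nil }
func (consumerHandler) Cleanup(sarama.ConsumerGroupSession) error { return nil }

func (consumerHandler) ConsumeClaim(sess sarama.ConsumerGroupSession, claim sarama.ConsumerGroupClaim) error {
    for msg := range claim.Messages() {
        fmt.Printf("topic=%s partition=%d offset=%d value=%s\n",
            msg.Topic, msg.Partition, msg.Offset, msg.Value)
        sess.MarkMessage(msg, "") // commit the offset after processing
    }
    return nil
}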

Dead letter queues

Dead letter queues (DLQ) capture messages that fail processing after retry limits, preventing poison messages from blocking queues. They enable debugging, manual intervention, and metrics on failure rates.

Dead Letter Queue Flow

  ┌─────────┐     ┌──────────┐     ┌─────────────┐
  │ Message │ ──> │ Consumer │ ──> │ Processing  │
  └─────────┘     └──────────┘     │   Failed?   │
                                   └──────┬──────┘
                               Yes        │        No
                         ┌────────────────┴────────────────┐
                         v                                 v
                  ┌─────────────┐                     ✓ Success
                  │ Retry < Max │
                  └──────┬──────┘
                   No    │    Yes (retry)
                         v
                  ┌─────────────┐
                  │ Move to DLQ │
                  └─────────────┘
// RabbitMQ DLQ setup
ch.QueueDeclare("orders", true, false, false, false, amqp.Table{
    "x-dead-letter-exchange":    "",
    "x-dead-letter-routing-key": "orders-dlq",
    "x-message-ttl":             60000, // Optional: move to DLQ after 60s
})
ch.QueueDeclare("orders-dlq", true, false, false, false, nil)

// SQS DLQ via RedrivePolicy
// AWS Console: Configure DLQ and maxReceiveCount
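On the consumer side, rejecting a delivery without requeueing routes it to the dead-letter target configured above. A sketch with amqp091-go, assuming a process function that returns an error (retry counting via the x-death header is omitted for brevity):

// Acknowledge on success; Nack without requeue so failures go to orders-dlq.
msgs, _ := ch.Consume("orders", "", false, false, false, false, nil)
for msg := range msgs {
    if err := process(msg.Body); err != nil {
        msg.Nack(false, false) // multiple=false, requeue=false -> dead-lettered
        continue
    }
    msg.Ack(false)
}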

Exactly-once delivery

Exactly-once semantics guarantee each message is processed exactly once, preventing duplicates and lost messages. It's achieved through idempotent consumers, transactional processing, or broker-level support like Kafka transactions.

/*
Delivery Guarantees
┌─────────────────┬───────────────────────────────────────────┐
│ At-most-once    │ Fire and forget. May lose messages.       │
│ At-least-once   │ Retry until ack. May have duplicates.     │
│ Exactly-once    │ No loss, no duplicates. Hard to achieve.  │
└─────────────────┴───────────────────────────────────────────┘
*/

// Idempotent consumer pattern (most common approach)
func processOrder(msg Message) error {
    messageID := msg.ID

    // Check if already processed
    if processed, _ := redis.SIsMember(ctx, "processed", messageID).Result(); processed {
        return nil // Skip duplicate
    }

    // Process in transaction
    tx := db.Begin()
    if err := tx.Create(&order).Error; err != nil {
        tx.Rollback()
        return err
    }
    // Mark as processed atomically
    tx.Exec("INSERT INTO processed_messages (id) VALUES (?)", messageID)
    tx.Commit()

    redis.SAdd(ctx, "processed", messageID)
    return nil
}