Go Standard Library: Mastering I/O, Files, JSON & Time
Stop importing external dependencies for basic tasks. This deep dive covers the essential packages that make Go powerful: from composable I/O pipes and file locking to efficient JSON streaming and time layout parsing.
I/O Operations
io.Reader interface
The io.Reader interface is the fundamental abstraction for reading data in Go, defining a single method Read(p []byte) (n int, err error) that reads up to len(p) bytes into p. It returns the number of bytes read and io.EOF when no more data is available, enabling uniform handling of files, network connections, strings, and any data source.
```go
type Reader interface {
	Read(p []byte) (n int, err error)
}

// Example: reading from any Reader
func readAll(r io.Reader) ([]byte, error) {
	buf := make([]byte, 1024)
	n, err := r.Read(buf) // a single Read may return fewer bytes than requested
	return buf[:n], err
}
```
io.Writer interface
The io.Writer interface is the counterpart to Reader, defining Write(p []byte) (n int, err error) that writes len(p) bytes from p to the underlying data stream. It returns the number of bytes written and any error encountered; if n < len(p), it must return a non-nil error.
```go
type Writer interface {
	Write(p []byte) (n int, err error)
}

// Example: writing to any Writer
func writeMessage(w io.Writer, msg string) error {
	_, err := w.Write([]byte(msg))
	return err
}
```
io.Closer interface
The io.Closer interface defines Close() error for releasing resources like file handles, network connections, or database connections. Always defer Close() after successfully opening a resource, but check the error on writes since deferred Close may hide write errors.
```go
type Closer interface {
	Close() error
}

// Example: proper resource cleanup
func processFile(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return err
	}
	defer f.Close() // Always close!
	// ... process file
	return nil
}
```
io.ReadWriter, io.ReadCloser, io.WriteCloser
These are composite interfaces combining multiple basic interfaces, enabling functions to accept types that satisfy multiple contracts simultaneously. ReadWriter combines Reader+Writer, ReadCloser combines Reader+Closer, WriteCloser combines Writer+Closer—Go's way of achieving interface composition without inheritance.
```
┌─────────────────────────────────────────────────┐
│             Interface Composition               │
├─────────────────────────────────────────────────┤
│  ReadWriter      = Reader + Writer              │
│  ReadCloser      = Reader + Closer              │
│  WriteCloser     = Writer + Closer              │
│  ReadWriteCloser = Reader + Writer + Closer     │
└─────────────────────────────────────────────────┘
```

```go
type ReadWriteCloser interface {
	Reader
	Writer
	Closer
}
```
io.Copy
io.Copy(dst Writer, src Reader) efficiently copies data from src to dst until EOF or error, returning bytes copied. It uses an internal 32KB buffer (or src's WriteTo/dst's ReadFrom if available), making it the go-to function for streaming data between any Reader and Writer.
```go
// Copy file contents
src, _ := os.Open("source.txt")
dst, _ := os.Create("dest.txt")
defer src.Close()
defer dst.Close()

written, err := io.Copy(dst, src) // Returns bytes written

// io.CopyN(dst, src, 1024)     - copy exactly N bytes
// io.CopyBuffer(dst, src, buf) - use custom buffer
```
io.Pipe
io.Pipe() creates a synchronous in-memory pipe returning connected *PipeReader and *PipeWriter—writes block until reads consume the data with no internal buffering. Perfect for connecting code expecting an io.Writer to code expecting an io.Reader, commonly used in testing and streaming scenarios.
```go
pr, pw := io.Pipe()
go func() {
	defer pw.Close()
	pw.Write([]byte("hello from writer"))
}()
data, _ := io.ReadAll(pr)
fmt.Println(string(data)) // "hello from writer"

// Common use: stream JSON encoding into an HTTP request body
pr, pw = io.Pipe()
go func() {
	json.NewEncoder(pw).Encode(data)
	pw.Close()
}()
http.Post(url, "application/json", pr)
```
bufio package
The bufio package implements buffered I/O wrapping io.Reader/Writer to reduce system calls by batching reads/writes into larger chunks. It provides Reader, Writer, and Scanner types that dramatically improve performance when dealing with small, frequent I/O operations.
```
┌──────────────────────────────────────────────────────┐
│                  Buffered I/O Flow                   │
├──────────────────────────────────────────────────────┤
│                                                      │
│  App ──write──► [Buffer 4KB] ──flush──► OS/File      │
│                                                      │
│  App ◄──read─── [Buffer 4KB] ◄──fill─── OS/File      │
│                                                      │
│        Fewer syscalls = Better performance           │
└──────────────────────────────────────────────────────┘
```
bufio.Reader
bufio.Reader wraps an io.Reader with buffering (default 4KB), providing convenient methods like ReadString, ReadBytes, ReadLine, and Peek. It significantly reduces system calls when reading small amounts repeatedly and enables reading until a delimiter.
```go
file, _ := os.Open("data.txt")
reader := bufio.NewReader(file)
// reader := bufio.NewReaderSize(file, 64*1024) // 64KB buffer

// Read until newline
line, err := reader.ReadString('\n')

// Peek without consuming
next, _ := reader.Peek(5) // Look at next 5 bytes

// Read single byte
b, _ := reader.ReadByte()
reader.UnreadByte() // Put it back
```
bufio.Writer
bufio.Writer wraps an io.Writer with buffering, collecting writes until the buffer fills or Flush() is called, then writing in one operation. Always call Flush() before closing to ensure all data is written—a common source of bugs when forgotten.
```go
file, _ := os.Create("output.txt")
writer := bufio.NewWriter(file)
defer func() {
	writer.Flush() // CRITICAL: flush before close!
	file.Close()
}()

writer.WriteString("Hello ")
writer.Write([]byte("World"))
writer.WriteByte('!')
// Data stays in the buffer until Flush() or the buffer fills
```
bufio.Scanner
bufio.Scanner provides a convenient interface for reading data line-by-line or by custom tokens, handling buffer management and EOF automatically. It's the idiomatic way to read lines from a file, with Scan() returning false when done (check Err() for errors vs EOF).
```go
file, _ := os.Open("data.txt")
scanner := bufio.NewScanner(file)

// Default: split by lines
for scanner.Scan() {
	fmt.Println(scanner.Text()) // or scanner.Bytes()
}
if err := scanner.Err(); err != nil {
	log.Fatal(err)
}

// Custom split functions
scanner.Split(bufio.ScanWords) // Split by words
scanner.Split(bufio.ScanRunes) // Split by runes
// scanner.Buffer(buf, maxSize) // Handle long lines
```
bytes.Buffer
bytes.Buffer is a variable-sized buffer of bytes with Read and Write methods, growing automatically as needed. It's the standard way to build byte sequences in memory and to implement io.Reader/io.Writer without touching the file system; the zero value is ready to use with no initialization.
```go
var buf bytes.Buffer // Zero value ready to use

buf.WriteString("Hello ")
buf.Write([]byte("World"))
buf.WriteByte('!')
fmt.Fprintf(&buf, " %d", 2024)

str := buf.String() // "Hello World! 2024"
data := buf.Bytes() // []byte (shares memory!)

// Reset for reuse (keeps allocated memory)
buf.Reset()

// As io.Reader
io.Copy(os.Stdout, &buf)
```
bytes.Reader
bytes.Reader implements io.Reader, io.Seeker, io.ReaderAt, and io.WriterTo for reading from a byte slice. Unlike bytes.Buffer, it's read-only and supports seeking, making it ideal for passing existing byte data to functions expecting an io.Reader.
```go
data := []byte("Hello, World!")
reader := bytes.NewReader(data)

buf := make([]byte, 5)
reader.Read(buf) // "Hello"

// Seek to position
reader.Seek(7, io.SeekStart)
reader.Read(buf) // "World"

// ReadAt (concurrent-safe random access)
reader.ReadAt(buf, 0) // "Hello" - doesn't affect position

fmt.Println(reader.Len())  // Remaining bytes
fmt.Println(reader.Size()) // Total size
```
strings.Reader
strings.Reader is the string counterpart of bytes.Reader, implementing io.Reader, io.Seeker, and related interfaces without copying the string to a byte slice. Use it when you have a string and need an io.Reader—more efficient than bytes.NewReader([]byte(s)).
```go
s := "Hello, Gopher!"
reader := strings.NewReader(s)

// Pass a string as io.Reader
io.Copy(os.Stdout, reader)

// Seek and read
reader.Seek(7, io.SeekStart)
buf := make([]byte, 6)
reader.Read(buf) // "Gopher"

// Use with http (no []byte conversion needed)
http.Post(url, "text/plain", strings.NewReader(body))
```
File Operations
os package
The os package provides platform-independent interfaces to operating system functionality including file operations, environment variables, process management, and signals. It's your primary interface to the OS, with most functions returning error that should always be checked.
```go
// Environment
os.Getenv("HOME")
os.Setenv("MY_VAR", "value")
os.Environ() // []string{"KEY=value", ...}

// Process
os.Getpid()
os.Getuid()
os.Args    // Command line arguments
os.Exit(1) // Exit with code (deferred funcs DON'T run!)

// Working directory
wd, _ := os.Getwd()
os.Chdir("/tmp")
os.Hostname()
```
os.File
os.File represents an open file descriptor providing methods for reading, writing, seeking, and getting file information. It implements io.Reader, io.Writer, io.Closer, io.Seeker, making it usable with all standard library I/O functions.
```go
// os.File implements many interfaces
var _ io.Reader   = (*os.File)(nil)
var _ io.Writer   = (*os.File)(nil)
var _ io.Closer   = (*os.File)(nil)
var _ io.Seeker   = (*os.File)(nil)
var _ io.ReaderAt = (*os.File)(nil)
var _ io.WriterAt = (*os.File)(nil)

f, _ := os.Open("file.txt")
f.Read(buf)   // Read into buffer
f.Write(data) // Write data (file must be opened writable)
f.Seek(0, 0)  // Seek to beginning
f.Stat()      // Get file info
f.Fd()        // Get file descriptor
f.Name()      // Get file path
f.Close()     // Release resources
```
os.Open, os.Create, os.OpenFile
These are the three ways to open files: Open for read-only, Create for write-only (truncates or creates), and OpenFile for full control over flags and permissions. Always handle the error and defer Close() on success.
```go
// Read-only (O_RDONLY)
f, err := os.Open("input.txt")

// Write-only, create/truncate (O_WRONLY|O_CREATE|O_TRUNC, 0666)
f, err = os.Create("output.txt")

// Full control
f, err = os.OpenFile("data.log",
	os.O_APPEND|os.O_CREATE|os.O_WRONLY, // Flags
	0644)                                // Permissions

// Common flags:
// O_RDONLY - read-only
// O_WRONLY - write-only
// O_RDWR   - read-write
// O_APPEND - append to file
// O_CREATE - create if not exists
// O_TRUNC  - truncate on open
// O_EXCL   - fail if exists (with O_CREATE)
```
File reading and writing
Go offers multiple ways to read/write files depending on needs: os.ReadFile/os.WriteFile for simple cases, buffered I/O for performance, and streaming for large files. Choose based on file size and access patterns.
```go
// Simple: whole file in memory (small files)
data, err := os.ReadFile("config.json")
err = os.WriteFile("output.txt", data, 0644)

// Streaming: for large files
f, _ := os.Open("large.bin")
defer f.Close()
buf := make([]byte, 32*1024) // 32KB chunks
for {
	n, err := f.Read(buf)
	if n > 0 {
		process(buf[:n]) // handle the data BEFORE checking err:
	}                    // Read may return n > 0 along with io.EOF
	if err == io.EOF {
		break
	}
	if err != nil {
		return err
	}
}

// Buffered: for many small writes
f, _ = os.Create("output.txt")
w := bufio.NewWriter(f)
w.WriteString("line 1\n")
w.WriteString("line 2\n")
w.Flush()
f.Close()
```
File permissions
Unix-style permissions in Go use octal notation with owner/group/other bits for read(4)/write(2)/execute(1). The os.FileMode type represents permissions with methods for checking and modifying, and umask affects actual permissions.
```
┌─────────────────────────────────────────┐
│         Permission Bits (Octal)         │
├─────────────────────────────────────────┤
│  0644 = rw-r--r--                       │
│  0755 = rwxr-xr-x                       │
│  0600 = rw------- (private)             │
│  0777 = rwxrwxrwx (avoid!)              │
├─────────────────────────────────────────┤
│   Owner │ Group │ Other                 │
│    rwx  │  rwx  │  rwx                  │
│    421  │  421  │  421                  │
└─────────────────────────────────────────┘
```
```go
os.Chmod("file.txt", 0644)
os.Chown("file.txt", uid, gid)

info, _ := os.Stat("file.txt")
mode := info.Mode()
fmt.Printf("%o\n", mode.Perm()) // 644
mode.IsDir()     // false
mode.IsRegular() // true
```
os.Stat
os.Stat returns a FileInfo interface describing the named file, following symlinks. Use os.Lstat to not follow symlinks, and os.IsNotExist(err) to check if a file doesn't exist.
```go
info, err := os.Stat("file.txt")
if os.IsNotExist(err) {
	fmt.Println("File does not exist")
	return
}
if err != nil {
	return err
}

// Check if a path exists
func exists(path string) bool {
	_, err := os.Stat(path)
	return !os.IsNotExist(err)
}

// os.Lstat - doesn't follow symlinks
info, _ = os.Lstat("symlink")
if info.Mode()&os.ModeSymlink != 0 {
	target, _ := os.Readlink("symlink")
}
```
File info
The os.FileInfo interface returned by Stat provides file metadata: name, size, mode, modification time, and whether it's a directory. Access these through methods, and for system-specific info, type-assert to *syscall.Stat_t.
```go
info, _ := os.Stat("file.txt")
info.Name()    // Base name: "file.txt"
info.Size()    // Size in bytes: 1024
info.Mode()    // FileMode: -rw-r--r--
info.ModTime() // Modification time: time.Time
info.IsDir()   // Is directory: false

// System-specific info (Linux)
sys := info.Sys().(*syscall.Stat_t)
sys.Ino   // Inode number
sys.Nlink // Number of hard links
sys.Uid   // Owner user ID
sys.Gid   // Owner group ID
```
Directory operations
Go provides functions for creating, removing, and reading directories. Use os.MkdirAll for recursive creation (like mkdir -p), and os.RemoveAll for recursive deletion (careful—it's destructive!).
```go
// Create directory
os.Mkdir("mydir", 0755)      // Single directory
os.MkdirAll("a/b/c/d", 0755) // Recursive (mkdir -p)

// Remove
os.Remove("file.txt") // Single file or empty dir
os.RemoveAll("mydir") // Recursive (rm -rf) ⚠️

// Rename/move
os.Rename("old.txt", "new.txt")
os.Rename("file.txt", "subdir/file.txt") // Move

// Symlinks
os.Symlink("target", "linkname")
target, _ := os.Readlink("linkname")

// Hard links
os.Link("original", "hardlink")
```
os.ReadDir
os.ReadDir reads a directory returning a slice of DirEntry interfaces, which is more efficient than the older ioutil.ReadDir as it doesn't call Stat on each entry. Entries are sorted by filename.
```go
entries, err := os.ReadDir(".")
if err != nil {
	log.Fatal(err)
}
for _, entry := range entries {
	fmt.Printf("%s", entry.Name())
	if entry.IsDir() {
		fmt.Print("/")
	}

	// Get full FileInfo only if needed (calls Stat)
	info, _ := entry.Info()
	fmt.Printf(" %d bytes\n", info.Size())

	// Type returns the file type bits
	entry.Type().IsRegular()
	entry.Type().IsDir()
	_ = entry.Type()&os.ModeSymlink != 0
}
```
filepath package
The filepath package provides functions for manipulating file paths in a platform-independent way, using the correct separator for the OS. Always use filepath (not path) for file system paths.
```go
import "path/filepath"

filepath.Join("a", "b", "c.txt") // "a/b/c.txt"   (Unix)
                                 // "a\\b\\c.txt" (Windows)

filepath.Dir("/foo/bar/baz.txt")  // "/foo/bar"
filepath.Base("/foo/bar/baz.txt") // "baz.txt"
filepath.Ext("file.tar.gz")       // ".gz"

filepath.Abs("relative/path")    // Full absolute path
filepath.Rel("/a/b", "/a/b/c/d") // "c/d"

filepath.Split("/foo/bar/baz.txt")    // "/foo/bar/", "baz.txt"
filepath.SplitList(os.Getenv("PATH")) // Split PATH variable

filepath.Match("*.txt", "file.txt") // true, nil
filepath.Glob("*.go")               // []string of matches
```
Path manipulation
Path manipulation functions handle normalizing paths, resolving symlinks, and converting between absolute and relative paths. Use Clean to normalize paths and EvalSymlinks to resolve the actual file location.
```go
// Clean normalizes a path
filepath.Clean("a//b/../c") // "a/c"
filepath.Clean("./a/b/")    // "a/b"

// Resolve symlinks to the real path
realPath, err := filepath.EvalSymlinks("/usr/bin/python")
// Returns: "/usr/bin/python3.11"

// Check if absolute
filepath.IsAbs("/foo/bar") // true
filepath.IsAbs("foo/bar")  // false

// Volume name (Windows)
filepath.VolumeName("C:\\foo") // "C:"
filepath.VolumeName("/foo")    // "" (Unix)

// Convert slashes
filepath.ToSlash("a\\b\\c") // "a/b/c"
filepath.FromSlash("a/b/c") // OS-specific
```
Walk function
filepath.Walk and the newer filepath.WalkDir recursively traverse a directory tree, calling a function for each file/directory. WalkDir is more efficient as it doesn't call Stat unless you need FileInfo.
```go
// WalkDir (preferred, more efficient)
err := filepath.WalkDir(".", func(path string, d fs.DirEntry, err error) error {
	if err != nil {
		return err // Handle permission errors, etc.
	}
	if d.IsDir() && d.Name() == ".git" {
		return filepath.SkipDir // Skip directory
	}
	if !d.IsDir() && filepath.Ext(path) == ".go" {
		fmt.Println(path)
	}
	return nil
})

// Special return values:
// filepath.SkipDir - skip this directory
// filepath.SkipAll - stop walking entirely (Go 1.20+)
```
Temporary files and directories
The os package provides functions to create temporary files/directories with unique names, useful for tests and intermediate processing. Always clean up temp files; they're not automatically deleted.
```go
// Temporary file
f, err := os.CreateTemp("", "myapp-*.txt")
// Creates: /tmp/myapp-123456789.txt
defer os.Remove(f.Name()) // Clean up!
defer f.Close()
f.Write([]byte("temp data"))

// Temporary directory
dir, err := os.MkdirTemp("", "myapp-*")
// Creates: /tmp/myapp-123456789/
defer os.RemoveAll(dir) // Clean up!

// Specify directory (empty string = os.TempDir())
os.CreateTemp("/var/cache", "data-*.json")

// Get temp directory
os.TempDir() // "/tmp" on Unix, varies on Windows
```
File locking
Go doesn't have built-in cross-platform file locking, but you can use OS-specific syscalls or the golang.org/x/sys package. A common pattern is creating a .lock file with O_EXCL flag for advisory locking.
```go
// Advisory locking with a lock file (portable)
func acquireLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path+".lock",
		os.O_CREATE|os.O_EXCL|os.O_RDWR, 0600)
	if err != nil {
		return nil, fmt.Errorf("lock exists: %w", err)
	}
	return f, nil
}

func releaseLock(f *os.File) {
	name := f.Name()
	f.Close()
	os.Remove(name)
}

// Unix flock (not portable)
// import "syscall"
syscall.Flock(int(f.Fd()), syscall.LOCK_EX) // Exclusive
syscall.Flock(int(f.Fd()), syscall.LOCK_UN) // Unlock
```
Formatting and I/O
fmt package
The fmt package implements formatted I/O with functions analogous to C's printf/scanf. It handles printing to stdout, formatting to strings, and scanning input, with verb-based formatting that works automatically with any type.
```go
import "fmt"

// Three families of functions:
// Print*  → os.Stdout
// Sprint* → string
// Fprint* → io.Writer

fmt.Println("Hello")    // + newline
fmt.Print("No newline") // No newline
fmt.Printf("%s: %d\n", "count", 42)

s := fmt.Sprintf("formatted: %v", value)

fmt.Fprintln(os.Stderr, "Error!")
fmt.Fprintf(file, "data: %v\n", data)
```
fmt.Print, fmt.Println, fmt.Printf
These functions write to standard output: Print uses default formatting, Println adds spaces between arguments and a newline, and Printf uses format verbs for precise control.
```go
name, age := "Alice", 30

fmt.Print("Hello ", name)      // "Hello Alice" (no newline)
fmt.Println("Hello", name)     // "Hello Alice\n" (spaces added)
fmt.Printf("Hello %s\n", name) // Formatted output

// Print functions return bytes written and an error
n, err := fmt.Println("test")

// Multiple values
fmt.Println("a", "b", "c") // "a b c\n"
fmt.Print("a", "b", "c")   // "abc"
```
fmt.Sprint, fmt.Sprintln, fmt.Sprintf
The Sprint* family returns formatted strings instead of writing to stdout, essential for building strings dynamically. Sprintf is the most common for constructing formatted strings.
```go
s1 := fmt.Sprint("Hello ", name)     // "Hello Alice"
s2 := fmt.Sprintln("Hello", name)    // "Hello Alice\n"
s3 := fmt.Sprintf("Hello %s!", name) // "Hello Alice!"

// Building complex strings
msg := fmt.Sprintf("[%s] %s: %d errors at %v",
	"ERROR", filename, count, time.Now())

// Use with errors
return fmt.Errorf("failed to open %s: %w", path, err)
```
fmt.Fprint, fmt.Fprintln, fmt.Fprintf
The Fprint* family writes to any io.Writer (files, buffers, network connections, HTTP responses), making them essential for writing formatted output to various destinations.
```go
// Write to file
f, _ := os.Create("output.txt")
fmt.Fprintln(f, "Line 1")
fmt.Fprintf(f, "Value: %d\n", 42)

// Write to buffer
var buf bytes.Buffer
fmt.Fprintf(&buf, "formatted: %v", data)

// HTTP response
func handler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", r.URL.Query().Get("name"))
}

// Write to stderr
fmt.Fprintln(os.Stderr, "Warning: something happened")
```
Format verbs
Format verbs are placeholders in format strings that specify how to format values. Go has general verbs that work with any type, plus type-specific verbs for precise control.
```
┌────────────────────────────────────────────────────────┐
│                   Common Format Verbs                  │
├──────┬─────────────────────────────────────────────────┤
│ %v   │ Default format                                  │
│ %+v  │ Struct with field names                         │
│ %#v  │ Go syntax representation                        │
│ %T   │ Type of value                                   │
├──────┼─────────────────────────────────────────────────┤
│ %t   │ Boolean: true/false                             │
│ %d   │ Integer: decimal                                │
│ %b   │ Integer: binary                                 │
│ %x   │ Integer: hexadecimal (lowercase)                │
│ %o   │ Integer: octal                                  │
├──────┼─────────────────────────────────────────────────┤
│ %f   │ Float: decimal point, no exponent               │
│ %e   │ Float: scientific notation                      │
│ %g   │ Float: %e for large exponents, %f otherwise     │
├──────┼─────────────────────────────────────────────────┤
│ %s   │ String                                          │
│ %q   │ Quoted string                                   │
│ %p   │ Pointer                                         │
└──────┴─────────────────────────────────────────────────┘
```
```go
// Width and precision
fmt.Printf("%8d", 42)       // "      42" (width 8)
fmt.Printf("%-8d", 42)      // "42      " (left align)
fmt.Printf("%08d", 42)      // "00000042" (zero pad)
fmt.Printf("%.2f", 3.14159) // "3.14" (precision)
fmt.Printf("%8.2f", 3.14)   // "    3.14" (width + precision)
```
fmt.Scan, fmt.Scanln, fmt.Scanf
The Scan* functions read formatted input from stdin. Scan reads space-separated tokens, Scanln stops at newline, and Scanf uses format verbs. They return the number of items successfully scanned.
```go
var name string
var age int

// Space-separated input
fmt.Print("Enter name age: ")
n, err := fmt.Scan(&name, &age) // "Alice 30"

// Line-based input (stops at newline)
fmt.Scanln(&name, &age)

// Formatted input
fmt.Scanf("%s is %d", &name, &age) // "Alice is 30"

// From a reader (not stdin)
fmt.Fscan(reader, &name, &age)

// From a string
fmt.Sscan("Alice 30", &name, &age)
fmt.Sscanf("Alice is 30", "%s is %d", &name, &age)
```
Custom format with Stringer
Implementing the fmt.Stringer interface (String() string) allows types to define their default text representation, used by %s and %v verbs. This is the most common way to make types printable.
```go
type Person struct {
	Name string
	Age  int
}

func (p Person) String() string {
	return fmt.Sprintf("%s (%d years old)", p.Name, p.Age)
}

p := Person{"Alice", 30}
fmt.Println(p)        // "Alice (30 years old)"
fmt.Printf("%s\n", p) // "Alice (30 years old)"
fmt.Printf("%v\n", p) // "Alice (30 years old)"

// For error types, implement the error interface instead
func (p Person) Error() string { ... }
```
Custom format with Formatter
The fmt.Formatter interface provides complete control over formatting for all verbs. Implement Format(f fmt.State, verb rune) when you need different output for different format verbs.
```go
type Person struct {
	Name string
	Age  int
}

func (p Person) Format(f fmt.State, verb rune) {
	switch verb {
	case 's', 'v':
		fmt.Fprintf(f, "%s", p.Name)
	case 'q':
		fmt.Fprintf(f, "%q", p.Name)
	case 'd':
		fmt.Fprintf(f, "%d", p.Age)
	default:
		fmt.Fprintf(f, "%%!%c(Person)", verb)
	}
	// f.Flag('+'), f.Width(), f.Precision() are also available
}

p := Person{"Alice", 30}
fmt.Printf("%s\n", p) // Alice
fmt.Printf("%d\n", p) // 30
fmt.Printf("%q\n", p) // "Alice"
```
Time and Date
time.Time
time.Time represents an instant in time with nanosecond precision, including location (timezone) information. It's a value type (not a pointer), so pass by value and compare with methods, not ==.
```go
var t time.Time // Zero value: "0001-01-01 00:00:00 +0000 UTC"
t.IsZero()      // true - check for zero value

t = time.Now() // Current local time
t.Year()       // 2024
t.Month()      // time.January (month constant)
t.Day()        // 15
t.Hour(), t.Minute(), t.Second(), t.Nanosecond()
t.Weekday() // time.Monday
t.YearDay() // Day of year (1-365/366)

t.Unix()      // Seconds since epoch
t.UnixMilli() // Milliseconds since epoch
t.UnixNano()  // Nanoseconds since epoch
```
time.Now
time.Now() returns the current local time, containing both wall clock and monotonic clock readings. For UTC time, chain with UTC() method; for specific timezone, use In().
```go
now := time.Now()       // Local time with monotonic clock
utc := time.Now().UTC() // UTC time

// Components
year, month, day := now.Date()
hour, min, sec := now.Clock()

// Unix timestamps
now.Unix()      // 1705312345
now.UnixMilli() // 1705312345678
now.UnixNano()  // 1705312345678901234

// Create a time from Unix values
time.Unix(1705312345, 0)      // From seconds
time.UnixMilli(1705312345678) // From milliseconds
```
time.Parse
time.Parse parses a formatted string into a time.Time using a layout string based on the reference time Mon Jan 2 15:04:05 MST 2006. This specific date is used because its components are 1, 2, 3, 4, 5, 6, 7.
```go
// The magic reference time: Mon Jan 2 15:04:05 MST 2006
//                           01/02 03:04:05PM '06 -0700

t, err := time.Parse("2006-01-02", "2024-01-15")
t, err = time.Parse("2006-01-02 15:04:05", "2024-01-15 14:30:00")
t, err = time.Parse(time.RFC3339, "2024-01-15T14:30:00Z")

// Parse in a specific timezone
loc, _ := time.LoadLocation("America/New_York")
t, err = time.ParseInLocation("2006-01-02 15:04", "2024-01-15 14:30", loc)

// Common error: month/day confusion
// "01" = month, "02" = day - never swap!
```
time.Format
time.Format formats a time.Time into a string using the same reference time layout. Unlike most languages, Go uses a mnemonic reference time instead of abstract format codes.
```go
t := time.Now()

t.Format("2006-01-02")              // "2024-01-15"
t.Format("02/01/2006")              // "15/01/2024"
t.Format("15:04:05")                // "14:30:00"
t.Format("3:04 PM")                 // "2:30 PM"
t.Format("Monday, January 2, 2006") // "Monday, January 15, 2024"
t.Format(time.RFC3339)              // "2024-01-15T14:30:00Z"

// Memory aid: 1 2 3 4 5 6 7
// Month=1, Day=2, Hour=3(PM)/15, Min=4, Sec=5, Year=6, TZ=-7
```
Time layout constants
The time package provides predefined layout constants for common formats like RFC3339, RFC822, and more. Use these for standard formats; they're less error-prone than writing layouts manually.
```go
const (
	Layout      = "01/02 03:04:05PM '06 -0700"
	ANSIC       = "Mon Jan _2 15:04:05 2006"
	UnixDate    = "Mon Jan _2 15:04:05 MST 2006"
	RubyDate    = "Mon Jan 02 15:04:05 -0700 2006"
	RFC822      = "02 Jan 06 15:04 MST"
	RFC822Z     = "02 Jan 06 15:04 -0700"
	RFC850      = "Monday, 02-Jan-06 15:04:05 MST"
	RFC1123     = "Mon, 02 Jan 2006 15:04:05 MST"
	RFC1123Z    = "Mon, 02 Jan 2006 15:04:05 -0700"
	RFC3339     = "2006-01-02T15:04:05Z07:00" // ISO 8601
	RFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00"
	Kitchen     = "3:04PM"
	Stamp       = "Jan _2 15:04:05"
)
```
Duration
time.Duration represents elapsed time as an int64 nanosecond count. Use duration constants for readability and the ParseDuration function to parse strings like "1h30m".
```go
// Duration constants
time.Nanosecond  // 1
time.Microsecond // 1000 nanoseconds
time.Millisecond // 1000 microseconds
time.Second      // 1000 milliseconds
time.Minute      // 60 seconds
time.Hour        // 60 minutes

// Creating durations
d := 5 * time.Second
d = time.Duration(500) * time.Millisecond
d, err := time.ParseDuration("1h30m45s")
d, err = time.ParseDuration("2.5s") // 2.5 seconds

// Duration methods
d.Hours()        // float64
d.Minutes()      // float64
d.Seconds()      // float64
d.Milliseconds() // int64
d.String()       // "1h30m45s"
```
Sleep
time.Sleep pauses the current goroutine for at least the specified duration. It doesn't return early, but in concurrent code, prefer context with timeout or time.After for cancellability.
```go
time.Sleep(2 * time.Second)
time.Sleep(500 * time.Millisecond)

// Avoid: not cancellable
time.Sleep(30 * time.Second)

// Prefer: select with context for cancellation
select {
case <-time.After(30 * time.Second):
	// Timeout
case <-ctx.Done():
	// Cancelled
case result := <-resultCh:
	// Got result
}
```
Timers (time.Timer)
time.Timer fires once after a specified duration, delivering the current time on its channel. Use Stop() to cancel and drain the channel if Stop returns false. Unlike Sleep, timers can be cancelled.
```go
timer := time.NewTimer(5 * time.Second)

select {
case t := <-timer.C:
	fmt.Println("Timer fired at", t)
case <-cancel:
	if !timer.Stop() {
		<-timer.C // Drain channel if it already fired
	}
	fmt.Println("Cancelled")
}

// Reset for reuse
timer.Reset(10 * time.Second)

// time.After - simpler but can't be cancelled
select {
case <-time.After(5 * time.Second):
	// Timeout
}
// ⚠️ time.After keeps its timer alive until it fires - avoid in loops
```
Tickers (time.Ticker)
time.Ticker delivers time values at regular intervals until stopped. Always call Stop() to release resources; unlike timers, tickers repeat forever. Perfect for periodic tasks.
```go
ticker := time.NewTicker(1 * time.Second)
defer ticker.Stop() // Always stop!

for {
	select {
	case t := <-ticker.C:
		fmt.Println("Tick at", t)
		doPeriodicWork()
	case <-done:
		return
	}
}

// time.Tick - convenience function (can't be stopped, leaks!)
// Only use it if you never need to stop
for range time.Tick(1 * time.Second) {
	// Runs forever
}
```
Time zones
Time zones in Go are represented by *time.Location, loaded by name from the system's timezone database. Always handle timezone operations explicitly—implicit conversions are a common source of bugs.
```go
// Load a timezone
loc, err := time.LoadLocation("America/New_York")
loc, err = time.LoadLocation("Europe/London")
loc, err = time.LoadLocation("Asia/Tokyo")

// Fixed offset (avoid if possible)
loc = time.FixedZone("IST", 5*60*60+30*60) // UTC+5:30

// Convert a time to a timezone
nyTime := t.In(loc)

// Create a time in a timezone
t := time.Date(2024, 1, 15, 14, 30, 0, 0, loc)

// Important locations
time.UTC   // *Location for UTC
time.Local // *Location for local system time
```
Time arithmetic
Time arithmetic uses Add for adding durations and Sub for getting the duration between times. For calendar operations (add months/years), use AddDate which handles variable month lengths correctly.
```go
t := time.Now()

// Add a duration
future := t.Add(2 * time.Hour)
past := t.Add(-24 * time.Hour) // Negative for past

// Difference between times (returns Duration)
elapsed := time.Since(startTime)  // Same as: time.Now().Sub(startTime)
remaining := time.Until(deadline) // Same as: deadline.Sub(time.Now())

// Calendar arithmetic
nextMonth := t.AddDate(0, 1, 0) // Add 1 month
nextYear := t.AddDate(1, 0, 0)  // Add 1 year
lastWeek := t.AddDate(0, 0, -7) // 7 days ago

// Comparison
t.Before(other)
t.After(other)
t.Equal(other) // Use this, not ==
```
Monotonic clocks
Go's time.Time contains both wall clock (displayable) and monotonic clock (for measuring elapsed time). The monotonic reading ensures accurate duration measurement even if the system clock changes (NTP updates, DST).
```go
// time.Now() includes a monotonic reading
start := time.Now()
doWork()
elapsed := time.Since(start) // Uses monotonic clock ✓

// Wall clock operations strip the monotonic reading
t := time.Now()
t.Round(0)      // Strips monotonic
t.UTC()         // Strips monotonic
t.In(loc)       // Strips monotonic
t.AddDate(...)  // Strips monotonic

// Monotonic reading preserved in:
t.Add(duration) // Preserved
t.Sub(other)    // Uses monotonic if both have it

// Check for a monotonic reading (debugging)
fmt.Println(t) // Includes "m=±<value>" if present
```
JSON Handling
encoding/json package
The encoding/json package implements JSON encoding/decoding per RFC 7159. It handles marshaling Go values to JSON bytes and unmarshaling JSON to Go values, with struct tags controlling the mapping.
```go
import "encoding/json"

type Person struct {
	Name string `json:"name"`
	Age  int    `json:"age"`
}

// Encode to JSON
p := Person{Name: "Alice", Age: 30}
data, err := json.Marshal(p)
// []byte(`{"name":"Alice","age":30}`)

// Decode from JSON
var p2 Person
err = json.Unmarshal(data, &p2)

// Pretty print
data, _ = json.MarshalIndent(p, "", "  ")
```
json.Marshal
json.Marshal converts a Go value to JSON bytes. It handles structs, maps, slices, and basic types automatically. Private fields are ignored; only exported (capitalized) fields are marshaled.
```go
type Response struct {
	Status  string   `json:"status"`
	Results []Result `json:"results"`
}

data, err := json.Marshal(response)
if err != nil {
	log.Fatal(err)
}

// Type mapping:
// struct       → {}
// map[string]T → {}
// slice/array  → []
// string       → "string"
// int/float    → number
// bool         → true/false
// nil          → null

// MarshalIndent for pretty printing
data, _ = json.MarshalIndent(response, "", "  ")
```
json.Unmarshal
json.Unmarshal parses JSON into a Go value. Pass a pointer to the target value; unknown fields are ignored by default. Returns an error for malformed JSON or type mismatches.
data := []byte(`{"name":"Alice","age":30,"city":"NYC"}`) var person Person err := json.Unmarshal(data, &person) // Unmarshal into map (dynamic/unknown structure) var result map[string]any json.Unmarshal(data, &result) // Unmarshal into slice numData := []byte(`[1, 2, 3]`) var nums []int json.Unmarshal(numData, &nums) // Partial unmarshal var partial struct { Name string `json:"name"` } json.Unmarshal(data, &partial) // Only gets "name"
JSON tags
Struct tags control JSON field names, omission rules, and type handling. The tag format is json:"name,options" where options can include omitempty, -, and string.
type User struct { // Different JSON name ID int `json:"id"` // Same name (optional but explicit) Username string `json:"username"` // Omit if zero value Email string `json:"email,omitempty"` // Ignore completely Password string `json:"-"` // Keep field named "-" Minus string `json:"-,"` // Number as string Balance int `json:"balance,string"` // Embedded uses type name Address // becomes {"Address": {...}} // Flatten embedded Profile `json:",inline"` // Doesn't exist! Use no tag. }
Custom JSON marshaling
Implement json.Marshaler interface with MarshalJSON() ([]byte, error) to control how a type is encoded to JSON. Useful for custom formats, computed values, or complex transformations.
type Time struct { time.Time } func (t Time) MarshalJSON() ([]byte, error) { formatted := t.Format("2006-01-02") return json.Marshal(formatted) // Returns "2024-01-15" } type Status int const ( StatusPending Status = iota StatusActive StatusDone ) func (s Status) MarshalJSON() ([]byte, error) { names := []string{"pending", "active", "done"} return json.Marshal(names[s]) } // Usage: {"status": "active"} instead of {"status": 1}
Custom JSON unmarshaling
Implement json.Unmarshaler interface with UnmarshalJSON([]byte) error to control how a type is decoded from JSON. The receiver must be a pointer to modify the value.
type Time struct { time.Time } func (t *Time) UnmarshalJSON(data []byte) error { var s string if err := json.Unmarshal(data, &s); err != nil { return err } parsed, err := time.Parse("2006-01-02", s) if err != nil { return err } t.Time = parsed return nil } type FlexibleInt int func (f *FlexibleInt) UnmarshalJSON(data []byte) error { // Accept both number and string var n int if err := json.Unmarshal(data, &n); err == nil { *f = FlexibleInt(n) return nil } var s string if err := json.Unmarshal(data, &s); err != nil { return err } i, _ := strconv.Atoi(s) *f = FlexibleInt(i) return nil }
json.Encoder
json.Encoder writes JSON values to an io.Writer, useful for streaming JSON to files, HTTP responses, or network connections. Unlike Marshal, it writes straight to the destination without building an intermediate []byte, which matters when writing many values to a stream.
// Write to file file, _ := os.Create("data.json") encoder := json.NewEncoder(file) encoder.SetIndent("", " ") // Pretty print for _, item := range items { if err := encoder.Encode(item); err != nil { log.Fatal(err) } } // Write to HTTP response func handler(w http.ResponseWriter, r *http.Request) { w.Header().Set("Content-Type", "application/json") json.NewEncoder(w).Encode(response) } // Control escaping encoder.SetEscapeHTML(false) // Don't escape <>&
json.Decoder
json.Decoder reads JSON values from an io.Reader, supporting streaming decode of multiple values. Use for HTTP request bodies, files, and any stream. More memory-efficient than Unmarshal for large inputs.
// Read from HTTP request func handler(w http.ResponseWriter, r *http.Request) { var req Request if err := json.NewDecoder(r.Body).Decode(&req); err != nil { http.Error(w, err.Error(), 400) return } } // Read multiple JSON objects decoder := json.NewDecoder(file) for decoder.More() { var obj Object if err := decoder.Decode(&obj); err != nil { break } process(obj) } // Strict mode: reject unknown fields decoder.DisallowUnknownFields()
Streaming JSON
Streaming JSON handles large arrays without loading everything into memory. Use Decoder.Token() to read JSON tokens one at a time, processing array elements as they're read.
decoder := json.NewDecoder(largeFile) // Expect array start if t, _ := decoder.Token(); t != json.Delim('[') { return errors.New("expected array") } // Stream array elements for decoder.More() { var item Item if err := decoder.Decode(&item); err != nil { return err } processItem(item) // Handle each without loading all } // Read array end if t, _ := decoder.Token(); t != json.Delim(']') { return errors.New("expected array end") } // Tokens: json.Delim ([ ] { }), strings, numbers, bools, nil
JSON and embedded structs
Embedded structs are "flattened" into the containing struct's JSON representation. This enables composition and field promotion without inheritance. Use pointer embedding if the embedded struct might be nil.
type Timestamps struct { CreatedAt time.Time `json:"created_at"` UpdatedAt time.Time `json:"updated_at"` } type User struct { ID int `json:"id"` Name string `json:"name"` Timestamps // Embedded - fields are flattened } u := User{ID: 1, Name: "Alice"} json.Marshal(u) // {"id":1,"name":"Alice","created_at":"...","updated_at":"..."} // NOT: {"id":1,"name":"Alice","Timestamps":{"created_at":"..."}} // If you want nesting, use named field: type User struct { ID int `json:"id"` Times Timestamps `json:"timestamps"` }
omitempty
The omitempty option causes empty fields to be omitted from JSON output. Empty values are: 0 for numbers, "" for strings, false for bools, nil pointers and interfaces, and any array, slice, map, or string of length zero.
type User struct { Name string `json:"name"` Age int `json:"age,omitempty"` // Omit if 0 Email string `json:"email,omitempty"` // Omit if "" Admin bool `json:"admin,omitempty"` // Omit if false Profile *Prof `json:"profile,omitempty"` // Omit if nil Tags []string `json:"tags,omitempty"` // Omit if nil } u := User{Name: "Alice"} json.Marshal(u) // {"name":"Alice"} - all zero fields omitted // ⚠️ Gotcha: 0 and false are valid values! // Use pointer types if 0/false is meaningful: type Config struct { Count *int `json:"count,omitempty"` // nil = omit, *0 = include Debug *bool `json:"debug,omitempty"` }
Ignoring fields (-)
Use json:"-" to completely exclude a field from JSON marshaling and unmarshaling. The field won't appear in output and won't be populated from input, useful for internal or sensitive data.
type User struct { ID int `json:"id"` Username string `json:"username"` Password string `json:"-"` // Never marshal/unmarshal // Internal tracking cache map[string]any `json:"-"` mutex sync.Mutex // Unexported, already ignored } u := User{ID: 1, Username: "alice", Password: "secret"} data, _ := json.Marshal(u) // {"id":1,"username":"alice"} - no password! // To have a field literally named "-": type Weird struct { Minus string `json:"-,"` // Note the comma } // {"-":"value"}
RawMessage
json.RawMessage is a raw encoded JSON value that delays parsing or allows passing through unchanged. Useful for polymorphic JSON, partial parsing, or preserving exact encoding.
type Event struct { Type string `json:"type"` Payload json.RawMessage `json:"payload"` // Raw JSON } data := []byte(`{"type":"user","payload":{"id":1,"name":"Alice"}}`) var event Event json.Unmarshal(data, &event) // Parse payload based on type switch event.Type { case "user": var user User json.Unmarshal(event.Payload, &user) case "order": var order Order json.Unmarshal(event.Payload, &order) } // Pass through unchanged output, _ := json.Marshal(event) // payload preserved exactly
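The two-phase decode reads naturally as a small program; decodeUserEvent is an illustrative helper wrapping the type switch for the "user" case:

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Event struct {
	Type    string          `json:"type"`
	Payload json.RawMessage `json:"payload"` // left unparsed until Type is known
}

type User struct {
	ID   int    `json:"id"`
	Name string `json:"name"`
}

// decodeUserEvent decodes the envelope first, then the payload,
// rejecting events of any other type.
func decodeUserEvent(data []byte) (User, error) {
	var ev Event
	if err := json.Unmarshal(data, &ev); err != nil {
		return User{}, err
	}
	if ev.Type != "user" {
		return User{}, fmt.Errorf("unexpected event type %q", ev.Type)
	}
	var u User
	err := json.Unmarshal(ev.Payload, &u)
	return u, err
}

func main() {
	data := []byte(`{"type":"user","payload":{"id":1,"name":"Alice"}}`)
	u, err := decodeUserEvent(data)
	if err != nil {
		panic(err)
	}
	fmt.Println(u.ID, u.Name) // 1 Alice
}
```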
Other Encoding
encoding/xml
The encoding/xml package provides XML encoding/decoding similar to encoding/json, using struct tags to map between Go structs and XML elements/attributes. It handles namespaces, CDATA, and nested elements.
type Person struct { XMLName xml.Name `xml:"person"` ID int `xml:"id,attr"` // Attribute Name string `xml:"name"` // Element Email string `xml:"contact>email"` // Nested element Bio string `xml:",cdata"` // CDATA section Notes string `xml:",innerxml"` // Raw XML content } p := Person{ID: 1, Name: "Alice", Email: "a@b.com"} data, _ := xml.MarshalIndent(p, "", " ") // <person id="1"> // <name>Alice</name> // <contact><email>a@b.com</email></contact> // </person> xml.Unmarshal(data, &p)
encoding/csv
The encoding/csv package reads and writes CSV files following RFC 4180. It handles quoted fields, embedded commas, and newlines within fields. Use Reader.Read() for row-by-row or ReadAll() for entire file.
// Writing CSV file, _ := os.Create("data.csv") writer := csv.NewWriter(file) defer writer.Flush() writer.Write([]string{"name", "age", "city"}) writer.Write([]string{"Alice", "30", "NYC"}) writer.Write([]string{"Bob", "25", "LA"}) // Reading CSV file, _ := os.Open("data.csv") reader := csv.NewReader(file) // Read all at once records, _ := reader.ReadAll() // Or read row by row for { row, err := reader.Read() if err == io.EOF { break } if err != nil { log.Fatal(err) } fmt.Println(row) } reader.Comma = ';' // Custom delimiter reader.LazyQuotes = true // Allow non-standard quotes
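RFC 4180 quoting is handled for you, which is easiest to see in a round trip through memory. The writeReadCSV helper is illustrative; it uses WriteAll, which also flushes:

```go
package main

import (
	"bytes"
	"encoding/csv"
	"fmt"
)

// writeReadCSV round-trips rows through an in-memory buffer. Fields
// containing commas or newlines are quoted automatically on write
// and unquoted on read.
func writeReadCSV(rows [][]string) ([][]string, error) {
	var buf bytes.Buffer
	w := csv.NewWriter(&buf)
	if err := w.WriteAll(rows); err != nil { // WriteAll flushes for us
		return nil, err
	}
	return csv.NewReader(&buf).ReadAll()
}

func main() {
	rows := [][]string{
		{"name", "note"},
		{"Alice", "likes commas, apparently"},
	}
	out, err := writeReadCSV(rows)
	if err != nil {
		panic(err)
	}
	fmt.Println(out[1][1]) // quoting preserved the embedded comma
}
```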
encoding/gob
encoding/gob is Go's native binary serialization format, optimized for Go-to-Go communication. It's more compact than JSON for Go types, transmits type information in the stream, and handles recursive types (though not values containing cycles, which the package documentation flags as problematic).
type User struct { ID int Name string Tags []string } // Encode var buf bytes.Buffer enc := gob.NewEncoder(&buf) enc.Encode(User{ID: 1, Name: "Alice", Tags: []string{"admin"}}) // Decode dec := gob.NewDecoder(&buf) var user User dec.Decode(&user) // Register interface types gob.Register(User{}) // For interface{} fields gob.Register(MyError{}) // Custom error types // ⚠️ Not language-agnostic - Go only! // ⚠️ Format may change between Go versions
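A gob round trip through an in-memory buffer ties the encoder and decoder together; gobRoundTrip is an illustrative helper:

```go
package main

import (
	"bytes"
	"encoding/gob"
	"fmt"
)

type User struct {
	ID   int
	Name string
	Tags []string
}

// gobRoundTrip encodes then decodes a value through an in-memory
// buffer, the same flow you'd use over a network connection.
func gobRoundTrip(in User) (User, error) {
	var buf bytes.Buffer
	if err := gob.NewEncoder(&buf).Encode(in); err != nil {
		return User{}, err
	}
	var out User
	err := gob.NewDecoder(&buf).Decode(&out)
	return out, err
}

func main() {
	u, err := gobRoundTrip(User{ID: 1, Name: "Alice", Tags: []string{"admin"}})
	if err != nil {
		panic(err)
	}
	fmt.Println(u.Name, u.Tags) // Alice [admin]
}
```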
encoding/base64
The encoding/base64 package implements base64 encoding per RFC 4648. Use StdEncoding for standard, URLEncoding for URL-safe (uses - and _ instead of + and /), and RawStdEncoding to omit padding.
import "encoding/base64" data := []byte("Hello, World!") // Standard encoding (with padding) encoded := base64.StdEncoding.EncodeToString(data) // "SGVsbG8sIFdvcmxkIQ==" decoded, _ := base64.StdEncoding.DecodeString(encoded) // URL-safe encoding (for URLs/filenames) encoded = base64.URLEncoding.EncodeToString(data) // Uses - and _ instead of + and / // Without padding (=) encoded = base64.RawStdEncoding.EncodeToString(data) encoded = base64.RawURLEncoding.EncodeToString(data) // Stream encoding/decoding encoder := base64.NewEncoder(base64.StdEncoding, writer) decoder := base64.NewDecoder(base64.StdEncoding, reader)
encoding/hex
The encoding/hex package implements hexadecimal encoding/decoding, converting bytes to/from hex strings. Commonly used for displaying binary data, checksums, and cryptographic hashes.
import "encoding/hex" data := []byte{0xDE, 0xAD, 0xBE, 0xEF} // Encode to string str := hex.EncodeToString(data) // "deadbeef" // Decode from string data, err := hex.DecodeString("deadbeef") // Dump format (like hexdump utility) dump := hex.Dump(data) // 00000000 de ad be ef |....| // Encoder/Decoder for streams encoder := hex.NewEncoder(writer) decoder := hex.NewDecoder(reader) // Common use: display hash hash := sha256.Sum256([]byte("hello")) fmt.Println(hex.EncodeToString(hash[:]))
Protocol Buffers (google.golang.org/protobuf)
Protocol Buffers (protobuf) is Google's language-neutral binary serialization format. Define schemas in .proto files, generate Go code with protoc, and use the generated types for efficient serialization.
// user.proto syntax = "proto3"; package main; option go_package = "./pb"; message User { int32 id = 1; string name = 2; repeated string tags = 3; }
// go install google.golang.org/protobuf/cmd/protoc-gen-go@latest // protoc --go_out=. user.proto import "yourmodule/pb" import "google.golang.org/protobuf/proto" user := &pb.User{ Id: 1, Name: "Alice", Tags: []string{"admin"}, } // Serialize data, _ := proto.Marshal(user) // Deserialize var user2 pb.User proto.Unmarshal(data, &user2) // Much smaller and faster than JSON
Regular Expressions
regexp package
The regexp package implements RE2 regular expressions, guaranteeing linear time execution. Unlike PCRE, it doesn't support backreferences or lookahead, but it's safe from exponential blowup. Compile once, use many times.
import "regexp" // Check if matches matched, _ := regexp.MatchString(`^\d+$`, "12345") // true // Compile for reuse (preferred) re := regexp.MustCompile(`\d+`) re.MatchString("abc123") // true // Case insensitive re := regexp.MustCompile(`(?i)hello`) re.MatchString("HELLO") // true // RE2 features: .+*?|()[] ^$ \d\w\s etc. // NOT supported: backreferences (\1), lookahead (?=), lookbehind (?<=)
Compile vs MustCompile
Compile returns an error for invalid patterns while MustCompile panics, making it suitable for package-level variables where patterns are known to be valid. Use Compile when patterns come from user input.
// MustCompile - panics on invalid pattern // Use for compile-time known patterns var emailRe = regexp.MustCompile(`^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$`) // Compile - returns error // Use for runtime/user-provided patterns func search(pattern string) error { re, err := regexp.Compile(pattern) if err != nil { return fmt.Errorf("invalid pattern: %w", err) } // use re... return nil } // POSIX - longest match semantics re, _ := regexp.CompilePOSIX(`a+`)
MatchString
MatchString and Match check if the pattern matches anywhere in the string/byte slice. They're convenience functions; for repeated matching, compile the pattern first.
// Quick check (compiles pattern each time - avoid in loops) matched, err := regexp.MatchString(`\d+`, "abc123") // Preferred: compile once re := regexp.MustCompile(`\d+`) re.MatchString("abc123") // true re.Match([]byte("abc123")) // true // Match at beginning re := regexp.MustCompile(`^Hello`) re.MatchString("Hello World") // true re.MatchString("Say Hello") // false // Full match (anchor both ends) re := regexp.MustCompile(`^\d+$`) re.MatchString("123") // true re.MatchString("abc123") // false
FindString, FindAllString
FindString returns the first match, FindAllString returns all matches. The second argument to FindAll* limits the number of matches (-1 for all).
re := regexp.MustCompile(`\d+`) // First match only match := re.FindString("abc123def456") // "123" // Returns "" if no match // All matches matches := re.FindAllString("abc123def456ghi789", -1) // ["123", "456", "789"] // Limit matches matches := re.FindAllString("a1b2c3d4", 2) // ["1", "2"] // Byte slice versions re.Find([]byte("...")) re.FindAll([]byte("..."), -1) // Find index (start, end positions) loc := re.FindStringIndex("abc123") // [3, 6]
FindStringSubmatch
FindStringSubmatch returns the match plus all capturing groups. Index 0 is the full match, 1+ are the groups. Use named groups with (?P<name>...) and access with SubexpNames().
re := regexp.MustCompile(`(\w+)@(\w+)\.(\w+)`) match := re.FindStringSubmatch("user@example.com") // ["user@example.com", "user", "example", "com"] // match[0] = full match, match[1-3] = groups // All submatches matches := re.FindAllStringSubmatch(text, -1) // Named groups re := regexp.MustCompile(`(?P<user>\w+)@(?P<domain>\w+)\.(?P<tld>\w+)`) match := re.FindStringSubmatch("user@example.com") names := re.SubexpNames() // ["", "user", "domain", "tld"] // Build map of named groups result := make(map[string]string) for i, name := range names[1:] { result[name] = match[i+1] } // {"user":"user", "domain":"example", "tld":"com"}
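The name-to-value mapping loop generalizes into a reusable helper; namedGroups below is illustrative (not a stdlib function):

```go
package main

import (
	"fmt"
	"regexp"
)

var emailRe = regexp.MustCompile(`(?P<user>\w+)@(?P<domain>\w+)\.(?P<tld>\w+)`)

// namedGroups maps each named capture group to its matched text,
// returning nil if the pattern doesn't match at all.
func namedGroups(re *regexp.Regexp, s string) map[string]string {
	match := re.FindStringSubmatch(s)
	if match == nil {
		return nil
	}
	out := make(map[string]string)
	for i, name := range re.SubexpNames() {
		if i > 0 && name != "" { // index 0 is the full match; skip unnamed groups
			out[name] = match[i]
		}
	}
	return out
}

func main() {
	m := namedGroups(emailRe, "user@example.com")
	fmt.Println(m["user"], m["domain"], m["tld"]) // user example com
}
```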
ReplaceAllString
ReplaceAllString replaces all matches with a replacement string. Use $1, $2 for captured groups, ${name} for named groups, and ReplaceAllStringFunc for dynamic replacements.
re := regexp.MustCompile(`\d+`) result := re.ReplaceAllString("a1b2c3", "X") // "aXbXcX" // Using captured groups re := regexp.MustCompile(`(\w+)@(\w+)`) result := re.ReplaceAllString("foo@bar", "$2@$1") // "bar@foo" // Named groups re := regexp.MustCompile(`(?P<first>\w+)-(?P<second>\w+)`) result := re.ReplaceAllString("a-b", "${second}-${first}") // "b-a" // Dynamic replacement with function re := regexp.MustCompile(`\d+`) result := re.ReplaceAllStringFunc("a1b2c3", func(s string) string { n, _ := strconv.Atoi(s) return strconv.Itoa(n * 2) }) // "a2b4c6" // Literal replacement (no $ expansion) re.ReplaceAllLiteralString("a1b2", "$$") // "a$$b$$"
Regexp methods
Beyond matching and replacing, Regexp provides methods for splitting strings, finding match positions, and expanding templates. Most methods have variants for strings, bytes, and io.Reader.
re := regexp.MustCompile(`\s+`) // Split parts := re.Split("a b c d", -1) // ["a", "b", "c", "d"] parts := re.Split("a b c", 2) // ["a", "b c"] // Expand template re := regexp.MustCompile(`(?P<first>\w+)-(?P<last>\w+)`) template := []byte("Name: $last, $first") match := re.FindSubmatchIndex([]byte("John-Doe")) result := re.Expand(nil, template, []byte("John-Doe"), match) // "Name: Doe, John" // Number of capturing groups re.NumSubexp() // Get group names re.SubexpNames() // ["", "first", "last"] re.SubexpIndex("first") // 1
Regexp performance considerations
RE2's linear time guarantee means patterns are always safe, but compilation is expensive. Compile patterns once (usually at package level), avoid unnecessary capturing groups, and prefer literal string methods when possible.
// ❌ Bad: compiles every call func hasDigit(s string) bool { matched, _ := regexp.MatchString(`\d`, s) return matched } // ✅ Good: compile once var digitRe = regexp.MustCompile(`\d`) func hasDigit(s string) bool { return digitRe.MatchString(s) } // ✅ Better: use strings package if possible func hasDigit(s string) bool { return strings.ContainsAny(s, "0123456789") } // Use non-capturing groups when you don't need captures re := regexp.MustCompile(`(?:\d+)-(\w+)`) // Only capture \w+ // Anchors speed up negative matches re := regexp.MustCompile(`^prefix`) // Fails fast if no prefix