Go Monorepo Patterns: Structuring Large-Scale Backend Systems
A deep dive into proven patterns for organizing Go monorepos that house multiple backend services, shared libraries, and tooling under a single repository.
Building backend systems at scale inevitably raises the question of repository structure. At Klivvr, Andromeda began as a handful of microservices scattered across separate repositories. As the number of services grew, so did the friction: shared library versioning became a chore, cross-service refactors required synchronized pull requests, and onboarding new engineers meant cloning a dozen repos before they could run anything meaningful. Migrating to a Go monorepo resolved these pain points and introduced new organizational patterns that have served us well as the codebase expanded to dozens of services and hundreds of thousands of lines of Go.
This article walks through the directory layout, module strategies, dependency management techniques, and code-sharing patterns we use in Andromeda. Whether you are starting a new monorepo or migrating existing services, these patterns will help you avoid the most common pitfalls.
Directory Layout: Convention Over Configuration
The single most impactful decision in a monorepo is the top-level directory structure. A well-chosen layout communicates intent, enforces boundaries, and makes tooling easier to build. Here is the skeleton we use in Andromeda:
// Root of the monorepo
// ├── cmd/            -- entry points for each service binary
// │   ├── gateway/
// │   │   └── main.go
// │   ├── accounts/
// │   │   └── main.go
// │   └── notifications/
// │       └── main.go
// ├── internal/       -- private packages, not importable outside the module
// │   ├── accounts/
// │   │   ├── domain/
// │   │   ├── app/
// │   │   ├── infra/
// │   │   └── ports/
// │   ├── notifications/
// │   └── shared/
// ├── pkg/            -- public, reusable libraries
// │   ├── grpcutil/
// │   ├── natsutil/
// │   ├── pagination/
// │   └── observability/
// ├── proto/          -- protobuf definitions
// ├── migrations/     -- database migrations per service
// ├── scripts/        -- build, lint, deploy helpers
// ├── go.mod
// ├── go.sum
// └── Makefile

Several conventions are worth calling out. First, every deployable binary lives under cmd/. Each subdirectory contains exactly one main.go file, keeping entry points minimal and pushing all logic into internal/ packages. Second, internal/ enforces Go's built-in visibility rule: packages under internal/ cannot be imported by code outside the module root. This is a powerful boundary mechanism that prevents accidental coupling. Third, pkg/ houses genuinely reusable libraries that could, in theory, live in their own module. We keep them in the monorepo for convenience but design them as if they were standalone.
Within each service directory inside internal/, we follow a domain-driven design layout with domain/, app/, infra/, and ports/ sub-packages. This is covered in more detail in our article on DDD in Go, but the key takeaway is that every service follows the same internal shape, making it trivial to navigate unfamiliar code.
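To make the shape concrete, here is a minimal, self-contained sketch of how the domain/, ports/, app/, and infra/ layers relate. The names (Account, AccountRepo, WithdrawService) are hypothetical illustrations, not Andromeda's actual types; in the real layout each section below would be its own package.

```go
package main

import "fmt"

// --- domain/: pure business types and rules, no external dependencies ---

// Account is a hypothetical domain entity for illustration.
type Account struct {
	ID      string
	Balance int64 // minor currency units
}

// CanWithdraw is a pure domain rule with no I/O.
func (a Account) CanWithdraw(amount int64) bool {
	return amount > 0 && a.Balance >= amount
}

// --- ports/: interfaces the application layer depends on ---

// AccountRepo abstracts persistence; infra/ implements it with a real
// database, while tests can supply an in-memory fake.
type AccountRepo interface {
	Get(id string) (Account, error)
}

// --- app/: use cases wiring domain rules and ports together ---

type WithdrawService struct {
	Repo AccountRepo
}

func (s WithdrawService) Withdraw(id string, amount int64) error {
	acct, err := s.Repo.Get(id)
	if err != nil {
		return err
	}
	if !acct.CanWithdraw(amount) {
		return fmt.Errorf("insufficient funds in %s", id)
	}
	return nil
}

// --- infra/ (stand-in): an in-memory implementation of the port ---

type memRepo map[string]Account

func (m memRepo) Get(id string) (Account, error) {
	acct, ok := m[id]
	if !ok {
		return Account{}, fmt.Errorf("account %s not found", id)
	}
	return acct, nil
}

func main() {
	svc := WithdrawService{Repo: memRepo{"a1": {ID: "a1", Balance: 500}}}
	fmt.Println(svc.Withdraw("a1", 200)) // <nil>
	fmt.Println(svc.Withdraw("a1", 900)) // insufficient funds in a1
}
```

Because the app layer sees only the AccountRepo port, swapping Postgres for an in-memory fake is a one-line change at the wiring site in cmd/.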
Single Module vs. Multi-Module
Go supports both single-module and multi-module monorepos. In a single-module repo, a single go.mod at the root governs all dependencies. In a multi-module repo, each service or library has its own go.mod, enabling independent versioning and dependency trees.
We chose a single-module approach for Andromeda and have not regretted it. The benefits include:
- Atomic refactors. Renaming a shared type or changing a function signature is a single commit that updates every call site. No version bumps, no coordinated releases.
- Unified dependency graph. There is exactly one version of every third-party dependency. This eliminates "diamond dependency" issues where service A needs library v1.2 and service B needs v1.3.
- Simpler tooling. go build ./..., go test ./..., and go vet ./... work from the root without wrappers.
The main downside is that a change to a shared library triggers rebuilds and test runs for every dependent service. We mitigate this with intelligent CI caching and affected-service detection, which we discuss in our CI/CD article.
For teams where services have wildly divergent dependency requirements or are maintained by completely separate organizations, multi-module may be more appropriate. But for a single product team building tightly integrated backend services, single-module is the pragmatic choice.
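For reference, the single-module setup amounts to one root go.mod covering every service and library. A sketch of what ours looks like in shape (the module path and version numbers here are illustrative, not Andromeda's actual values):

```
module github.com/example/andromeda // hypothetical module path

go 1.22

require (
	github.com/lib/pq v1.10.9        // versions illustrative
	google.golang.org/grpc v1.60.0
)
```

Every package in the repository, from cmd/gateway to pkg/pagination, resolves its imports against this single file, which is what makes the atomic refactors and unified dependency graph above possible.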
Managing Shared Code
Shared code is both the greatest advantage and the greatest risk of a monorepo. Done well, it eliminates duplication and ensures consistency. Done poorly, it creates a tangled web of dependencies that makes every change risky.
We follow three rules for shared code in Andromeda:
Rule 1: Shared code must be generic. If a package in pkg/ or internal/shared/ references a specific service's domain types, it does not belong there. Shared code should operate on interfaces or primitive types.
// pkg/pagination/cursor.go
package pagination

import (
	"encoding/base64"
	"net/url"
)

// Cursor represents an opaque pagination cursor.
type Cursor string

// Encode produces a cursor from arbitrary key-value pairs.
// url.Values.Encode sorts keys, so the output is deterministic.
func Encode(pairs map[string]string) Cursor {
	v := url.Values{}
	for k, val := range pairs {
		v.Set(k, val)
	}
	return Cursor(base64.URLEncoding.EncodeToString([]byte(v.Encode())))
}

// Decode reverses Encode.
func Decode(c Cursor) (map[string]string, error) {
	raw, err := base64.URLEncoding.DecodeString(string(c))
	if err != nil {
		return nil, err
	}
	v, err := url.ParseQuery(string(raw))
	if err != nil {
		return nil, err
	}
	pairs := make(map[string]string, len(v))
	for k := range v {
		pairs[k] = v.Get(k)
	}
	return pairs, nil
}

Rule 2: Depend inward, not outward. Service code may import shared libraries, but shared libraries must never import service code. This creates a clean dependency DAG.
Rule 3: Prove the need with duplication. We do not extract shared code preemptively. A pattern must appear in at least two services before it earns a place in pkg/ or internal/shared/. This prevents premature abstraction.
Build Tags and Conditional Compilation
In a large monorepo, you occasionally need code that behaves differently depending on the build context. Go's build tags provide a clean mechanism for this without runtime branching.
A common use case in Andromeda is swapping infrastructure adapters between production and testing:
//go:build integration

package postgres

import (
	"database/sql"
	"os"
	"testing"

	_ "github.com/lib/pq" // registers the "postgres" driver
)

// TestDB returns a connection to a real PostgreSQL instance
// for integration tests. This file is only compiled when
// the "integration" build tag is present. Note the mandatory
// blank line between the //go:build line and the package clause.
func TestDB(t *testing.T) *sql.DB {
	t.Helper()
	dsn := os.Getenv("TEST_DATABASE_URL")
	if dsn == "" {
		t.Skip("TEST_DATABASE_URL not set")
	}
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		t.Fatalf("opening test db: %v", err)
	}
	t.Cleanup(func() { db.Close() })
	return db
}

We also use build tags to include or exclude profiling endpoints, debug handlers, and vendor-specific integrations. The key is to keep tag usage minimal and well-documented. Every file with a build tag should have a comment explaining when and why that tag is used.
Dependency Governance
As a monorepo grows, so does the temptation to add dependencies. Left unchecked, the go.mod file balloons, build times increase, and security surface area expands. We enforce governance through a combination of tooling and code review norms.
First, we run go mod tidy in CI and fail the build if the result differs from what is committed. This catches unused dependencies and ensures reproducibility. Second, we maintain an ALLOWLIST.txt that lists approved third-party modules. A custom linter checks go.mod against this list and flags any unapproved additions for manual review.
// scripts/check_deps.go (simplified)
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	allowed := loadAllowlist("ALLOWLIST.txt")
	deps := loadGoModRequires("go.mod")
	for _, dep := range deps {
		if !allowed[dep] {
			fmt.Fprintf(os.Stderr, "unapproved dependency: %s\n", dep)
			os.Exit(1)
		}
	}
}

// mustOpen fails the check loudly rather than silently scanning
// nothing when a file is missing.
func mustOpen(path string) *os.File {
	f, err := os.Open(path)
	if err != nil {
		fmt.Fprintf(os.Stderr, "opening %s: %v\n", path, err)
		os.Exit(1)
	}
	return f
}

func loadAllowlist(path string) map[string]bool {
	f := mustOpen(path)
	defer f.Close()
	m := make(map[string]bool)
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line != "" && !strings.HasPrefix(line, "#") {
			m[line] = true
		}
	}
	return m
}

// loadGoModRequires reads module paths from the require block.
// Single-line `require foo v1.2.3` directives are deliberately
// not handled; the script is a minimal speed bump, not a parser.
func loadGoModRequires(path string) []string {
	f := mustOpen(path)
	defer f.Close()
	var deps []string
	sc := bufio.NewScanner(f)
	inRequire := false
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "require (" {
			inRequire = true
			continue
		}
		if line == ")" {
			inRequire = false
			continue
		}
		if inRequire {
			parts := strings.Fields(line)
			if len(parts) >= 1 {
				deps = append(deps, parts[0])
			}
		}
	}
	return deps
}

This script is intentionally simple. The goal is not to build a framework but to create a speed bump that forces engineers to think before adding a dependency.
Conclusion
A Go monorepo is not inherently better or worse than a polyrepo setup. Its value depends on the team, the product, and the discipline applied to its organization. The patterns outlined here (a conventional directory layout, a single module, disciplined shared code, build tags for conditional compilation, and dependency governance) have allowed Andromeda to scale from a handful of services to a large, actively developed platform without the repository becoming a burden.
The most important takeaway is that a monorepo is a living system. The patterns you adopt on day one will evolve. What matters is establishing conventions early, automating enforcement, and revisiting decisions as the team and codebase grow. Start simple, add structure when pain emerges, and resist the urge to over-engineer the repository itself. The repository is a tool; the product is what matters.
Related Articles
Testing Strategies for Go Backend Services
A comprehensive guide to testing Go backend services, covering unit tests, integration tests, end-to-end tests, table-driven patterns, test fixtures, and strategies for testing gRPC and NATS-based systems.
How Monorepos Boost Team Productivity
An exploration of how monorepo architecture improves developer velocity, code quality, and cross-team collaboration, based on real-world experience with Andromeda.
Observability in Go: Tracing, Metrics, and Logging
A practical guide to implementing observability in Go backend services using OpenTelemetry for tracing, Prometheus for metrics, and structured logging with log/slog.