Using Protocol Buffers Across a Microservices Architecture

A business and architecture-focused guide to adopting Protocol Buffers as the standard contract language across a microservices ecosystem, covering shared types, dependency management, team workflows, and the role of a centralized schema registry.

Business · 10 min read · By Klivvr Engineering

Microservices promise independent deployment, technology diversity, and team autonomy. But independence without coordination leads to chaos: incompatible data formats, duplicated type definitions, undocumented APIs, and integration failures that consume more engineering time than the features they support. Protocol Buffers, paired with a centralized schema registry like Nebula, provide the coordination layer that makes microservices autonomy sustainable.

This article examines how Protocol Buffers fit into a microservices architecture from both a technical and organizational perspective. It covers the patterns that work, the pitfalls to avoid, and the role of the Nebula schema registry in making protobuf adoption practical across a diverse engineering organization.

The Integration Problem

A typical fintech microservices architecture at Klivvr involves dozens of services: identity, accounts, payments, lending, notifications, analytics, compliance, and more. Each service is owned by a small team, deployed independently, and communicates with other services through APIs.

Without a shared contract standard, each integration is a bespoke negotiation. Team A defines their API in Go structs, Team B defines theirs in TypeScript interfaces, and the integration between them relies on hand-written JSON parsing that is tested primarily through hope. Field names are inconsistent (account_id in one service, accountId in another, AccountID in a third). Nullable semantics differ. Date formats differ. Error structures differ.

Protocol Buffers solve this by providing a single, language-neutral contract language. The .proto file is the authoritative definition of the API. Generated code ensures that every consumer in every language sees exactly the same fields, types, and structures.

// This single definition serves Go, TypeScript, Kotlin, and Swift consumers
syntax = "proto3";
package nebula.accounts.v1;
 
service AccountService {
  rpc GetAccount(GetAccountRequest) returns (Account);
  rpc ListAccounts(ListAccountsRequest) returns (ListAccountsResponse);
}
 
message GetAccountRequest {
  string account_id = 1;
}
 
message Account {
  string account_id = 1;
  string display_name = 2;
  string email = 3;
  AccountStatus status = 4;
  google.protobuf.Timestamp created_at = 5;
}
 
enum AccountStatus {
  ACCOUNT_STATUS_UNSPECIFIED = 0;
  ACCOUNT_STATUS_ACTIVE = 1;
  ACCOUNT_STATUS_SUSPENDED = 2;
  ACCOUNT_STATUS_CLOSED = 3;
}

Every consumer of the Account service imports the generated client from the Nebula registry's published packages. There is no ambiguity about the field names, no uncertainty about the status enum values, and no risk of one language seeing a different API surface than another.

Shared Types and the Common Package

Microservices inevitably share common concepts: monetary amounts, timestamps, pagination, error details, and geographic coordinates. Without coordination, each team reinvents these types independently, leading to subtle incompatibilities.

The Nebula registry addresses this through a set of common packages that all domain-specific schemas can import:

// nebula/common/v1/money.proto
syntax = "proto3";
package nebula.common.v1;
 
// Money represents a monetary amount with currency.
// All monetary values in the Nebula platform use this type.
message Money {
  // ISO 4217 currency code (e.g., "EGP", "USD").
  string currency_code = 1;
  // The whole units of the amount.
  int64 units = 2;
  // Number of nano (10^-9) units of the amount.
  // Must be between -999,999,999 and +999,999,999.
  // For negative amounts, units and nanos must both be negative.
  int32 nanos = 3;
}

// nebula/common/v1/pagination.proto
syntax = "proto3";
package nebula.common.v1;
 
// PaginationRequest provides cursor-based pagination parameters.
message PaginationRequest {
  // Maximum number of items to return. Default is 20, maximum is 100.
  int32 page_size = 1;
  // Opaque token from a previous response's next_page_token.
  string page_token = 2;
}
 
// PaginationResponse provides cursor-based pagination metadata.
message PaginationResponse {
  // Token to retrieve the next page. Empty if no more pages.
  string next_page_token = 1;
  // Total number of items across all pages, if known.
  // Set to -1 if the total is not efficiently computable.
  int64 total_count = 2;
}
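The units/nanos split avoids floating-point rounding errors, but converting for display takes some care, especially around the rule that units and nanos share a sign. A minimal sketch of a formatter, using a hand-written struct that mirrors the generated Go type (the struct and function names here are illustrative, not part of the registry's published packages):

```go
package main

import "fmt"

// Money mirrors the fields of the generated nebula.common.v1.Money type.
// This local struct exists only to keep the sketch self-contained.
type Money struct {
	CurrencyCode string
	Units        int64
	Nanos        int32
}

// DisplayString renders a Money value with two decimal places,
// e.g. {Units: 12, Nanos: 500_000_000} -> "12.50 EGP".
func DisplayString(m Money) string {
	sign := ""
	units, nanos := m.Units, int64(m.Nanos)
	if units < 0 || nanos < 0 {
		// Per the schema contract, units and nanos are both negative
		// for negative amounts; normalize and emit a single sign.
		sign = "-"
		if units < 0 {
			units = -units
		}
		if nanos < 0 {
			nanos = -nanos
		}
	}
	// 10^9 nanos per unit, so 10^7 nanos per hundredth of a unit.
	cents := nanos / 10_000_000
	return fmt.Sprintf("%s%d.%02d %s", sign, units, cents, m.CurrencyCode)
}

func main() {
	fmt.Println(DisplayString(Money{CurrencyCode: "EGP", Units: 12, Nanos: 500_000_000}))  // 12.50 EGP
	fmt.Println(DisplayString(Money{CurrencyCode: "USD", Units: -3, Nanos: -250_000_000})) // -3.25 USD
}
```

A real implementation would also handle currencies with non-two-digit minor units; the point of the shared type is that every service applies the same rules.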

Domain schemas import these common types:

import "nebula/common/v1/money.proto";
import "nebula/common/v1/pagination.proto";
 
message ListPaymentsRequest {
  string account_id = 1;
  nebula.common.v1.PaginationRequest pagination = 2;
}
 
message ListPaymentsResponse {
  repeated Payment payments = 1;
  nebula.common.v1.PaginationResponse pagination = 2;
}
 
message Payment {
  string payment_id = 1;
  nebula.common.v1.Money amount = 2;
  // ...
}
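Because every service paginates the same way, consumers can write one generic draining loop and reuse it everywhere. A sketch of that loop in Go, with a locally defined interface and in-memory fake standing in for the generated client (all type and method names here are illustrative assumptions):

```go
package main

import "fmt"

// Request/response shapes mirroring the generated pagination types (illustrative only).
type ListPaymentsRequest struct {
	AccountID string
	PageSize  int32
	PageToken string
}

type ListPaymentsResponse struct {
	PaymentIDs    []string
	NextPageToken string
}

// PaymentsClient stands in for the generated service client.
type PaymentsClient interface {
	ListPayments(req ListPaymentsRequest) ListPaymentsResponse
}

// ListAllPayments drains every page by following next_page_token until it is empty.
func ListAllPayments(c PaymentsClient, accountID string) []string {
	var all []string
	token := ""
	for {
		resp := c.ListPayments(ListPaymentsRequest{AccountID: accountID, PageSize: 2, PageToken: token})
		all = append(all, resp.PaymentIDs...)
		if resp.NextPageToken == "" {
			return all
		}
		token = resp.NextPageToken
	}
}

// fakeClient serves pages out of a slice, using a numeric offset as the opaque token.
type fakeClient struct{ ids []string }

func (f fakeClient) ListPayments(req ListPaymentsRequest) ListPaymentsResponse {
	start := 0
	fmt.Sscanf(req.PageToken, "%d", &start) // empty token leaves start at 0
	end := start + int(req.PageSize)
	if end >= len(f.ids) {
		return ListPaymentsResponse{PaymentIDs: f.ids[start:]}
	}
	return ListPaymentsResponse{PaymentIDs: f.ids[start:end], NextPageToken: fmt.Sprintf("%d", end)}
}

func main() {
	c := fakeClient{ids: []string{"p1", "p2", "p3", "p4", "p5"}}
	fmt.Println(ListAllPayments(c, "acct-1")) // [p1 p2 p3 p4 p5]
}
```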

The common package is owned by the platform team and evolves carefully, since changes affect every service. New common types are added through an RFC process that solicits input from all domain teams.

Repository Structure: Monorepo vs. Multi-Repo

The Nebula schema registry uses a monorepo for all .proto files. This is a deliberate architectural decision with significant trade-offs.

In a monorepo, all schemas live in a single repository:

nebula/
  proto/
    nebula/
      common/
        v1/
          money.proto
          pagination.proto
          error_details.proto
      accounts/
        v1/
          accounts.proto
      payments/
        v1/
          payments.proto
      lending/
        v1/
          lending.proto
      notifications/
        v1/
          notifications.proto
  buf.yaml
  buf.gen.yaml
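The root configuration files in that layout might look like the following. This is a sketch under assumptions: the module name buf.build/klivvr/nebula and the exact plugin set are illustrative, not confirmed details of the Nebula registry.

```yaml
# buf.yaml — one module covering every domain package
version: v2
modules:
  - path: proto
    name: buf.build/klivvr/nebula
lint:
  use:
    - STANDARD
breaking:
  use:
    - FILE
```

```yaml
# buf.gen.yaml — a single generation pipeline for all consumer languages
version: v2
plugins:
  - remote: buf.build/protocolbuffers/go
    out: gen/go
    opt: paths=source_relative
  - remote: buf.build/bufbuild/es
    out: gen/ts
```

Because there is exactly one module and one generation config, every schema in the monorepo is linted, checked, and generated under identical rules.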

The advantages of a monorepo are substantial. Cross-schema refactoring is atomic: a change to a common type and all of its consumers can be reviewed and merged in a single pull request. Buf's breaking change detection runs across all schemas simultaneously, catching incompatibilities that would be invisible in a multi-repo setup. Code generation for all languages happens in a single pipeline, ensuring consistency.

The disadvantage is that the monorepo becomes a shared resource that requires governance. A poorly reviewed change to a common type can break every domain schema. Access control is coarser (GitHub's CODEOWNERS file helps, but it is not as granular as separate repositories). And the CI pipeline must scale to handle the full set of schemas on every change.

In a multi-repo approach, each domain team maintains its own proto repository and publishes to the Buf Schema Registry. Cross-schema dependencies are resolved through BSR module references:

# buf.yaml for the payments team's repo
version: v2
modules:
  - path: proto
    name: buf.build/klivvr/nebula-payments
deps:
  - buf.build/klivvr/nebula-common
  - buf.build/klivvr/nebula-accounts

This provides stronger ownership boundaries but makes cross-schema changes more difficult, since they require coordinated PRs across multiple repositories.

The Nebula team chose the monorepo for a straightforward reason: the coordination benefits outweigh the governance costs at their current scale. Teams with larger schema registries or stronger organizational boundaries may find the multi-repo approach more appropriate.

Team Workflows and Ownership

In a monorepo, clear ownership boundaries are essential. The Nebula registry uses GitHub's CODEOWNERS to enforce review requirements:

# CODEOWNERS
# Note: the last matching pattern wins, so each path is listed once
# with all of its owners.
proto/nebula/accounts/      @klivvr/identity-team
proto/nebula/payments/      @klivvr/payments-team
proto/nebula/lending/       @klivvr/lending-team
proto/nebula/notifications/ @klivvr/notifications-team

# Common types and registry configuration require schema architect review
proto/nebula/common/        @klivvr/platform-team @klivvr/schema-architects
buf.yaml                    @klivvr/schema-architects
buf.gen.yaml                @klivvr/schema-architects

Each domain team owns their schemas and can merge changes that pass lint and breaking change checks without waiting for platform team approval. Changes to common types or configuration files require an additional review from the schema architects, a small group of senior engineers who maintain cross-cutting consistency.

The workflow for a typical schema change is:

  1. Developer creates a branch and modifies the relevant .proto files.
  2. They run buf lint and buf breaking locally to verify compliance.
  3. They open a pull request. CI runs lint, breaking change detection, and code generation verification.
  4. The domain team reviews and approves the change.
  5. On merge, CI generates and publishes updated packages for all languages.
  6. Consuming teams update their dependency version to pick up the change.

This workflow keeps schema changes lightweight while maintaining quality through automated checks and human review.
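The CI checks in step 3 can be wired up with Buf's published GitHub Actions. A minimal sketch, assuming a GitHub Actions pipeline (the workflow name, trigger, and comparison branch are assumptions):

```yaml
# .github/workflows/proto-ci.yml
name: proto-ci
on: [pull_request]
jobs:
  buf:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: bufbuild/buf-setup-action@v1
      - uses: bufbuild/buf-lint-action@v1
      - uses: bufbuild/buf-breaking-action@v1
        with:
          # Compare against the schemas on the main branch
          against: https://github.com/${{ github.repository }}.git#branch=main
```

Running breaking-change detection against the merge base rather than a human checklist is what keeps step 4's review focused on design rather than wire compatibility.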

Event-Driven Communication

Not all inter-service communication is synchronous RPC. Many microservices architectures rely heavily on asynchronous events published to message brokers like NATS, Kafka, or RabbitMQ. Protocol Buffers are equally effective for defining event schemas.

// nebula/events/payments/v1/events.proto
syntax = "proto3";
package nebula.events.payments.v1;
 
import "google/protobuf/timestamp.proto";
import "nebula/common/v1/money.proto";
 
// PaymentCreatedEvent is published when a new payment is initiated.
message PaymentCreatedEvent {
  string event_id = 1;
  string payment_id = 2;
  string sender_account_id = 3;
  string receiver_account_id = 4;
  nebula.common.v1.Money amount = 5;
  google.protobuf.Timestamp created_at = 6;
}
 
// PaymentCompletedEvent is published when a payment is settled.
message PaymentCompletedEvent {
  string event_id = 1;
  string payment_id = 2;
  google.protobuf.Timestamp completed_at = 3;
  nebula.common.v1.Money settled_amount = 4;
  nebula.common.v1.Money fee = 5;
}
 
// PaymentFailedEvent is published when a payment fails.
message PaymentFailedEvent {
  string event_id = 1;
  string payment_id = 2;
  google.protobuf.Timestamp failed_at = 3;
  string failure_reason = 4;
  string failure_code = 5;
}

These event schemas benefit from the same registry infrastructure as RPC schemas: versioned definitions, code generation, breaking change detection, and cross-language support. A notification service consuming PaymentCompletedEvent in TypeScript and an analytics service consuming the same event in Go both work against the same schema, eliminating format discrepancies.
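Subscribers typically route on the broker subject and then decode the payload with the generated type for that event. A broker-agnostic sketch of that dispatch pattern, with a local struct and JSON standing in for the generated Go type and protobuf wire format so the example stays dependency-free (the subject name payments.v1.completed is also an assumption):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// PaymentCompletedEvent mirrors a few fields of the generated type.
// Real code would use the generated struct and proto.Unmarshal.
type PaymentCompletedEvent struct {
	EventID   string `json:"event_id"`
	PaymentID string `json:"payment_id"`
}

// handlers route incoming messages by subject, one decoder per event type.
var handlers = map[string]func(payload []byte) error{
	"payments.v1.completed": func(payload []byte) error {
		var ev PaymentCompletedEvent
		if err := json.Unmarshal(payload, &ev); err != nil {
			return err
		}
		fmt.Println("payment completed:", ev.PaymentID)
		return nil
	},
}

// Dispatch hands a raw broker message to the handler registered for its subject.
func Dispatch(subject string, payload []byte) error {
	h, ok := handlers[subject]
	if !ok {
		return fmt.Errorf("no handler for subject %q", subject)
	}
	return h(payload)
}

func main() {
	_ = Dispatch("payments.v1.completed", []byte(`{"event_id":"e1","payment_id":"p1"}`))
}
```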

The Nebula registry organizes event schemas in a parallel directory structure (nebula/events/) to distinguish them from RPC schemas (nebula/services/ or domain-level packages). This separation makes it easy to identify which schemas define synchronous contracts and which define asynchronous events.

Observability and Debugging

In a microservices architecture, tracing a request through multiple services is challenging. Protobuf-defined contracts help by providing a consistent vocabulary for logging and tracing.

When every service uses the same Money type, log aggregation queries can search for monetary amounts without accounting for format variations. When every service uses the same AccountStatus enum, dashboards can track status distributions without mapping between different string representations.

The Nebula team includes standard metadata fields in all RPC requests to support distributed tracing:

message RequestMetadata {
  string trace_id = 1;
  string span_id = 2;
  string caller_service = 3;
  string caller_version = 4;
}

Services extract these fields from gRPC metadata headers and propagate them through the call chain, enabling end-to-end trace visualization in tools like Jaeger and Grafana Tempo.
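Propagation itself can be as simple as copying a fixed allow-list of keys from the incoming request's metadata into the outgoing call's metadata. A simplified sketch using plain maps in place of gRPC's metadata.MD (a real implementation would use google.golang.org/grpc/metadata and an interceptor; the key names below are assumptions):

```go
package main

import "fmt"

// Metadata stands in for gRPC's metadata type; a plain map keeps
// this sketch self-contained.
type Metadata map[string]string

// traceKeys are the RequestMetadata fields forwarded on every outbound call.
var traceKeys = []string{"trace-id", "span-id", "caller-service", "caller-version"}

// Propagate copies only the tracing keys from an incoming request's
// metadata into the metadata for an outgoing call.
func Propagate(incoming Metadata) Metadata {
	outgoing := Metadata{}
	for _, k := range traceKeys {
		if v, ok := incoming[k]; ok {
			outgoing[k] = v
		}
	}
	return outgoing
}

func main() {
	in := Metadata{
		"trace-id":       "abc123",
		"span-id":        "def456",
		"caller-service": "payments",
		"authorization":  "Bearer secret", // credentials must NOT flow downstream
	}
	out := Propagate(in)
	fmt.Println(out["trace-id"], out["caller-service"], out["authorization"] == "")
}
```

The allow-list approach is deliberate: it forwards tracing context while ensuring caller credentials and other sensitive headers never leak to downstream services.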

Practical Adoption Advice

For organizations considering protobuf adoption across their microservices, the Nebula team offers several recommendations based on experience.

Start with one high-traffic integration. Pick two services that communicate frequently and define their contract in protobuf. This demonstrates the value without requiring organization-wide buy-in.

Invest in the schema registry early. A shared repository with automated checks is the foundation that makes everything else work. Without it, proto files proliferate across service repositories, diverge, and lose their coordination value.

Make code generation effortless. If developers have to fight the toolchain, they will route around it. The generation pipeline should be a single command that just works.

Do not mandate migration of existing APIs. New services should use protobuf from day one. Existing services should migrate opportunistically, when they are already being refactored or when the integration pain justifies the effort.

Measure the impact. Track integration incident rates, time-to-first-successful-integration for new service pairs, and schema-related support requests. These metrics build the case for continued investment.

Conclusion

Protocol Buffers provide the contract language that microservices need to maintain their independence without losing their interoperability. Through shared type definitions, a centralized schema registry, automated quality checks, and language-specific code generation, the Nebula platform enables dozens of services to communicate reliably across language and team boundaries. The patterns described here are not theoretical: they are the daily workflow of the Klivvr engineering organization, refined through years of operating a production fintech system. Adopting protobuf across a microservices architecture is an investment, but it is one that pays dividends in every integration, every deployment, and every on-call rotation.
