Events first: cloud-native patterns beyond request/response


Brief talk at Cloud Native @Scale, May 11, 2021, 5p PDT

Code for the demo for this talk lives on GitHub here.

Abstract

Going cloud-native is about how we design our applications, not just how we deploy them — and one key choice in distributed systems design is how our applications interact with each other. Request/response-based protocols are ubiquitous in software today, but the industry is still early in adopting event-driven approaches, which offer compelling advantages for highly distributed systems. This talk covers lessons learned from using event-driven patterns to scale real-world systems serving tens of millions of users and practical advice on how to use these patterns to make our applications more adaptable, reliable, and dynamically scalable.

Video

Slides and transcript

slide 1 slide 2
Hi everyone, I'm Melinda, and I'm here to talk about our favorite thing, cloud-native distributed systems. We'll do slides, then a live demo of a request/response architecture and moving it to an event-driven one.

Oh yeah: I work for eggy, where we love cloud-native distributed systems, and spent time at VSCO and some other places before that. (Hi Shruti!)

slide 3
If you’re writing software that touches the internet today, there’s no getting around it: you’re building a distributed system.

slide 4
Some parts of your software are running on mobile devices, or in a web browser, maybe; some parts are running on the server — probably many parts are running on many servers.

slide 5
And all of these parts are calling each other over a network, which may or may not be working; and any of those machines might be running out of battery or have an exploded hard drive at any point.

But your software is expected to stay up and running perfectly, even though any of its parts might be gone.

That’s difficult. Cloud-native software is inherently distributed, and distributed systems are difficult to design, build, understand, and operate.

slide 6
But that’s why our jobs are fun.

When we build our systems, we’re designing for three main concerns:

First, reliability: how to make sure the system continues to work correctly and performantly, in the face of hardware or software faults, or human error.

Scalability: how to make sure we have reasonable ways to deal with our systems growing.

And maintainability: how to make sure that all the different people who are going to work on our system over time can understand and work on it productively.

slide 7

But one overarching concern is how we keep complexity under control.

When we start building something, it can be simple and delightful and clean; but as our projects get bigger, they often become complex and difficult to understand.

And this complexity slows down everyone who needs to work on the system.

We end up with tangled dependencies, hacks or special-casing to work around problems, an explosion in how much state we’re keeping — you’ve seen this.


slide 8
One way we try to manage complexity, as an industry, is by breaking our software down into independently-deployable services.

This has many advantages: services can be worked on in parallel, each service can focus on a single domain, we have flexibility in how we solve larger problems, and so on.

But this is only true if we decompose our software in the right way and use patterns that fit our context.

slide 9
If we aren’t careful, we could end up with software that’s still tightly coupled but now has an unreliable network between each piece of business logic, and is hard to deploy — and impossible to debug.

slide 10
Okay, so, why is this?

One key property of our software architecture is how our microservices interact with each other. (Because we’ve broken it down, our software only works when multiple services work together.)

The default way that we do interaction nowadays is request/response, using synchronous network calls.

This model tries to make requests to a remote network service look like calling functions in a programming language, within the same process.

This seems convenient at first, but seeing remote calls as function calls is fundamentally flawed.


slide 11

A network request is really different from a local function call.

A local function call is predictable and either succeeds or fails, depending on parameters that are totally under your control.

A network request is unpredictable: it could return successfully, albeit more slowly than a local function call. It could return successfully, but way more slowly than usual — the network could be congested or the remote service could be overloaded, so it might take 20 seconds or 5 minutes to do the exact same thing. The request or response could get lost due to a network problem, or the remote service could be unavailable, or you could hit a timeout without receiving a result — and you would have no idea what happened.

If you don’t receive a response from your remote service, you have no way of knowing whether the request got through or not. Maybe the requests are actually getting through, and only the responses are getting lost — so it’s probably not safe for you to retry it, unless you’ve built in a mechanism for deduplication.

The cloud is a rough environment to be in, really different from a single computer.


slide 12
And because of this, our distributed software that interacts using request/response can be incredibly hard to reason about.

Imagine you have on the order of 100 or 1,000 services communicating with each other in a point-to-point way, and think about how many different point-to-point connections that is: with n services, up to n(n-1)/2 of them.

And this is what a large distributed service architecture can look like. This is Netflix’s architecture circa 2015, from a re:Invent talk showing a network of services and their synchronous, run-time dependencies on other services. (Some people refer to this representation as a “death star” diagram.)

You can see that as the number of services grows over time, the number of synchronous interactions grows with them somewhere between linearly and quadratically.


slide 13
And if you zoom into one service, this is the call graph for a single Netflix service from the same talk (the “list of list of movies,” or lolomo, service). These are all the services it has to talk to in order to return a response.

You can see that this one service depends on many other ones, and any of these n dependencies being down can cause availability issues for our one service.

And apart from availability: latency grows with the depth of the dependency graph; we probably need to coordinate the times at which each of these services reads from its datastore — and we are likely going to find it extremely difficult to reason about the state of the system at any point in time.

There’s a lot going on.

slide 14
One solution to this availability problem, discussed in depth in this paper from Google, is just to make sure your individual services have significantly higher SLAs than what you need your system to have.


slide 15

You can also do things like:

  • make automatic request retries when you don’t get a response back, with backpressure;
  • add caches in between clients and servers;
  • run multiple redundant copies of services that can take requests when one copy is broken, with readiness and liveness checks to route to the right ones;
  • add dynamic service discovery, load balancing, circuit breaking, and other traffic control to make this all possible;
  • and use service meshes like Linkerd and Istio to make this more transparent to the user.

These are standard cloud-native patterns, but these are standard patterns because trusting request/response to work, in a hostile environment like the cloud, is inherently risky.

slide 16
And so much of the mind share in the microservices landscape — the libraries and tooling, the documentation, blog posts and books and diagrams — assumes this model, of services calling each other synchronously.

slide 17
And I think it’s a little unfortunate that we so often conflate the core values and goals of microservices with a specific model (request/response) for their implementation.

(This is also a hostile environment.)

slide 18
But, what if, instead of patching these solutions for resilience on top of request/response, we could change the problem to a simpler one, by not binding our services together with synchronous ties?

The request/response model matches our sequential, imperative programming model of making in-place state changes on a single computer; and we’ve seen that it isn’t perfectly suited to a distributed environment.

But another programming style might match more closely: functional programming.

Functional programming describes behavior in terms of the immutable input and output values of pure functions, not in terms of mutating objects in place.


slide 19

And we’ve seen in the last few years how thinking about things in a more functional way helps in other parts of the stack.

On the web, we’ve had reactive frameworks like React and Redux and Vue (and Elm before that) that almost everyone has shifted to after seeing how they can simplify things.

On mobile, the new generation of UI frameworks on both iOS and Android is functional and reactive, and we’ve had libraries for reactivity in business logic like Rx for a while, Combine more recently.

On the infrastructure side, we’ve seen how declarative APIs like Kubernetes and infrastructure-as-code tools like Terraform have made things so much easier to reason about, and GitOps tools promise to take that further — but we’re still early in adopting this style in backend application development.

slide 20

What would it look like if we extended the functional programming analogy to a microservices architecture?

This is basically event-driven architecture.

Like functional programming, it lets us know the state of the system, consistently, as long as we know its input events.

slide 21
There are lots of different definitions of exactly what counts as an event-driven system, but the common thread is that the thing that triggers code execution doesn’t wait for any response: it just sends the event and then forgets about it.

So in this diagram, on the left, there’s a request and response: when a request is received, the receiving server has to return some type of response to the client. Both the client and the server have to be alive, well, and focusing on each other in order for this to work.

On the other hand, on the right, there’s an event-driven service: the service that consumes the event does something with it, yes, but doesn’t have to send any response to the thing that originally created the event that triggered it. Only this service has to be alive for this code to run; it doesn’t depend on other services being there. And it’s not coupled to the thing that produced the event; it doesn’t have to know about the producer at all.

slide 22
And the way this usually looks in practice is that we use a message broker or distributed log to store these events. These days this is usually Kafka or Kinesis.

Events are split into different streams called topics, which get partitioned across a horizontally-scalable cluster. Each of our services can consume from one or more topics, produce to one or more topics, or do both or neither.
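To give this a concrete shape (not part of the talk’s demo): a minimal sketch in Go of the basic produce and consume operations, using the segmentio/kafka-go client. The broker address, topic name, and payload are made up for illustration.

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

func main() {
	ctx := context.Background()

	// Produce: fire-and-forget from the producer's point of view. The producer
	// doesn't know or care which services will consume this event.
	w := &kafka.Writer{
		Addr:  kafka.TCP("localhost:9092"), // hypothetical broker address
		Topic: "follows",                   // hypothetical topic name
	}
	if err := w.WriteMessages(ctx, kafka.Message{
		Key:   []byte("user-123"), // messages with the same key land on the same partition
		Value: []byte(`{"follower":"user-123","followee":"user-456"}`),
	}); err != nil {
		log.Fatal(err)
	}
	w.Close()

	// Consume: any number of independent consumer groups can read the same
	// topic; each group keeps its own cursor into the log.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		Topic:   "follows",
		GroupID: "feed-fanout",
	})
	defer r.Close()
	for {
		m, err := r.ReadMessage(ctx)
		if err != nil {
			break
		}
		log.Printf("consumed: %s", m.Value)
	}
}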

And we’ll see this in a working example in a bit.

slide 23
But why is this better? It’s a really different way of having programs interact, and if we’re going to make this change, we’d better have some good reasons.

slide 24
The biggest reason, I think, is the decoupling and easy composability this gives us.

slide 25
By getting rid of these point-to-point synchronous flows, our system can become much simpler.

Instead of the work of adding things scaling quadratically, because every new thing has to be hooked up to every existing thing…

slide 26
…it becomes a simpler task to plug new services in, or change services that are there.


slide 27

We can break these long chains of blocking commands, where one piece being down at any given time means all of these are unavailable.



slide 28


And zooming out to a whole organization, these design principles let us build composable systems at a large scale.

In a large org, different teams can independently design and build services that consume from topics and produce to new topics. Since a topic can have any number of independent consumers, no coordination is required to set up a new consumer. And consumers can be deployed at will, or built with different technologies, be it different languages and libraries or different datastores.

Each team focuses on making their particular part of the system do its job well.



slide 29

As a real-life example, this is how this worked at VSCO. VSCO is a photo- and video-sharing consumer platform where users have their own profile, and can follow other users and get their content in their feed (you know — a social network).

What happens if a user follows another user?

The first thing that happens is they hit a synchronous follow service, which then tries to get that information into the distributed log (Kafka) as soon as possible.

And once it’s there, it can be used for many things: it gets consumed by the feed fanout worker so the follower can get the followee’s posts injected into their feed; it gets consumed by the push notification pipeline so the followee can see they’ve gotten a new follower, and so on.



slide 30

And the key is that you can keep on plugging in more consumers. You could add monitoring, so you can see if the rate of follows goes way down (say, because business metrics were affected by a change in our app design or something like that) — or if it goes way up, in case something unusual is happening.

You could add an abuse prevention mechanism, to see if something weird is going on.

You could add analytics in case you need to make a report to Wall Street.

And so on.

All these systems can be developed independently, and connected and composed in a way that is and feels robust.
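As a sketch of what this looks like mechanically (Go with the segmentio/kafka-go client again; the broker, topic, and handler names are made up): each of these systems is just one more consumer group tailing the same topic, deployable by itself.

package main

import (
	"context"
	"log"

	"github.com/segmentio/kafka-go"
)

// runConsumer tails the follows topic under its own consumer group. Each
// group keeps an independent cursor into the log, so adding a new one
// requires no changes to the follow service or to any other consumer.
func runConsumer(groupID string, handle func(kafka.Message)) {
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"}, // hypothetical broker
		Topic:   "follows",                  // hypothetical topic
		GroupID: groupID,
	})
	defer r.Close()
	for {
		m, err := r.ReadMessage(context.Background())
		if err != nil {
			return
		}
		handle(m)
	}
}

func main() {
	logAs := func(name string) func(kafka.Message) {
		return func(m kafka.Message) { log.Printf("[%s] got %s", name, m.Value) }
	}
	// Each of these could be owned by a different team, written against a
	// different datastore, and deployed on its own schedule.
	go runConsumer("feed-fanout", logAs("feed-fanout"))
	go runConsumer("push-notifications", logAs("push-notifications"))
	go runConsumer("abuse-prevention", logAs("abuse-prevention"))
	runConsumer("analytics", logAs("analytics"))
}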

slide 31
Another side benefit is that you get asynchronous flows for free.

It used to be that we’d need to do a bunch of orchestration to set up batch jobs with something like Airflow, but if we handle the real-time case with events, then we can do offline jobs by default, just like any other consumer.

slide 32
And this functional composition makes things easier to trace too.

There’s a central, immutable, saved journal of every interaction and its inputs and outputs, so we can go back and see exactly where something failed.

To get this with request/response, we’d have to invest a ton in a full distributed tracing system — and even then we might not see where latencies are hidden.

slide 33
Another reason event-driven systems are helpful is that they can provide scale by default, and make some of the patterns we use for resilience in request/response service architectures unnecessary.


slide 34

The distributed log acts as a buffer if the recipient is unavailable or overloaded, and it automatically redelivers messages to a process that has crashed, preventing messages from being lost; so this makes retries with backpressure and health checks less necessary.

The log also spares the sender from needing to know the IP address and port number of the recipient, which is particularly useful in the cloud, where servers go up and down all the time; so we don’t depend as critically on complex service discovery and load balancing.


slide 35
Second: there’s what’s called “polyglot persistence” nowadays, where we use lots of different kinds of databases to serve different use cases.

There’s no single “one-size-fits-all” database that’s able to fit all the different access patterns efficiently.

slide 36

For example, you might need users to perform a keyword search on a dataset and so you need a full-text search index service like Elasticsearch.

You might need a specialized database like a scientific or graph database, to do specialized data structure traversals or filters.

You might have a data warehouse for business analytics, where you need a really different storage layout, like a column-oriented storage engine.

When new data gets written, how do you make sure it ends up in all the right places?


slide 37
You could write to all the databases from a synchronous microservice — dual writes, or triple or quadruple writes — but we’ve seen how fragile that is.

It’s also theoretically impossible to make dual writes consistent without distributed transactions, which you can read about in detail in this paper.

So instead we can use event-driven systems to do what’s called change data capture. This is an old idea where we let an application subscribe to a stream of everything that is written to a database — and then we can use that feed to do the things listed on the last slide: update search indexes, invalidate caches, create snapshots, and so on.

There are open-source libraries for this now — there’s Debezium, which uses Kafka Connect — and databases are increasingly starting to support change streams as a first-class interface.

At VSCO, because we started doing this back in 2016 before there were common tools for this, we had our own version.
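For a feel of what a change-data-capture consumer sees, here’s a sketch in Go that decodes the shape of a Debezium-style change event (before/after row images plus an operation code, per Debezium’s documented envelope); the handler targets are hypothetical.

package cdc

import (
	"encoding/json"
	"log"
)

// changeEvent is a subset of the Debezium change-event envelope: the row
// before and after the change, plus an op code ("c" create, "u" update,
// "d" delete, "r" snapshot read).
type changeEvent struct {
	Payload struct {
		Before json.RawMessage `json:"before"`
		After  json.RawMessage `json:"after"`
		Op     string          `json:"op"`
	} `json:"payload"`
}

// handleChange fans one database change out to derived stores; every
// consumer of the stream can maintain its own view of the same data.
func handleChange(value []byte) error {
	var ev changeEvent
	if err := json.Unmarshal(value, &ev); err != nil {
		return err
	}
	switch ev.Payload.Op {
	case "c", "u", "r":
		log.Printf("upsert into search index, refresh cache: %s", ev.Payload.After)
	case "d":
		log.Printf("remove from search index and cache: %s", ev.Payload.Before)
	}
	return nil
}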


slide 38
Going back to the follows use case, the way that the event gets into the distributed log in the first place is that the follows service writes to an online transaction database, say something like MySQL or MongoDB.

Then, our change data capture service writes every change to the distributed log, where any number of consumers can do whatever they want with it.

This ends up being a nice way to migrate off a monolith — you can let the monolith keep doing the initial database write, suck those writes into your event store with change data capture, and pull all the reads and all the extra stuff the monolith does for every write out into composable workers reading off the event store.


slide 39
As a side benefit to this, this helps us fix one of the most common pitfalls in microservice architecture, where we have a bunch of independently deployable services but they all share a single database. This broad, shared contract makes it hard to figure out what effect our changes might have.

We would rather have our data not be so coupled, but it’s hard to unbundle it.

Change data capture can make it much easier.

slide 40
We also can save much richer data with event-driven systems, if we want to.


slide 41

Writing data as a log can produce better-quality data by default than if we update a database in place.

For example, if someone adds an item to their shopping cart and then removes it again (say we’re buying furniture), those add and delete actions carry information value.

If we delete that information from the database when a customer removes an item from the cart, we’ve just thrown away information that might have been valuable for analytics and recommendation systems.

slide 42
And this approach is usually given the name event sourcing, which comes from the domain-driven design community.

Like change data capture, event sourcing involves saving all changes to the application state as a log of change events.

But instead of letting one part of the application use the database in a mutable way, updating and deleting records as it likes, in event sourcing our application logic is explicitly built on the basis of immutable events.

And the event store is append-only: updates and deletes are usually discouraged or prohibited. To get a copy of the current state, you aggregate over all the diffs over time, the way you’d replay commits to reconstruct a working tree.

In a fully event-sourced system, the current state is entirely derived from the event log.

Both this and the mutable version are valid approaches, but they come with slightly different tradeoffs in practice.

Regardless of which one we use, the important thing is to save facts, as they are observed, in an event log.
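A minimal sketch of that derivation in Go, using the shopping-cart example from a couple of slides back (the event type and field names are invented for illustration):

package main

import "fmt"

// CartEvent is an immutable fact, as observed; we never update or delete
// these. (The type and event names here are made up.)
type CartEvent struct {
	Kind string // "added" or "removed"
	Item string
}

// currentCart derives the current state by folding over the full event log,
// the way a functional program folds over a list of inputs.
func currentCart(events []CartEvent) map[string]int {
	cart := map[string]int{}
	for _, e := range events {
		switch e.Kind {
		case "added":
			cart[e.Item]++
		case "removed":
			cart[e.Item]--
			if cart[e.Item] <= 0 {
				delete(cart, e.Item)
			}
		}
	}
	return cart
}

func main() {
	history := []CartEvent{
		{Kind: "added", Item: "sofa"},
		{Kind: "added", Item: "lamp"},
		{Kind: "removed", Item: "sofa"}, // the add/remove pair survives as data
	}
	fmt.Println(currentCart(history)) // map[lamp:1]
}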

slide 43
And there’s a bonus reason which falls out of all the other reasons, which is that this is really good for machine learning.

There’s this idea of “software 2.0,” which is that, increasingly, we’ll want to plug machine-learnable programs into different parts of our software where we can get better results than by hand-coding a program.

Say, we’ll tune a database index with a model here, or decide whether this request is from a legitimate actor there.

And inserting ML like this just won’t be tractable if we do it in a point-to-point, request/response way. Our software will become totally incomprehensible.

But if we’ve already switched to using an event-driven system where the pluggable data integration is already there for us, adding ML will become much less painful.

slide 44

That said, there are some things to watch out for with this.

slide 45
The biggest one is concurrency management: if we have distributed data management, how do we make sure we read the implications of our writes?

We don’t have time to go into this, because it’s a deep theoretical issue and request/response doesn’t really solve it either — the Martin Kleppmann paper mentioned earlier goes into it in great depth.

slide 46
You do have to be thoughtful about your schemas — how you choose to structure your event data.

Since the events are now your interface, you have to handle backward and forward compatibility with them, like you would with an API.

Empirically I’ve found this less work than versioning request/response APIs, but it’s something to think about.

There are mature data serialization systems like protobuf and Avro for this that are quite good — I’d just recommend using one of them, and not trying to hand-roll something on top of JSON here.

slide 47

As in request/response systems, privacy and data deletion are key. With GDPR, CCPA, and other privacy regulations (and also just doing right by our users), when we delete a single piece of data, we want to make sure to delete all the data derived from it — which might now live in many datastores.

Event-driven processing can make it easier to perform this deletion, but might also make it more likely that you have more data to delete.

slide 48
The biggest gotcha, though it’s becoming less and less true, is that event-driven interactions are not the norm yet.

And so there are fewer libraries and tools and blog posts and example repos about this than about request/response architectures.

But running Kubernetes or using infrastructure as code wasn’t the default a few years ago either, and we’ve seen how the industry moving to them has made our lives easier.

slide 49

On GitHub we have a full-stack (infra/backend/web) working version of a breakfast delivery service with two different architectures: first we run it on request/response microservices, then on event-driven ones.

It’ll be fun to poke around, we promise.


Protobuf two ways: Code generation and reflection with Go


(Example code for this blog post lives at eggybytes/protobuf-two-ways.)

We love protocol buffers. In our work, we depend on their out-of-box functionality:

  • to define core types and use them consistently across our server/web/mobile stack,
  • to serialize our data efficiently in transit and at rest,
  • to generate robust distributed-systems primitives for free with gRPC,
  • and for much else.

Protobuf has reasonably-good implementations for most of the languages we use (Go on the server, TypeScript and JS on the web, Swift/Python/Kotlin/Rust/C++ for iOS/ML/Android/low-level stuff/masochism respectively), and it also exposes hooks where we can customize generated and runtime code to better suit our use case.

A new Go API for protobuf was recently released that exposes much richer protobuf functionality than the previous API, with a lovely developer experience. We’d recommend it wholeheartedly to anyone writing Go.

In case it’s helpful, we’ve written up two examples of how to use the new API to extend your protobufs and make them more powerful.

A companion repo with compilable/runnable versions of the examples below is on GitHub here, and shows how the pieces fit together. (It also demonstrates building your protos with Bazel, which has made our protobuf experience much smoother.)

1. Extend compile-time functionality: code generation with protogen

One powerful thing about protobuf is that it’s “just” statically-generated code: your service- and message-definition files get compiled into the language or other output of your choice with the protoc protobuf compiler. This means that protobuf functionality is fully-inspectable and fast at runtime.

The previous Go API for protobufs exposed an internal package for writing plugins for the protobuf compiler. This was used internally to implement the Go protobuf and gRPC compilers, but also allowed programmers to extend or replace the code generated by default if they were willing to use the internal package. The new Go protobuf API makes this functionality publicly-available and pleasant to use.

There are powerful libraries to aid in writing protobuf compiler plugins across languages, like protoc-gen-star by the lovely @rodaine (who is a treasure, and who taught me about protobuf code generation and many other things in the first place). If you’re generating Go code, it’s also easy to write custom generators with the protogen API alone.

For example, we might want the protobuf compiler to generate mock client classes for our services to help with testing.

So for example, we might have as a service definition:

In example.proto (defined by us):
service EggDeliveryService {
  option (annotations.client_mock) = true;

  rpc OrderEgg (OrderEggRequest) returns (OrderEggResponse);
}

By default, this will generate a client interface:

In example_grpc.pb.go (generated by protoc-gen-grpc):
// EggDeliveryServiceClient is the client API for EggDeliveryService service.
//
// For semantics around ctx use and closing/ending streaming RPCs, please refer to https://pkg.go.dev/google.golang.org/grpc/?tab=doc#ClientConn.NewStream.
type EggDeliveryServiceClient interface {
	OrderEgg(ctx context.Context, in *OrderEggRequest, opts ...grpc.CallOption) (*OrderEggResponse, error)
}

For ease of testing, we want to generate a client mock that conforms to the same interface, so we can mock out and test features that use the service.

What we want to appear in example.pb.custom.go (generated by our plugin):
package example

import (
	context "context"
	mock "github.com/stretchr/testify/mock"
	grpc "google.golang.org/grpc"
)

// MockEggDeliveryServiceClient is a mock EggDeliveryServiceClient which
// satisfies the EggDeliveryServiceClient interface.
type MockEggDeliveryServiceClient struct {
	mock.Mock
}

func NewMockEggDeliveryServiceClient() *MockEggDeliveryServiceClient {
	return &MockEggDeliveryServiceClient{}
}

func (c *MockEggDeliveryServiceClient) OrderEgg(ctx context.Context, in *OrderEggRequest, opts ...grpc.CallOption) (*OrderEggResponse, error) {
	args := c.Called(ctx, in)
	if args.Get(0) == nil {
		return nil, args.Error(1)
	}
	return args.Get(0).(*OrderEggResponse), args.Error(1)
}

To generate these mocks every time we invoke the protobuf compiler, we can write a short plugin using google.golang.org/protobuf/compiler/protogen. A Go protobuf plugin is a binary that reads a serialized code-generation request (naming the .proto files to build, along with other parameters) on stdin, and writes its generated output to stdout as a serialized response. This is kind of a weird interface, so protogen helps us satisfy it by providing helpers that wrap the inputs and outputs, as well as access to parsed protobuf type information. For example, a full plugin that generates a static but useful comment might look like:

In example main.go (written by hand):
package main

import (
	"flag"
	"fmt"

	"google.golang.org/protobuf/compiler/protogen"
)

func main() {
	var flags flag.FlagSet

	protogen.Options{
		ParamFunc: flags.Set,
	}.Run(func(plugin *protogen.Plugin) error {
		for _, f := range plugin.Files {
			// Only generate for files protoc asked us to build, not their imports
			if !f.Generate {
				continue
			}
			filename := fmt.Sprintf("%s.pb.custom.go", f.GeneratedFilenamePrefix)
			g := plugin.NewGeneratedFile(filename, f.GoImportPath)

			// Code generation here
			g.P("package ", f.GoPackageName)
			g.P()
			g.P("// don't forget to take some time to take a sip of water please")
		}

		return nil
	})
}

If this is run on our input example.proto, it yields output:

In example.pb.custom.go (generated by our plugin):
package example

// don't forget to take some time to take a sip of water please

Not close to what we want yet. We can extend this to iterate through the services in the passed-in files; for example, we could generate a helper function for every service with code like:

In the example main.go above (written by hand):
func main() {
	...
		for _, f := range plugin.Files {
			...

			// Code generation here
			g.P("package ", f.GoPackageName)
			g.P()
			g.P("// don't forget to take some time to take a sip of water please")

			// Iterate through all the `service`s in the passed-in file
			for _, svc := range f.Services {
				// For each service, generate a function that prints its name
				g.P("// Here is a useful function")
				g.P(fmt.Sprintf("func %sNamePrinter() {", svc.GoName))
				g.P(fmt.Sprintf(`	log.Println("I'm the printer for the service named %s")`, svc.GoName))
				g.P("}")
			}

			// Referencing this identifier registers the "log" import in the generated file
			g.QualifiedGoIdent(protogen.GoIdent{GoName: "log", GoImportPath: "log"})
		}

		return nil
	})
}

This would now generate, if run on our input example.proto, the output:

In example.pb.custom.go (generated by our plugin):
package example

// don't forget to take some time to take a sip of water please

// Here is a useful function
func EggDeliveryServiceNamePrinter() {
	log.Println("I'm the printer for the service named EggDeliveryService")
}

To finish implementing our client mock generator, we iterate one level deeper, through each service’s protogen.Methods, to generate a fully-formed mock.

The full example is shown here, and generates the client mock code we wanted.
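For a feel of what that inner loop looks like, here is a condensed sketch written to slot into the per-file loop above; the full version in the repo differs in its details.

In the example main.go above (a condensed sketch):
			// For each service, emit a mock type and one testify-backed method per RPC
			for _, svc := range f.Services {
				mockName := "Mock" + svc.GoName + "Client"
				g.P("type ", mockName, " struct {")
				// Passing a protogen.GoIdent to g.P qualifies it and records the import
				g.P("	", protogen.GoIdent{GoName: "Mock", GoImportPath: "github.com/stretchr/testify/mock"})
				g.P("}")
				for _, m := range svc.Methods {
					g.P("func (c *", mockName, ") ", m.GoName,
						"(ctx ", protogen.GoIdent{GoName: "Context", GoImportPath: "context"},
						", in *", m.Input.GoIdent,
						", opts ...", protogen.GoIdent{GoName: "CallOption", GoImportPath: "google.golang.org/grpc"},
						") (*", m.Output.GoIdent, ", error) {")
					g.P("	args := c.Called(ctx, in)")
					g.P("	if args.Get(0) == nil {")
					g.P("		return nil, args.Error(1)")
					g.P("	}")
					g.P("	return args.Get(0).(*", m.Output.GoIdent, "), args.Error(1)")
					g.P("}")
				}
			}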

2. Extend runtime functionality: reflection with protoreflect

Sometimes code generation isn’t enough — sometimes you want to be able to inspect your types at runtime. The new Go protobuf API includes a rich reflection API that exposes a view of types and values from the protobuf type system.

For example, you might want to have a function that sanitizes requests received from a client by replacing any empty primitive values with nil (in proto2), both at the top-level and recursively descending into messages. This might look like:

In go/reflect/clean.go (written by hand; runs at runtime not at compile-time):
// Clean replaces every zero-valued primitive field with a nil value. It recurses
// into nested messages, so it cleans nested primitives too. (For simplicity,
// this version ignores repeated and map fields.)
func Clean(pb proto.Message) proto.Message {
	m := pb.ProtoReflect()

	m.Range(cleanTopLevel(m))

	return pb
}

func cleanTopLevel(m protoreflect.Message) func(protoreflect.FieldDescriptor, protoreflect.Value) bool {
	return func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {
		switch kind := fd.Kind(); kind {
		case protoreflect.BoolKind:
			if fd.Default().Bool() == v.Bool() { m.Clear(fd) }
		case protoreflect.Int32Kind, protoreflect.Sint32Kind, protoreflect.Sfixed32Kind, protoreflect.Int64Kind, protoreflect.Sint64Kind, protoreflect.Sfixed64Kind:
			if fd.Default().Int() == v.Int() { m.Clear(fd) }
		case protoreflect.Uint32Kind, protoreflect.Fixed32Kind, protoreflect.Uint64Kind, protoreflect.Fixed64Kind:
			if fd.Default().Uint() == v.Uint() { m.Clear(fd) }
		case protoreflect.FloatKind, protoreflect.DoubleKind:
			if fd.Default().Float() == v.Float() { m.Clear(fd) }
		case protoreflect.StringKind:
			if fd.Default().String() == v.String() { m.Clear(fd) }
		case protoreflect.BytesKind:
			if len(v.Bytes()) == 0 { m.Clear(fd) }
		case protoreflect.EnumKind:
			if fd.Default().Enum() == v.Enum() { m.Clear(fd) }
		case protoreflect.MessageKind:
			nested := v.Message()
			nested.Range(cleanTopLevel(nested))
		}

		return true
	}
}
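Calling Clean on a request might look like this (a fragment, not a full program: examplepb stands in for wherever the generated OrderEggRequest type lives, and proto.String and proto.Int32 are the standard helpers for proto2 pointer fields):

req := &examplepb.OrderEggRequest{
	Name:    proto.String(""), // explicitly present, but zero-valued
	NumEggs: proto.Int32(0),
}
Clean(req)
// Name and NumEggs are now unset. A field annotated with do_not_clean
// (defined below) would be left alone.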

We can configure this further by defining an optional proto annotation on fields:

extend google.protobuf.FieldOptions {
  // If true, tells the Clean() function in go/reflect not to clean this field
  optional bool do_not_clean = 80001;
}

We can then set this annotation to true for any fields we don’t want to be Clean()ed:

message OrderEggRequest {
  optional string name = 1;
  optional string description = 2 [(annotations.do_not_clean) = true];
  optional int32 num_eggs = 3;
  optional bool with_shell = 4;
  optional Recipient recipient = 5;
}

And exclude it in our cleaning:

In go/reflect/clean.go (written by hand; runs at runtime not at compile-time):
...

func cleanTopLevel(m protoreflect.Message) func(protoreflect.FieldDescriptor, protoreflect.Value) bool {
	return func(fd protoreflect.FieldDescriptor, v protoreflect.Value) bool {

		// Skip cleaning any fields that are annotated with do_not_clean
		opts := fd.Options().(*descriptorpb.FieldOptions)
		if proto.GetExtension(opts, annotations.E_DoNotClean).(bool) {
			return true
		}

		// Otherwise, set any empty primitive fields to nil. For non-primitive fields, recurse down
		// one level with this function
		...
	}
}

What else

We extend protobufs at eggybytes to enforce access control, generate code for database access, and do many other things that involve repetitive but critical code. It’s extremely handy, and lets us ensure we have unified behavior around our types across all parts of our stack. Let us know if you’d like to dig deeper into protobuf, Bazel, Go, or anything else!
