A brief introduction to gRPC in Go

A gRPC overview for beginners using the Go language

Kamil Lelonek - Software Engineer

--

RPC

An RPC is a network programming model or interprocess communication technique used for point-to-point communications between software applications.

RPC is a protocol that one program can use to request a service from a program located on another computer, without having to understand the network’s details.

RPC stands for “remote procedure call,” and it’s a form of client-server interaction (the caller is a client, the executor is a server), typically implemented via a request-response message-passing system.

The client runtime program knows how to address the remote server application and sends the message requesting the remote procedure across the network. Similarly, the server includes a runtime program and a stub that interface with the remote procedure itself.

How does it work?

The way RPC works is that a sender, or client, creates a request in the form of a procedure, function, or method call to a remote server, which RPC translates and sends. When the remote server receives the request, it processes it and sends a response back to the client, and the application continues its process.

While the server processes the call, the client waits for the server to finish before resuming its own work. However, the use of lightweight processes or threads that share the same address space allows multiple RPCs to be performed concurrently.

The use case

We’ll be implementing a Gravatar service to generate URLs containing an MD5 hash of the associated email address. They can be used to load globally unique avatars from the Gravatar web server.

Our clients will be able to communicate with the server via the RPC protocol, sending their email and the desired image size. In response, they will receive a personalized link to their own avatar configured on https://gravatar.com.

Protocol Buffers

Protobuf (or Protocol Buffers) is a language-agnostic and platform-neutral serialization format invented at Google. Each protocol buffer message is a small logical record of information, containing a series of name-value pairs.

Unlike XML or JSON, with Protocol Buffers you first define the schema in a .proto file. The format is conceptually similar to JSON, but simpler, smaller, strictly typed, readable only by clients and servers that share the schema, and faster to marshal and unmarshal.
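
For example, here is a minimal sketch of what our gravatar.proto could look like (the field names are assumptions inferred from the use case above, not the original gist):

syntax = "proto3";

// Messages can be packaged within a namespace.
package gravatar;

// The client sends an email address and the desired image size.
message GravatarRequest {
  string email = 1;
  int32 size = 2;
}

// The server replies with a personalized avatar URL.
message GravatarResponse {
  string url = 1;
}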

A message type is a list of numbered fields, and each field has a type and a name. After defining the .proto file, you run the protocol buffer compiler to generate code for the object (in the language of your choice), with get/set functions for the fields, as well as object serialization/deserialization functions. As you can see, you can package messages within namespaces as well.

Installation

We compile a protocol buffer definition with the protoc compiler, which generates a target file for the chosen programming language. For Go, the compiler generates a .pb.go file with a type for each message type in your file.

To install the compiler, run:

brew install protobuf

Then, create and initialize a new project inside your GOPATH:

mkdir protobuf-example
cd protobuf-example
go mod init

Next, install Go support for Google’s protocol buffers:

go get -u github.com/golang/protobuf/protoc-gen-go
go install github.com/golang/protobuf/protoc-gen-go

Finally, compile all .proto files:

protoc --go_out=. *.proto

My compiled file looks as follows:
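
Abridged, the generated file contains a Go type for each message, together with getters and serialization helpers. A sketch of typical protoc-gen-go output for the schema above (most helpers omitted):

// Code generated by protoc-gen-go. DO NOT EDIT.
// source: gravatar.proto

package gravatar

type GravatarRequest struct {
	Email string `protobuf:"bytes,1,opt,name=email" json:"email,omitempty"`
	Size  int32  `protobuf:"varint,2,opt,name=size" json:"size,omitempty"`
}

// Getters guard against nil receivers.
func (m *GravatarRequest) GetEmail() string {
	if m != nil {
		return m.Email
	}
	return ""
}

type GravatarResponse struct {
	Url string `protobuf:"bytes,1,opt,name=url" json:"url,omitempty"`
}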

gRPC

gRPC is a high-performance RPC framework built using protocol buffers (as both its Interface Definition Language and its underlying message interchange format) and HTTP/2.

Once you’ve specified your data structures, you also define gRPC services in ordinary .proto files, with RPC method parameters and return types specified as protocol buffer messages. In our case it’s exactly:

service GravatarService {
  rpc Generate(GravatarRequest) returns (GravatarResponse) {}
}

When you use protoc with a gRPC plugin to generate code from your proto file, you will get not only the regular protocol buffer code for populating, serializing, and retrieving your message types but also generated gRPC client and server code. To do that, just run:

protoc --go_out=plugins=grpc:. *.proto

The diff is as follows:
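
In short, the gRPC plugin additionally emits a client interface, a server interface, and their constructor and registration functions. A sketch of the new declarations, following the standard protoc-gen-go grpc plugin conventions:

type GravatarServiceClient interface {
	Generate(ctx context.Context, in *GravatarRequest, opts ...grpc.CallOption) (*GravatarResponse, error)
}

type GravatarServiceServer interface {
	Generate(context.Context, *GravatarRequest) (*GravatarResponse, error)
}

// ...plus NewGravatarServiceClient(cc *grpc.ClientConn) GravatarServiceClient
// and RegisterGravatarServiceServer(s *grpc.Server, srv GravatarServiceServer).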

Implementation

We now have generated server and client code, so we need to implement and call these methods in our application.

Let’s start by implementing the base logic for our “core business”:

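A minimal sketch of that logic (the helper name and package layout are illustrative, not necessarily those of the original gist):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"strings"
)

// hash returns the hex-encoded MD5 digest of a trimmed,
// lower-cased email address, as the Gravatar spec requires.
func hash(email string) string {
	normalized := strings.ToLower(strings.TrimSpace(email))
	digest := md5.Sum([]byte(normalized))
	return hex.EncodeToString(digest[:])
}
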
There’s nothing special about it; it’s just regular MD5 generation in Go.

The implementation of the server is more interesting, though:

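A sketch of the server under stated assumptions (the port number and the import path of the generated package are mine, not the author’s):

package main

import (
	"context"
	"fmt"
	"log"
	"net"

	"google.golang.org/grpc"

	pb "protobuf-example/gravatar" // assumed import path of the generated code
)

const port = ":8080" // assumed port

// gravatarService covers the GravatarService definition from the .proto file.
type gravatarService struct{}

// Generate receives a GravatarRequest and produces the corresponding GravatarResponse.
func (gravatarService) Generate(_ context.Context, r *pb.GravatarRequest) (*pb.GravatarResponse, error) {
	// hash is the MD5 helper from the core-logic sketch above.
	url := fmt.Sprintf("https://www.gravatar.com/avatar/%s?s=%d", hash(r.Email), r.Size)
	return &pb.GravatarResponse{Url: url}, nil
}

func main() {
	// Open a TCP listener on the given port.
	listener, err := net.Listen("tcp", port)
	if err != nil {
		log.Fatalf("failed to listen: %v", err)
	}

	// Create a gRPC server, register our handler, and start serving.
	server := grpc.NewServer()
	pb.RegisterGravatarServiceServer(server, gravatarService{})

	if err := server.Serve(listener); err != nil {
		log.Fatalf("failed to serve: %v", err)
	}
}
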
We defined the port to run the server on, and the gravatarService struct, which covers the GravatarService definition from the .proto file. As you can see, we also implemented the required Generate method on it, which receives a GravatarRequest and produces a corresponding GravatarResponse.

We open a TCP listener on the given port, create a new gRPC server, register our handler on it, and start it on the opened listener. We are ready to handle requests now.

The implementation of a client is not much harder either; it’s even easier, I would say:

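A sketch under the same assumptions (server address and generated-package import path):

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"

	pb "protobuf-example/gravatar" // assumed import path of the generated code
)

const address = "localhost:8080" // assumed address of our local server

func main() {
	// Open a connection to the given address; plaintext is fine for a local demo.
	conn, err := grpc.Dial(address, grpc.WithInsecure())
	if err != nil {
		log.Fatalf("did not connect: %v", err)
	}
	defer conn.Close()

	// Register a new client on the connection.
	client := pb.NewGravatarServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Call Generate with our data inside a GravatarRequest.
	response, err := client.Generate(ctx, &pb.GravatarRequest{
		Email: "someone@example.com", // illustrative input
		Size:  200,
	})
	if err != nil {
		log.Fatalf("could not generate: %v", err)
	}

	log.Printf("Gravatar URL: %s", response.Url)
}
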
We open a connection to the given address (localhost with the previously defined port, in our case) and register a new client on that connection. Keep in mind we have to close the connection and cancel the context when exiting our program.

Finally, we call the Generate method on our client, passing a GravatarRequest with our data inside it. If it succeeds, we can print the received URL with our hash.


Summary

Protocol Buffers offer very real advantages in terms of speed of encoding and decoding, size of the data on the wire, and more. You may wonder now, what are the benefits of gRPC over a regular JSON REST API. Let’s consider a couple of things:

Schemas

We rely too often on inconsistent code at the boundaries between our systems; plain JSON does nothing to enforce the structure that is so important there. Encoding the semantics of your business objects once, in proto format, is enough to help ensure the signal doesn’t get lost between applications and that the boundaries you create enforce your business rules.

Backward Compatibility

With numbered fields, you never have to change the behavior of code going forward to maintain backward compatibility with older versions. As the documentation states, once Protocol Buffers were introduced:

“New fields could be easily introduced, and intermediate servers that didn’t need to inspect the data could simply parse it and pass through the data without needing to know about all the fields.”
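
For instance, extending our request message with a new numbered field is a non-breaking change; older clients simply never set it, and older servers skip over it (the default_image field below is hypothetical):

message GravatarRequest {
  string email = 1;
  int32 size = 2;
  // New field: old binaries ignore it, new ones can use it.
  string default_image = 3;
}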

Schema evolution

A stub class generated by Protocol Buffers (one you generally never have to touch) can provide much of JSON’s functionality without all of its headaches. Your schema evolves along with your proto-generated classes (once you regenerate them, admittedly), leaving more room for you to focus on the challenges of keeping your application going and building your product.

Validations

The required, optional, and repeated keywords in Protocol Buffers definitions are extremely powerful. They allow you to encode, at the schema level, the shape of your data structure, while the implementation details of how the classes work in each language are handled for you. Libraries will raise exceptions, for example, if you try to encode an object instance which does not have the required fields filled in. You can also always change a field from being required to being optional, or vice versa, by simply rolling to a new numbered field for that value. Having this kind of flexibility encoded into the semantics of the serialization format is incredibly powerful.
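
These keywords come from the proto2 syntax; a small sketch:

syntax = "proto2";

message Profile {
  required string email = 1;     // encoding fails if this is missing
  optional string nickname = 2;  // may be absent
  repeated string interests = 3; // zero or more values
}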

Language Interoperability

Because Protocol Buffers are implemented in a variety of languages, they make interoperability between polyglot applications in your architecture much simpler. If you’re introducing a new service in NodeJS, Go, or even Elixir, you simply hand the proto file to the code generator for the target language, and you have some nice guarantees about the safety and interoperability between those applications.

You can argue that you would still use JSON for some simple cases, and I agree: Protocol Buffers are not a wholesale replacement for JSON, especially for services which are directly consumed by a web browser. I just hope you will find the right place for them among your own use cases.

Many thanks to those who inspired me to write this article and reviewed the code.
