# Streamlining Code Generation for Go gRPC with Bob
If you've ever worked with gRPC in Go, you know that creating and managing the environment to generate code from proto files can be a bit daunting. On top of that, with multiple programs consuming the generated code (e.g. client/server), it's either "build all" or manually remembering which parts have to be rebuilt. The former can be time consuming, while the latter adds cognitive complexity to the workflow.

With that in mind, let's have a look at how it's possible to:
- Manage dependencies like `go`, `protobuf` or `protoc-gen-go` across systems
- Automatically rebuild affected binaries when a proto file changes
## Our proto file

Let's just use a very simple proto file which contains a single service with a single remote procedure call (RPC), `SayHello`. This call gets a `SayHelloRequest` with a string message and returns a `SayHelloResponse` that also contains a string message.

Note: the option `go_package` is required specifically for our Go code generation.
```proto
syntax = "proto3";

option go_package = "pkg/proto/example";

message SayHelloRequest {
  string Message = 1;
}

message SayHelloResponse {
  string Message = 1;
}

service HelloWorldService {
  rpc SayHello(SayHelloRequest) returns (SayHelloResponse);
}
```
## Generate the code

To generate the code we make use of bob. This tool is capable of enforcing build orders and only builds what is necessary. It can determine by itself whether action is needed, depending on the given input (our .proto file) and target (our generated code). From these conditions bob can decide if it's necessary to generate code or not, thus preventing unnecessary build steps further down the build tree. Bob is also able to bring in the required build tools and compilers like `go`, `protobuf`, `protoc-gen-go` and `protoc-gen-go-grpc` via a nix integration. You can find packages on the nix package search.
Let's see what the `bob.yaml` looks like. First we define our nix package store. This is the place bob can install the build tools from. Next we define all the build tools and plugins we need. In this case that will be:

- `go_1_20`
- `protobuf`
- `protoc-gen-go`
- `protoc-gen-go-grpc`
The last step is to define a build job with input `example.proto` and target `pkg/proto/example`, which is the output path for the generated code. The command contains the protoc invocation `protoc --go_out=. --go-grpc_out=. example.proto`.

The full `bob.yaml` will look like this:
```yaml
nixpkgs: https://github.com/NixOS/nixpkgs/archive/nixos-22.11.tar.gz

dependencies: [
  go_1_20,
  protobuf,
  protoc-gen-go,
  protoc-gen-go-grpc,
]

build:
  proto:
    input: example.proto
    cmd: |
      mkdir -p pkg/proto/example
      protoc --go_out=. --go-grpc_out=. example.proto
    target: pkg/proto/example
```
## Brief digression: Should generated code be committed to version control?

There are a lot of different opinions on whether generated code should reside within version control or be treated as a build artifact. These key points helped the teams I've worked in decide how to handle generated code.
- You need the generated code in the CI pipeline to build your application but you will also need it for local development
- If the generated code is part of a critical application, it may be necessary to review the code to ensure that it meets certain standards or complies with specific requirements.
Treating generated code as a build artifact instead of keeping it in version control can help to keep the repository clean and manageable, and can also make it easier to maintain and update the code over time. However, it requires a build system which guarantees the code can be generated or loaded from a cache at any point in time.

A hybrid approach we often used is to check the generated code into version control, then have a step in the CI pipeline regenerate the code and compare it with the checked-in version. That way you have a safety mechanism to make sure the latest version of the generated code has been checked in and hasn't been altered manually. Another advantage is that it makes you less dependent on a specific build tool, since the sources are right there in the repository.
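As a sketch, such a CI guard could look like the following. This is hypothetical GitHub Actions syntax; the job name and runner setup are illustrative, and it assumes bob and nix are available on the runner:

```yaml
# Hypothetical CI job: regenerate the proto code and fail the pipeline
# if it differs from what is committed.
check-generated:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v3
    # Regenerate the code via bob
    - run: bob build proto
    # git diff exits non-zero if the committed code is out of date
    # or has been edited by hand
    - run: git diff --exit-code -- pkg/proto/example
```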
## Simple gRPC client

Let's turn our attention back to the code and create a basic client in `client/main.go`. Our client will establish a connection to the server and send the message "Hello, Server!" using the `SayHello` RPC call. The response from the server will be printed by the client. The following code demonstrates the client implementation:
```go
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	// Update the import path to the location of your generated Go protobuf files
	pb "example.com/bob-grpc/pkg/proto/example"
)

func main() {
	// Set up the connection to the gRPC server
	conn, err := grpc.Dial("localhost:8080", grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("Failed to dial: %v", err)
	}
	defer conn.Close()

	// Create a new HelloWorldService client
	client := pb.NewHelloWorldServiceClient(conn)

	// Prepare the SayHelloRequest
	request := &pb.SayHelloRequest{
		Message: "Hello, Server!",
	}

	// Call the SayHello RPC
	response, err := client.SayHello(context.Background(), request)
	if err != nil {
		log.Fatalf("Failed to call SayHello: %v", err)
	}

	// Print the response message
	fmt.Println("Response:", response.Message)
}
```
## Simple gRPC server

To implement the server, we first create a struct that implements the methods defined in the proto definition. In our case, there is only one method to implement, the `SayHello` method. It takes a `context.Context` and a `SayHelloRequest` object and returns a `SayHelloResponse` and possibly an `error`.

Inside the `SayHello` method, we can access the message sent by the client and print it out. Then we create and return a `SayHelloResponse` message with our response, in this case "Hello from Server!".
After defining the server struct and implementing the necessary methods, we need to create a TCP listener and register our server with a new instance of `grpc.Server`. Finally, we start the server to listen for and handle incoming requests.
Here's the code for our gRPC server implementation:
```go
package main

import (
	"context"
	"fmt"
	"log"
	"net"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"

	pb "example.com/bob-grpc/pkg/proto/example"
)

type helloServer struct {
	pb.HelloWorldServiceServer
}

func (s *helloServer) SayHello(ctx context.Context, req *pb.SayHelloRequest) (*pb.SayHelloResponse, error) {
	message := req.GetMessage()
	fmt.Println("Received message:", message)

	// Create the response message
	responseMessage := "Hello from Server!"
	response := &pb.SayHelloResponse{
		Message: responseMessage,
	}
	return response, nil
}

func main() {
	lis, err := net.Listen("tcp", ":8080")
	if err != nil {
		log.Fatalf("Failed to listen: %v", err)
	}

	s := grpc.NewServer(grpc.Creds(insecure.NewCredentials()))
	pb.RegisterHelloWorldServiceServer(s, &helloServer{})

	fmt.Println("Server started on port 8080")
	if err := s.Serve(lis); err != nil {
		log.Fatalf("Failed to serve: %v", err)
	}
}
```
## Run it

Before we can run our client and server, we need to generate the gRPC code by running `bob build proto`. On the first invocation of this command, bob will install all the required build tools and plugins and then run the actual `protoc` command. The resulting code will be saved to `pkg/proto/example`.

Run the server with `go run server/main.go`, then run the client with `go run client/main.go`.
Output from server:

```
Server started on port 8080
Received message: Hello, Server!
```

Output from client:

```
Response: Hello from Server!
```
## Building and rebuilding your applications

Bob shines at managing build dependencies without requiring a fixed order in which build tasks must be executed. In our example, we create two build tasks, `server` and `client`. Each of them depends on the previously created `proto` task. Here's what it looks like for the `client` task:
```yaml
client:
  dependsOn:
    - proto
  input: |
    client/
    pkg/
  cmd: |
    mkdir -p build
    go build -o build/client client/main.go
  target: build/client
```
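For completeness, the `server` task can be defined analogously. This fragment is a sketch assuming the server code lives in `server/main.go`, mirroring the client layout:

```yaml
# Analogous build task for the server binary (assumed path: server/main.go)
server:
  dependsOn:
    - proto
  input: |
    server/
    pkg/
  cmd: |
    mkdir -p build
    go build -o build/server server/main.go
  target: build/server
```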
Now, when building the `client` using bob, the tool checks whether the `proto` task needs to be built first. If so, it delays the client's build until the target from `proto` is ready. Let's try it out by running `bob build client`. As we previously built `proto`, its result is cached and the `client` task can run right away.
Let's take it a step further and create a task to build everything. This is easy, as we only need to depend on the build tasks for both `client` and `server`. This "dummy" task doesn't have any executable code and only ensures that all its dependencies are built before it. In other words, it acts as a trigger for the build process.
```yaml
build:
  dependsOn:
    - client
    - server
```
As the `client` and `server` tasks are not dependent on each other, they can even be built in parallel. Also, we named our task `build`, which is considered to be the default task and can be executed by just calling `bob build`.
In a fresh environment, where no other tasks have been built before and nothing has been cached yet, the `proto` task is built first. Thanks to bob's parallel build capability, the `client` and `server` tasks are then built simultaneously, resulting in a significantly faster build process, as can be seen from the full run duration.
## Conclusion
In this article, we've seen just how easy it is to create a simple gRPC client and server in Go, thanks in part to the powerful bob build tool. By using protocol buffers as the data interchange format and letting bob handle the code generation process, we can achieve better performance and scalability while keeping our repositories clean and manageable.
So if you're looking to streamline your Go development workflow and take advantage of the benefits of gRPC, be sure to check out bob. With its easy-to-use interface and automatic build tool integration, you'll be up and running with gRPC in no time.
You can find the full code for this article at github.com/benchkram/bob-grpc-example.
## Do you need Kubernetes or Go Experts?
We are a Software Agency from Stuttgart, Germany and can support you on your journey to deliver outstanding user experiences and resilient infrastructure. Reach out to us.