protobuf vs gRPC
gRPC is an instantiation of the RPC integration style, built on the protobuf serialization library. There are five integration styles: RPC, File Transfer, MOM (message-oriented middleware), Distributed Objects, and Shared Database. RMI is another instantiation of the RPC style; there are many others. IBM MQ is an instantiation of the MOM style, as is RabbitMQ. An Oracle database schema is an instantiation of the Shared Database style. CORBA is an instantiation of the Distributed Objects style. And so on. Avro is an example of another binary serialization library.
Actually, gRPC and protobuf are two completely different things. Let me simplify:
- gRPC manages the way a client and a server can interact (just like a web client/server with a REST API)
- protobuf is just a serialization/deserialization tool (just like JSON)
gRPC has two sides: a server side, and a client side that is able to dial a server. The server exposes RPCs (i.e. functions that you can call remotely). And you have plenty of options there: you can secure the communication (using TLS), add an authentication layer (using interceptors), and so on.
You can use protobuf inside any program; it has no need to be client/server. If you need to exchange data and want it strongly typed, protobuf is a nice option (fast and reliable).
That being said, you can combine both to build a nice client/server system: gRPC will be your client/server code, and protobuf your data protocol.
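As a minimal sketch (hypothetical names), a single .proto file can hold both halves: the protobuf messages define the data, and the gRPC service defines what can be called remotely:

```proto
// greeter.proto — a hypothetical example, not from the paper
syntax = "proto3";

package demo;

// protobuf part: strongly typed data
message HelloRequest {
  string name = 1;
}

message HelloReply {
  string message = 1;
}

// gRPC part: the function a client can call remotely
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}
```

Running protoc with the gRPC plugin over this file generates both the serialization code (protobuf) and the client/server stubs (gRPC).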
PS: I wrote this paper to show how one can build a client/server with gRPC and protobuf using Go, step by step.
gRPC is a framework built by Google, and it is used in production projects by Google itself; Hyperledger Fabric is built with gRPC, and many other open-source applications are built with it too.
protobuf is a data representation, like JSON; it is also by Google. In fact, thousands of .proto files are used in their production projects.
gRPC
- gRPC is an open-source framework developed by Google
- It allows us to define the request & response for an RPC, and the framework handles the rest
- REST is CRUD-oriented, but gRPC is API-oriented (no constraints)
- Built on top of HTTP/2
- Provides auth, load balancing, monitoring, and logging
- [HTTP/2]
- HTTP/1.1 was released in 1997, a long time ago
- HTTP/1.1 opens a new TCP connection to the server for each request
- It doesn't compress headers
- No server push; it just works with requests and responses
- HTTP/2 was released in 2015 (it evolved from Google's SPDY)
- Supports multiplexing
- Client & server can push messages in parallel over the same TCP connection
- Greatly reduces latency
- HTTP/2 supports header compression
- HTTP/2 is binary
- protobuf is binary, so it is a great match for HTTP/2
- [TYPES]
- Unary
- Client streaming
- Server streaming
- Bidirectional streaming
- gRPC servers are async by default
- gRPC clients can be sync or async
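The four call types above map directly onto the rpc syntax in a .proto file; a hypothetical service showing all four:

```proto
// Hypothetical service illustrating the four gRPC call types
service ChatService {
  // Unary: one request, one response
  rpc GetUser (UserRequest) returns (UserReply);
  // Server streaming: one request, a stream of responses
  rpc ListMessages (UserRequest) returns (stream ChatMessage);
  // Client streaming: a stream of requests, one response
  rpc UploadLog (stream LogEntry) returns (UploadStatus);
  // Bidirectional streaming: both sides stream independently
  rpc Chat (stream ChatMessage) returns (stream ChatMessage);
}
```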
protobuf
- Protocol buffers are language-agnostic
- Parsing protocol buffers (a binary format) is less CPU-intensive than parsing text formats
- [Naming]
- Use CamelCase for message names
- underscore_separated for field names
- Use CamelCase for enum names and CAPITALS_WITH_UNDERSCORES for enum value names
- [Comments]
- Supports // line comments
- Supports /* */ block comments
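The naming and comment conventions above, applied to a hypothetical message:

```proto
/* Block comments are supported too. */
message SearchRequest {          // CamelCase message name
  string query_string = 1;       // underscore_separated field name
  ResultOrder order = 2;

  enum ResultOrder {             // CamelCase enum name
    ORDER_UNSPECIFIED = 0;       // CAPITALS_WITH_UNDERSCORES value names
    ORDER_ASCENDING = 1;
    ORDER_DESCENDING = 2;
  }
}
```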
- [Advantages]
- Data is fully typed
- Data is serialized compactly (less bandwidth usage)
- A schema (message definition) is needed to generate code and to read the data
- Documentation can be embedded in the schema
- Data can be read across any language
- The schema can evolve over time in a safe manner
- Smaller and faster than XML
- Code is generated for you automatically
- Google invented protobuf; they use some 48,000 protobuf messages & 12,000 .proto files
- Lots of RPC frameworks, including gRPC, use protocol buffers to exchange data
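The safe-evolution point relies on field numbers: old readers simply skip numbers they don't recognize, and removed numbers can be marked reserved so they are never reused. A hypothetical example:

```proto
// Evolved message: field 2 ("nickname") was removed in a later version
message UserProfile {
  reserved 2;            // removed field number can't be reused
  reserved "nickname";   // removed field name can't be reused either
  string name = 1;
  string email = 3;      // new field added with a fresh number
}
```

Because the wire format is keyed by number, an old binary reading a new message (or vice versa) still decodes the fields it knows about.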
Protocol Buffers is an Interface Definition Language (IDL) and a serialization library:
- You define your data structures in its IDL i.e. describe the data objects you want to use
- It provides routines to translate your data objects to and from binary, e.g. for writing/reading data from disk
gRPC uses the same IDL but adds an "rpc" syntax that lets you define remote procedure call method signatures, using the Protobuf data structures as data types:
- You define your data structures
- You add your rpc method definitions
- It provides code to serve up and call the method signatures over a network
- You can still serialize the data objects manually with Protobuf if you need to
In answer to the questions:
- gRPC works at layers 5, 6 and 7. Protobuf works at layer 6.
- When you say "message transfer", Protobuf is not concerned with the transfer itself. It only works at either end of any data transfer, turning bytes into objects
- Using gRPC by default means you are using Protobuf. You could write your own client that uses Protobuf but not gRPC to interoperate with gRPC, or plug other serializers into gRPC, but using gRPC would be easier
- True
- Yes you can