In a microservice architecture, gRPC offers high performance, cross-language support, and strongly typed interfaces, which makes it the mainstream protocol for communication between services. However, exposing gRPC services directly is risky and hard to manage once production-level requirements such as service discovery, load balancing, and security protection come into play. Nginx, the benchmark tool for reverse proxying, can solve these problems well. So how do we use Nginx to forward gRPC requests, balance load across backends, and harden security?
Before we start, let's understand why Nginx is needed in front of gRPC at all.
Load balancing: distribute requests across multiple backend instances to avoid a single point of failure.
SSL/TLS termination: manage certificates centrally at the Nginx layer and take that burden off the backend services.
Traffic control: apply rate limiting, circuit breaking, and similar strategies to protect the stability of the backend services.
Protocol conversion: translate between gRPC and HTTP/JSON (requires a plugin such as grpc-gateway).
Monitoring and logging: collect request metrics centrally to simplify troubleshooting and performance analysis.
Environment preparation: compiling Nginx
gRPC runs over HTTP/2, so make sure the Nginx version is ≥ 1.13.10 and that http_ssl_module and http_v2_module are enabled.
Install dependencies
sudo apt-get install build-essential libpcre3 libpcre3-dev zlib1g zlib1g-dev libssl-dev
Compiling Nginx (taking 1.25.3 as an example)
wget https://nginx.org/download/nginx-1.25.3.tar.gz
tar zxvf nginx-1.25.3.tar.gz
cd nginx-1.25.3
./configure --with-http_ssl_module --with-http_v2_module
make && sudo make install
Verify the modules
/usr/local/nginx/sbin/nginx -V
The output should contain --with-http_ssl_module and --with-http_v2_module.
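For reference, the output of a successful build looks roughly like this (compiler and OpenSSL details will differ on your machine):
nginx version: nginx/1.25.3
built by gcc ...
built with OpenSSL ...
configure arguments: --with-http_ssl_module --with-http_v2_module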
Basic configuration: Forward gRPC requests
Suppose the back-end gRPC service is running on localhost:50051. The following configuration implements the HTTP/2 proxy:
The core fragment of nginx.conf
nginx
http {
    server {
        # Enable HTTP/2 on the TLS listener
        listen 443 ssl http2;
        ssl_certificate     /path/to/cert.pem;
        ssl_certificate_key /path/to/key.pem;

        location / {
            grpc_pass grpc://backend_grpc;
        }
    }

    # Define the back-end service cluster
    upstream backend_grpc {
        server 127.0.0.1:50051;
    }
}
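After editing nginx.conf, validate the syntax and reload (paths assume the default /usr/local/nginx prefix used above):
/usr/local/nginx/sbin/nginx -t
/usr/local/nginx/sbin/nginx -s reload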
Analysis of Key Parameters
grpc_pass: the address of the gRPC backend service, with the scheme grpc:// (plaintext) or grpcs:// (SSL encrypted);
http2: HTTP/2 must be enabled on the listening port;
ssl_*: configure the certificate paths if HTTPS is required.
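If the backend terminates TLS itself, a minimal sketch of the grpcs:// variant looks like the following; the certificate path and backend hostname are placeholders, and the grpc_ssl_* directives belong to the standard ngx_http_grpc_module:
nginx
location / {
    grpc_pass grpcs://backend_grpc;
    # Verify the backend certificate against a trusted CA
    grpc_ssl_trusted_certificate /path/to/backend_ca.pem;
    grpc_ssl_verify on;
    grpc_ssl_name backend.internal;   # hypothetical backend hostname
}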
Advanced Practice: Load Balancing and Health Check
Nginx supports multiple load-balancing strategies (round-robin, IP hash, least connections) and can take faulty nodes out of rotation through active health checks.
Configuration example
nginx
upstream backend_grpc {
    server 192.168.1.101:50051 weight=3;
    server 192.168.1.102:50051;
    server 192.168.1.103:50051 max_fails=3 fail_timeout=30s;

    # Health check (requires Nginx Plus or an open-source alternative
    # such as nginx_upstream_check_module)
    check interval=5000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "PRI * HTTP/2.0\r\n\r\nSM\r\n\r\n";
    check_http_expect_alive http_2xx;
}
Strategy Description
weight=3: Weight allocation. This node receives 3 times the traffic.
max_fails=3: the node is marked unavailable after three failures within fail_timeout;
check_http_send: the HTTP/2 connection preface sent as the health-check probe.
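The other strategies mentioned earlier are a single directive each; for example, a sketch using least connections (swap in ip_hash for per-client-IP affinity):
nginx
upstream backend_grpc {
    least_conn;    # or: ip_hash;
    server 192.168.1.101:50051;
    server 192.168.1.102:50051;
    server 192.168.1.103:50051;
}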
SSL/TLS security hardening
Enable strong encryption and protocol restrictions for gRPC communication:
nginx
server {
    ...
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256;
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}
Certificate generation (self-signed example)
openssl req -x509 -newkey rsa:4096 -nodes -keyout key.pem -out cert.pem -days 365
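To double-check the result (subject and validity period), inspect the generated certificate:
openssl x509 -in cert.pem -noout -subject -dates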
Debugging and Verification
Step 1: Start Nginx
/usr/local/nginx/sbin/nginx
Step 2: Test using grpcurl
Install grpcurl
go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
Send a request
grpcurl -insecure -proto service.proto localhost:443 mypackage.MyService/MyMethod
(-insecure skips certificate verification for the self-signed certificate above; use -plaintext only when the listener is not terminating TLS.)
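If the backend has gRPC server reflection enabled, grpcurl can also list and describe services without the .proto file (service name as in the hypothetical example above):
grpcurl -insecure localhost:443 list
grpcurl -insecure localhost:443 describe mypackage.MyService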
Step 3: Check the Nginx log
tail -f /usr/local/nginx/logs/access.log
Output example:
127.0.0.1 "POST /mypackage.MyService/MyMethod HTTP/2.0" 200 45
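A refinement worth trying (verify on your Nginx version): upstream response trailers are exposed as $upstream_trailer_* variables, so a custom log_format can record the gRPC status code next to the HTTP status:
nginx
http {
    # grpc-status trailer from the upstream, plus request timing
    log_format grpc_log '$remote_addr "$request" $status '
                        'grpc_status=$upstream_trailer_grpc_status '
                        'rt=$request_time';
    server {
        access_log /usr/local/nginx/logs/grpc_access.log grpc_log;
    }
}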
Performance tuning techniques
Connection reuse: keep idle upstream connections open with keepalive, and tune keepalive_requests and keepalive_timeout to reduce TCP and TLS handshake overhead.
nginx
upstream backend_grpc {
    # server entries as in the earlier examples
    keepalive 100;               # idle connections kept open per worker
    keepalive_requests 10000;
    keepalive_timeout 60s;
}
Buffer optimization: Adjust the buffer size of gRPC messages to avoid blocking of large packet transmission.
nginx
http {
    grpc_buffer_size 1m;               # buffer for reading the response from the gRPC backend
    grpc_next_upstream_timeout 10s;    # cap the time spent retrying the next upstream
}
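Also easy to miss: long-lived gRPC streams are cut off by the default 60-second proxy timeouts, so raise grpc_read_timeout and grpc_send_timeout in the location that handles streaming (a sketch with illustrative values):
nginx
location / {
    grpc_pass grpc://backend_grpc;
    grpc_connect_timeout 5s;
    grpc_read_timeout 300s;    # max idle time between reads from the backend
    grpc_send_timeout 300s;    # max idle time between writes to the backend
}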
Use limit_conn_zone to limit the number of concurrent connections per IP to prevent resource exhaustion.
nginx
limit_conn_zone $binary_remote_addr zone=grpc_conn:10m;    # defined in the http context
server {
    limit_conn grpc_conn 100;    # at most 100 concurrent connections per client IP
}
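Request-rate limiting (the "speed limit" mentioned at the beginning) works the same way; a sketch with illustrative numbers:
nginx
limit_req_zone $binary_remote_addr zone=grpc_req:10m rate=100r/s;
server {
    location / {
        limit_req zone=grpc_req burst=200 nodelay;    # absorb short bursts, reject the excess with 503
        grpc_pass grpc://backend_grpc;
    }
}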
Common Problems and Solutions
502 Bad Gateway: Check whether the backend service is running. The Nginx error log (error.log) usually contains detailed reasons.
Protocol error: not a gRPC request: Confirm that the client uses the HTTP/2 protocol and the grpc_pass configuration is correct.
Performance bottleneck: enable Nginx's stub_status module to monitor connection counts, and tune worker_processes and worker_connections accordingly.
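For reference, a minimal stub_status endpoint restricted to localhost looks like this:
nginx
location = /basic_status {
    stub_status;
    allow 127.0.0.1;
    deny all;
}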
A well-thought-out Nginx configuration gives you efficient forwarding of gRPC traffic along with enterprise-grade load balancing and security protection. The configuration templates and optimization methods shared above have been validated in multiple production environments at tens of thousands of QPS, and should help you get more out of your microservice architecture.