GKE pod network latency is unusually high

I used GKE to deploy a Standard mode Kubernetes cluster; the region is Africa-c. MySQL and Redis were purchased in the Africa-b region, and the other business middleware was also purchased in the Africa region. When I deployed business applications to Kubernetes, I found during functional testing that the network latency between pods was extremely high, exceeding 10s. I had deployed Istio in the cluster, and at first I thought the latency was caused by the injected Envoy sidecar, but after I disabled automatic injection the problem remained.
During the subsequent investigation, I noticed in the GKE console a small difference between the configurations of the two clusters I created. For my African regional cluster, the Service IP range assigned after automatic creation was 34.118.224.0/20, which is clearly not a normally assigned private network segment. By comparison, my European regional cluster was automatically assigned a Service IP range of 10.14.176.0/20, which is obviously a private segment. Could this cause the network latency, or are there other troubleshooting approaches or solutions?
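As a quick side check, you can confirm whether a CIDR block falls in the RFC 1918 private space with Python's standard `ipaddress` module; the two ranges below are the ones from the clusters described above:

```python
import ipaddress

# Service ranges observed on the two clusters (from the post above)
africa_services = ipaddress.ip_network("34.118.224.0/20")
europe_services = ipaddress.ip_network("10.14.176.0/20")

# is_private covers the RFC 1918 blocks (10/8, 172.16/12, 192.168/16) among others
print(africa_services.is_private)  # False -> not an RFC 1918 block
print(europe_services.is_private)  # True  -> inside 10.0.0.0/8
```

So the African cluster's Service range is indeed outside the private address space, which is what prompted the question.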


In GKE Autopilot clusters running version 1.27 and later, and GKE Standard clusters running version 1.29 and later, GKE assigns IP addresses for Services from a GKE-managed range, 34.118.224.0/20 by default. This eliminates the need for you to specify your own IP address range for Services. In your African regional cluster you did not define a secondary range, which is why GKE assigned the default managed range. So the differing Service IP range and Pod IP range should not, by themselves, create this slowness.
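If you want to sanity-check that a given Service ClusterIP really comes from that managed range, a membership test with `ipaddress` is enough; the sample ClusterIP below is hypothetical, standing in for a value you would read from `kubectl get services` output:

```python
import ipaddress

# Default GKE-managed Services range mentioned above
managed_range = ipaddress.ip_network("34.118.224.0/20")

# Hypothetical ClusterIP, standing in for one from your cluster
cluster_ip = ipaddress.ip_address("34.118.230.17")

print(cluster_ip in managed_range)  # True -> allocated from the managed range
```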

You need to configure tracing in your application to identify where the slowness occurs.
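Before wiring up full distributed tracing, a quick sanity check is to time a single request end to end. A minimal sketch, assuming the local test server below stands in for your real Service URL:

```python
import http.server
import threading
import time
import urllib.request

# Stand-in service; in a real check, point the URL at your Service's DNS name or ClusterIP
server = http.server.HTTPServer(("127.0.0.1", 0), http.server.SimpleHTTPRequestHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_port}/"

# Time one request end to end
start = time.perf_counter()
with urllib.request.urlopen(url) as resp:
    status = resp.status
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"status={status} latency={elapsed_ms:.1f} ms")

server.shutdown()
```

If a raw request like this already takes seconds between pods, the problem is in the network path rather than in the application code, and tracing will help you see which hop contributes the time.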

https://cloud.google.com/trace/docs/trace-app-latency
