Google Kubernetes Engine (GKE) can expose HTTPS services using Kubernetes Gateway/Ingress.
To expose non-HTTP services, you need to separately set up
External TCP/UDP Network Load Balancing and configure routing to the Service.
While each GKE cluster can be shared among multiple gateways, a TCP load balancer must be added separately for each port you want to expose.
Also, even for HTTPS services, manual configuration may be necessary to utilize additional GCP features.
Create NEG
A Network Endpoint Group (NEG) can be created as part of standard Kubernetes operations to handle incoming requests.
Adding the cloud.google.com/neg annotation to a Service creates a standalone NEG in GCP.
The corresponding Pods, for example those created by a Deployment, need only a fairly standard configuration.
apiVersion: v1
kind: Service
metadata:
  name: mail
  labels:
    name: mail
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"25": {"name": "mail"}}}'
spec:
  type: ClusterIP
  selector:
    name: mail
  ports:
    - port: 25
      targetPort: mail-smtp
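For reference, the corresponding workload could be a minimal Deployment along these lines; the image name and container port number are placeholders, and the only detail that matters is the named port mail-smtp that the Service’s targetPort refers to:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mail
spec:
  replicas: 1
  selector:
    matchLabels:
      name: mail
  template:
    metadata:
      labels:
        name: mail
    spec:
      containers:
        - name: mail
          image: <some-mail-image>    # placeholder image
          ports:
            - name: mail-smtp         # named port referenced by the Service's targetPort
              containerPort: 2525     # assumed port the Pod actually serves SMTP on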
You can check the resource status with the command kubectl get ServiceNetworkEndpointGroups [-o yaml <NEG name>].
When the NEG is no longer needed, it is deleted together with the Service by kubectl delete.
The port numbers in the exposed_ports annotation should be the ones the load balancer will access: for a TCP load balancer, the port you want to expose; for an HTTPS load balancer, 443 or 80.
Since the ports in exposed_ports must also be specified in spec.ports[].port, if the Pod serves on a port different from the exposed one, you need to use targetPort to perform the conversion.
Error on NEG creation
A NEG name cannot be duplicated within a zone, although the same name can be used in different zones. The zone is determined by the location of the Kubernetes cluster.
When creation fails, for example because of a duplicate name, the k8s Service is created but the NEG is not, which is confusing behavior.
You can check the error by running kubectl describe service <service-name> and interpreting the specific messages in the Events field. Note that even when NEG creation fails, the events are logged only at the WARNING level.
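Concretely, with the mail Service from the example above, the check could look like this:
$ kubectl describe service mail              # read the Events field for NEG creation errors
$ kubectl get ServiceNetworkEndpointGroups   # list the NEG status objects created by the controller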
Create LoadBalancer
HTTP services use Ingress, and GKE Ingress supports container-native load balancing.
For TCP services, on the other hand, there is no corresponding Kubernetes resource, and the load balancer has to be constructed manually with the gcloud compute command.
Create the objects in the following order, along the dependency chain:
health check, backend service, TCP proxy, forwarding rule.
Backend service creation
$ gcloud compute health-checks create tcp <tcp-basic-check> --use-serving-port
The --use-serving-port option makes the health check follow the Service’s port configuration.
$ gcloud compute backend-services create <gke-bs> \
--global --protocol TCP \
--global-health-checks --health-checks <tcp-basic-check> \
--timeout 5m
Since the TCP proxy is a global-scoped service, the backend service needs to be created with --global.
$ gcloud compute backend-services add-backend <gke-bs> --global \
--network-endpoint-group=<mail> --network-endpoint-group-zone=asia-northeast1-a \
--balancing-mode CONNECTION --max-connections-per-endpoint 1
In addition to --global for the backend service, you also need to specify the zone of the NEG.
TCP proxy creation
$ gcloud compute target-tcp-proxies create <gke-lb-proxy> --backend-service <gke-bs> [--proxy-header PROXY_V1]
If you want to use the PROXY protocol to preserve client IP addresses, specify --proxy-header PROXY_V1 in the target-tcp-proxies creation options.
Create targetHttpsProxy manually
HTTPS services can be set up with the k8s Gateway, like Ingress, and a basic configuration requires no network setup with gcloud.
To leverage features unique to GCP load balancers, you would instead create an HTTPS set consisting of target-https-proxy, url-maps, and ssl-certificates, replacing the target-tcp-proxy.
Since HTTPS load balancer objects are complex, especially for the initial setup, it’s easier to create them via the web console.
In this configuration, url-maps route to backend-services, so traffic reaches each k8s Service via its NEG.
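As a rough sketch, assuming a Google-managed certificate and reusing the backend service from earlier, the HTTPS set could be created like this (all names in angle brackets are placeholders):
$ gcloud compute ssl-certificates create <gke-cert> --domains <example.com> --global
$ gcloud compute url-maps create <gke-url-map> --default-service <gke-bs> --global
$ gcloud compute target-https-proxies create <gke-https-proxy> \
  --url-map <gke-url-map> --ssl-certificates <gke-cert>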
Forwarding rule creation
$ gcloud compute forwarding-rules create <tcp-gke-forwarding-rule> --global \
--target-tcp-proxy <gke-lb-proxy> --ports 25 --address <some-static-ip>
The --address option specifies a pre-allocated static IP. When creating multiple forwarding rules, they can share the same IP as long as their ports do not overlap.
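If the address has not been reserved yet, a global static IP can be allocated in advance; <some-static-ip> is just a placeholder name:
$ gcloud compute addresses create <some-static-ip> --global --ip-version IPV4
$ gcloud compute addresses describe <some-static-ip> --global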
Manage Routing objects
The four GCP-specific objects, target-tcp-proxy, forwarding-rules, backend-services, and health-checks, can easily be managed as code using Terraform.
Utilizing Terraformer allows you to export objects created with the gcloud command.
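As a sketch, the export might look roughly like the following; the project ID is a placeholder and the --resources identifiers are assumptions, so check the Terraformer documentation for the exact names:
$ terraformer import google --projects <some-project-name> --regions asia-northeast1 \
  --resources forwardingRules,backendServices,healthChecks,targetTcpProxies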
Migrate clusters
If you created a global external TCP proxy load balancer using the steps above, migrating the cluster only involves changing the NEG referenced by backend-services among the four GCP objects.
Since the proxy serves traffic globally, it can be reused as is, and the IP address remains unchanged.
- Create a new Kubernetes cluster and deploy the Services and Pods.
- The annotation in the Service manifest automatically creates the NEG in the zone of the new cluster.
- Note that if you create the new cluster in the same zone, you may get a duplicate-NEG error, and the annotation must be modified.
- Modify the backend-services configuration.
- Use gcloud (see the sketch below) or Terraform.
health-checks is a global object and can be reused for the same service even if the zone changes.
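With gcloud, swapping the backend could look roughly like this; the names and zones are carried over from the earlier examples and are placeholders for your own values:
$ gcloud compute backend-services remove-backend <gke-bs> --global \
  --network-endpoint-group <mail> --network-endpoint-group-zone asia-northeast1-a
$ gcloud compute backend-services add-backend <gke-bs> --global \
  --network-endpoint-group <mail> --network-endpoint-group-zone asia-northeast1-b \
  --balancing-mode CONNECTION --max-connections-per-endpoint 1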
If using Terraform, you can change the NEG pointed to by backendServices by rewriting the URI in resource.backend.group of backendServices.
In the excerpted example below, the zones segment of the URI should be modified to match the location of the new cluster:
resource "google_compute_backend_service" "tfer--mail" {
backend {
group = "https://www.googleapis.com/compute/v1/projects/<some-project-name>/zones/asia-northeast1-b/networkEndpointGroups/<NEG-name>"
}
}
Once you’ve made the edits, you can proceed with the standard process: confirm the change with terraform plan and apply the configuration with terraform apply.
During this process, the health-checks are reused as well.
However, note that depending on the gcloud options used during construction, it is possible to end up with a regional TCP load balancer instead, so it is essential to determine the configuration before starting. The various types of services are explained in
GCP Load Balancer Classification.
If you have already created a regional TCP load balancer, an overall reconstruction is necessary and the IP address changes; ultimately, you need to switch over by making DNS configuration changes.
Furthermore, comprehensive guidelines for aspects beyond networking are explained in the GKE Cluster Migration Whitepaper.
Chuma Takahiro