Consider a scenario in which a Google Kubernetes Engine (GKE) cluster is running an API-only application. This application provides data to various internal apps and is only accessible by internal business apps running on the same GKE cluster. A new requirement asks the company to expose some API endpoints from this internal application to an external client outside the organisation. In general, we can expose application pods using the Kubernetes Service resource, which can be any of the following:
- NodePort: Exposes a service via a static port on each node’s IP
- ClusterIP: Default type of service which is only accessible from within the cluster
- LoadBalancer: Exposes the application to the public internet via an external load balancer
- ExternalName: Maps a service to the contents of its externalName field by returning a CNAME record with that value
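For reference, a minimal Service of the default ClusterIP type looks like the following; the name and selector are illustrative, and the port numbers match the Rails application used later in this post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: shop-api            # illustrative name
spec:
  type: ClusterIP           # the default; this line may be omitted
  selector:
    app: shop-api
  ports:
    - port: 80              # port exposed inside the cluster
      targetPort: 3000      # port the Rails container listens on
```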
The client application for this use case is not running on our cluster; it lives on a completely different network belonging to the client. As a result, we have to provide an interface through which clients can interact with our internal application and request data from it. In this blog post, I will provide a high-level overview of how to achieve this requirement using the GCP API Gateway and Cloud Functions.
The Application
The application used in this post is a basic Rails API application that returns JSON responses with information about the products and associated reviews. The application would be far more complex in the real world.
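As a sketch of the response shape (taken from the sample response later in this post), the application serialises products and their nested reviews into JSON. The structs and helper below are illustrative, not the actual Rails code:

```ruby
require "json"

# Illustrative data structures mirroring the API's response shape.
Review  = Struct.new(:id, :name, :message, :rate)
Product = Struct.new(:title, :price, :reviews)

# Serialise a list of products (with nested reviews) into the JSON
# shape the API returns.
def products_payload(products)
  {
    products: products.map do |p|
      {
        title:   p.title,
        price:   p.price,
        reviews: p.reviews.map(&:to_h)
      }
    end
  }.to_json
end
```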
Component Details
This flow begins at the API Gateway. Clients will be given API endpoints, and for the time being, requests are authenticated with an API key only. Relying solely on an API key, however, is not regarded as safe: keys are vulnerable to man-in-the-middle attacks and have no expiration date. To mitigate this, we can impose restrictions on the keys, such as limiting which APIs they can call. In the real world, we'd use the API key in conjunction with another authentication technique, such as OAuth access tokens.
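GCP API Gateway is configured with an OpenAPI 2.0 spec. A minimal sketch of what ours might look like is shown below; the title, paths and the Cloud Function backend address are illustrative assumptions, not the exact config from this project:

```yaml
swagger: "2.0"
info:
  title: shop-api-gateway   # illustrative
  version: "1.0.0"
schemes:
  - https
securityDefinitions:
  api_key:
    type: apiKey
    name: x-api-key         # matches the X-API-KEY header used later in this post
    in: header
paths:
  /products:
    get:
      operationId: listProducts
      security:
        - api_key: []
      # Hypothetical Cloud Function URL that proxies requests to the GKE service
      x-google-backend:
        address: https://europe-west2-my-project.cloudfunctions.net/shop-api-proxy
      responses:
        "200":
          description: A list of products
```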
Because the API Gateway can't make direct requests to the internal GKE application, we need an intermediary layer to request data and return it to the gateway. To accomplish this, we will use Cloud Functions as a middle layer to forward requests to the GKE service.
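In a Ruby Cloud Function (built on the functions_framework gem), the handler would do little more than forward the incoming path to the internal load balancer. The sketch below shows that proxy logic with plain standard-library Ruby; the backend address is a hypothetical internal IP, not a value from this project:

```ruby
require "uri"
require "net/http"

# Hypothetical internal load balancer address; replace with the internal IP
# (or DNS name) of your internal LoadBalancer service.
BACKEND_BASE = ENV.fetch("BACKEND_URL", "http://10.128.0.10")

# Build the backend URI for an incoming gateway request path.
def backend_uri(path)
  URI.join(BACKEND_BASE, path)
end

# Forward a GET request to the internal GKE service and return the
# [status, headers, body] triple a Rack-style HTTP handler expects.
def proxy_get(path)
  res = Net::HTTP.get_response(backend_uri(path))
  [res.code.to_i, { "Content-Type" => "application/json" }, [res.body]]
end
```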
Cloud Functions are a serverless component of Google Cloud Platform, so they run in a Google-managed network that is distinct from our VPC network. To establish communication between the two, GCP's VPC networking offers a Serverless VPC Access connector, which allows serverless environments to send traffic directly into your VPC network.
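Setting this up might look something like the following gcloud commands; the connector name, region, IP range and function name are illustrative assumptions:

```shell
# Create a Serverless VPC Access connector in the same region as the cluster
# (the /28 range must be an unused range within your VPC).
gcloud compute networks vpc-access connectors create shop-api-connector \
  --region=europe-west2 \
  --network=default \
  --range=10.8.0.0/28

# Deploy the function so its egress traffic is routed through the connector.
gcloud functions deploy shop-api-proxy \
  --runtime=ruby32 \
  --trigger-http \
  --region=europe-west2 \
  --vpc-connector=shop-api-connector
```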
Once we’ve established communication between Cloud Functions and the VPC network, we’ll need a way to call our application pod running within our GKE cluster. At this point, the Kubernetes Service resource comes into play. We require a service that provides an interface to our application pod. This service should be neither public (i.e., accessible from the public internet) nor cluster-private (i.e., only accessible from within the cluster); it needs to be reachable from inside the VPC network.
To address this, GKE provides the ability to create an internal load balancer that is only accessible via the GCP internal VPC network. The difference between a standard load balancer service and an internal load balancer service is that the internal load balancer does not expose our application to the public internet; instead, it only makes our application available to applications outside the cluster that use the same VPC network and are located in the same Google Cloud region.
To create an internal load balancer service, define a Service of type LoadBalancer and add the following annotation to its metadata:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: shop-api-internal-load-balancer
  namespace: staging
  annotations:
    networking.gke.io/load-balancer-type: "Internal"
spec:
  ports:
    - port: 80
      protocol: TCP
      targetPort: 3000
  selector:
    app: shop-api
  type: LoadBalancer
```
We’ll be able to route traffic to our application pods once the load balancer is up and running. As the base endpoint for our application, we’ll utilise the internal IP address of the load balancer to make calls to our application from the Cloud Function. We could also create a DNS entry and utilise that instead of the IP address, but I’m just going to use the IP address highlighted in the following screenshot.
Application testing
Once everything is in place, we can use an API key in the header to call the API Gateway endpoint.
```shell
curl --location --request GET 'https://shopi-api-gateway-gygw9k8.nw.gateway.dev/products' \
  --header 'X-API-KEY: '
```
If no errors occur, we should see a response from the application running within the GKE cluster.
```json
{
  "products": [
    {
      "title": "Veranda Symphony",
      "price": "83.84",
      "reviews": [
        {
          "id": 4,
          "name": "Msgr. Arron Morissette",
          "message": "Nisi est ut consequatur.",
          "rate": 2
        },
        {
          "id": 9,
          "name": "Colin Wilkinson IV",
          "message": "Expedita ut nemo dolores.",
          "rate": 2
        }
      ]
    },
    {
      "title": "Pumpkin-spice Cake",
      "price": "76.56",
      "reviews": []
    },
    {
      "title": "Café Volcano",
      "price": "31.24",
      "reviews": [
        {
          "id": 3,
          "name": "Nicky Towne",
          "message": "Neque quas ratione aut.",
          "rate": 4
        },
        {
          "id": 6,
          "name": "Wm Rempel",
          "message": "Est atque rerum velit.",
          "rate": 2
        },
        {
          "id": 7,
          "name": "Marcelo Funk",
          "message": "Placeat repellat architecto dolor.",
          "rate": 2
        }
      ]
    }
  ]
}
```
Conclusion
Serverless tools provide great convenience to developers by eliminating the need to manage infrastructure. They enable you to rapidly bootstrap a working service at a reasonable cost. In this blog post, I walked through a simple use case to demonstrate how serverless tools can cover a wide range of requirements.
Application repository links
We are Kyan, a technology agency powered by people.