In our previous post (https://zxer.in/playing-with-kubernetes/) we briefly discussed Docker and deploying applications in Kubernetes. We built a backend and a front-end service, and we were able to access the front-end from the internet. But we hadn't connected the two services together. If you remember, we hard-coded the backend IP address in our front-end Node.js code to access the currency converter API. But when we deploy to a Kubernetes cluster, we don't know in advance which IP address will be assigned to our application. If we deploy multiple instances, the problem becomes even more complex, as we need to distribute traffic across multiple IPs.

A Kubernetes Service is an abstraction which defines a logical set of Pods and a policy by which to access them - sometimes called a micro-service. The set of Pods targeted by a Service is (usually) determined by a Label Selector. In our deployment we grouped the front-end Pods using the selector

matchLabels:
      app: currency-app-frontend

What this means is that all Pods carrying the label app: currency-app-frontend will be grouped into a single service. There are two primary modes of service discovery.
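For illustration, a minimal Service manifest that groups those Pods might look like the following sketch. The service name and port numbers here are assumptions (following the tutorial's naming convention and the front-end's listen port), not taken verbatim from the earlier post:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: currency-app-frontend-service   # assumed name, mirroring the tutorial's convention
spec:
  selector:
    app: currency-app-frontend          # every Pod carrying this label becomes part of the service
  ports:
    - protocol: TCP
      port: 80                          # port the service exposes inside the cluster
      targetPort: 8080                  # port the Node.js app listens on
```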

1. DNS

Each service defined in the cluster gets a DNS name. Services can be resolved by name from within the same namespace, and Pods from other namespaces can reach them by adding the namespace to the DNS path. In our case that is currency-app-backend-service.default.svc.cluster.local. So by changing the hard-coded backend IP to this DNS name, the backend service becomes reachable from anywhere in the cluster.


const express = require('express');
const request = require('request');


// Constants
const PORT = 8080;
const HOST = '0.0.0.0';
// Backend IP changed to DNS name
const BACKEND = 'currency-app-backend-service.default.svc.cluster.local';

// App
const app = express();
app.set('view engine', 'ejs');

app.get('/', (req, res) => {
  res.render('pages/index');
});

app.get('/currency', (req, resp) => {
  const url = 'http://' + BACKEND + '/?currency=' + req.query.currency;
  console.log(url + ' API request sent');
  request(url, { json: true }, (err, res, body) => {
    if (err) { return console.log(err); }
    resp.send(body);
  });
});

app.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);

2. Environment Variables

When a Pod is run on a Node, the kubelet adds a set of environment variables for each active Service. While deploying services, we can also define our own environment variables in the YAML file. In our deployment we can set SERVICE_IP and SERVICE_PORT as environment variables and access them from other Pods in the same cluster.
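As a sketch of the YAML approach, explicit values go under env: in the container spec of the Deployment. The container name, image, and variable names below are illustrative assumptions, not taken from the tutorial's actual manifest:

```yaml
# Hypothetical fragment of the front-end Deployment spec
containers:
  - name: currency-app-frontend
    image: currency-app-frontend:latest      # assumed image name
    env:
      - name: BACKEND_HOST
        value: currency-app-backend-service.default.svc.cluster.local
      - name: BACKEND_PORT
        value: "80"
```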

Access the Pod's shell with the following command, substituting your Pod name:

 kubectl --kubeconfig=currency-app-demo.yaml exec -it currency-app-frontend-deployment-95f9cf5-wbn24 sh

Then print the variables using printenv:

# printenv
KUBERNETES_SERVICE_PORT=443
KUBERNETES_PORT=tcp://10.245.0.1:443
NODE_VERSION=8.15.0
HOSTNAME=currency-app-frontend-deployment-95f9cf5-wbn24
YARN_VERSION=1.12.3
HOME=/root
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=10.245.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
KUBERNETES_SERVICE_PORT_HTTPS=443
KUBERNETES_PORT_443_TCP=tcp://10.245.0.1:443
KUBERNETES_SERVICE_HOST=10.245.0.1
PWD=/usr/src/app

This means any service a Pod wants to communicate with must be created before the Pod itself; otherwise the environment variables won't be populated.
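To make the environment-variable approach concrete, here is a small Node.js sketch. Kubernetes generates variables named {SVCNAME}_SERVICE_HOST and {SVCNAME}_SERVICE_PORT, with the service name upper-cased and dashes replaced by underscores, so for currency-app-backend-service they would be CURRENCY_APP_BACKEND_SERVICE_SERVICE_HOST and CURRENCY_APP_BACKEND_SERVICE_SERVICE_PORT. The fallback DNS name and default port below are assumptions for illustration:

```javascript
// Sketch: resolve the backend location from the Service environment
// variables that the kubelet injects, falling back to the cluster DNS name
// (useful when the Pod started before the Service existed).
function backendUrl(env) {
  // Generated as <SVCNAME>_SERVICE_HOST / <SVCNAME>_SERVICE_PORT,
  // service name upper-cased, dashes turned into underscores.
  const host = env.CURRENCY_APP_BACKEND_SERVICE_SERVICE_HOST
    || 'currency-app-backend-service.default.svc.cluster.local';
  const port = env.CURRENCY_APP_BACKEND_SERVICE_SERVICE_PORT || '80';
  return 'http://' + host + ':' + port;
}

// In the front-end app this would be called as backendUrl(process.env).
console.log(backendUrl({
  CURRENCY_APP_BACKEND_SERVICE_SERVICE_HOST: '10.245.1.17',
  CURRENCY_APP_BACKEND_SERVICE_SERVICE_PORT: '8080',
})); // → http://10.245.1.17:8080
```

Note the ordering caveat above: the variables are only populated when the Service was created before the Pod, which is why a DNS fallback is a sensible default.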

Publishing Services - Service Types

If you want to make your service reachable from outside the cluster, you need to publish it. There are several ways to do that:

  1. ClusterIP
    Exposes the service on a cluster-internal IP. Choosing this value makes the service reachable only from within the cluster. This is the default ServiceType, and it is what we used in our first tutorial: once our backend service was assigned a ClusterIP, we hard-coded that IP in our front-end application.
  2. NodePort
    Exposes the service on each Node's IP at a static port, making it accessible even from outside the cluster. A ClusterIP service, to which the NodePort service routes, is created automatically. That means even if there is no Pod running on a particular Node, we can still access the service via NodeIP:NodePort.
  3. LoadBalancer
    We can expose any service by creating a LoadBalancer. Whenever we create a Service with ServiceType LoadBalancer, the underlying cloud provider (AWS/Google Cloud/Azure/DO) creates a separate load balancer and distributes traffic between the service's nodes. This is the easiest method, but you may have to pay separately for it.
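The service types above can be sketched as a manifest; this is a hypothetical example, with the metadata name assumed to follow the tutorial's convention and the ports matching the front-end code:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: currency-app-frontend-service   # assumed name
spec:
  type: LoadBalancer      # or NodePort to expose a static port on every node;
                          # omit entirely for the default ClusterIP
  selector:
    app: currency-app-frontend
  ports:
    - port: 80            # port the load balancer accepts traffic on
      targetPort: 8080    # port the Node.js app listens on
```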