One of the examples in the previous page on Pods was that of an nginx server. The server runs on the cluster just fine, but it won't be of much use until it's accessible from outside the cluster. The objects that would let your requests reach it are Services and Ingresses.

Services are objects that other things in the cluster connect to so that network traffic can reach your Pods.

  • Other Pods can use the Service's name as a DNS name for requests they send within the cluster, as discussed in the next page (and illustrated briefly just after this list).
  • Separate objects exist that can route external traffic to your Pods via Services. Ingresses are one such object, and you'll find an overview of them later in this page.
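
For example, once the Service created below exists and has attached to the nginx Pod, a quick way to see this in-cluster DNS in action is to start a throwaway Pod and request the Service by name. This is only an illustration; the Pod name and image are arbitrary choices:

$ kubectl run dns-test --rm -it --restart=Never --image=alpine:3 -- wget -qO- http://nginx-service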

Services

Below is an example manifest of a very simple Service. Create a file called "nginx-service.yml" that has these contents and put it on your cluster with kubectl apply -f nginx-service.yml.

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  ports:
  - port: 80
  selector:
    component: webserver

Note the "selector" field at the bottom. Services need this selector field to find out which Pods they should attach to. This selector has "component: webserver", which means it'll attach to any Pod which has a "component" label and for which the value of that label is "webserver".

The intention of this Service is to connect to the nginx-pod from the previous page's example. But there's no "component: webserver" label in that Pod's manifest. That label will have to be added.

Update your "nginx-pod.yml" file to have the below contents (which adds the "component" label), and update the Pod by using kubectl apply -f nginx-pod.yml.

apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    component: webserver
spec:
  containers:
  - name: nginx
    image: nginx:1.17-alpine
    ports:
    - containerPort: 80

Now both the Service and the Pod should be ready. Check the Service with kubectl get service nginx-service. You should see something like the output below:

$ kubectl get service nginx-service
NAME            TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-service   ClusterIP   10.43.250.124   <none>        80/TCP    12m

The Service has now used its selector to latch on to the Pod, but you still can't get external traffic to it. If you're using Rancher Desktop, you could at this point enable port forwarding on the Service so that requests can be made to it. That capability is meant only for testing, though, and isn't something you'll be able to do in a normal cluster. Instead, we'll use an Ingress.
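
If you just want to poke at the Service before setting up an Ingress, kubectl itself can forward a local port to it on any cluster; the local port 8080 here is an arbitrary choice:

$ kubectl port-forward service/nginx-service 8080:80

While that command runs, requests to http://localhost:8080 reach the Service. Like the Rancher Desktop feature, this is only a testing convenience.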

Ingresses

Ingresses are objects that manage network traffic. The material here will use them to expose Pods to the world outside the cluster, but be aware that they can also do things like proxying and redirecting (with some behaviors dependent on the cloud provider).

Below is a minimal Ingress manifest. Put it in a file named "my-app-ingress.yml" and add it to the cluster with kubectl apply -f my-app-ingress.yml.

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80

If you're following along with Rancher Desktop, you'll now be able to access your Pod through http://localhost:80, or just http://localhost.
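
For example, a request from your host should now return nginx's default welcome page:

$ curl http://localhost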

Now take a look at the Ingress with kubectl get ingress my-app-ingress:

$ kubectl get ingress my-app-ingress
NAME             CLASS     HOSTS   ADDRESS        PORTS   AGE
my-app-ingress   traefik   *       192.168.5.15   80      10m
  • The "CLASS" here is "traefik", which is the default Ingress class used by Rancher Desktop. Other clusters may have different default classes. You may specify in the manifest what class to use, but this is rarely needed.
  • Ingresses can be configured to use host matching rules, which would go in the "HOSTS" column. But since our Ingress doesn't do that, the default * is displayed here.
  • "ADDRESS" is the host name or IP address at which traffic external to the cluster can reach the Ingress. Note that in the specific case of Rancher Desktop it doesn't work like this. Rancher Desktop runs its cluster in a Lima VM and uses the corresponding Lima network, which means the address the Ingress thinks its exposing (192.168.5.15) isn't actually accessible from your host. Rancher Desktop instead lets you access it through localhost. It works like this only in Rancher Desktop; in other clusters you should be able to access the Ingress through what's mentioned in the ADDRESS column and neighboring PORTS column.

Ingress alternatives

The space occupied by Ingresses is seeing many changes in the world of Kubernetes. This page demonstrates the interaction between a Service and an Ingress because it is likely to work on the clusters of most cloud providers. But there are many other ways of achieving the same outcome, and more are being introduced as Kubernetes continues to develop.

Two alternatives to Ingresses are HTTPProxy and the Gateway API. These are not available in all Kubernetes clusters; they are Custom Resource Definitions that cloud providers can choose to support. Both cover the ground Ingresses do and more, and are likely to see more active development than Ingresses going forward. You may wish to check whether they're available in your cluster and use them instead.
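
As a rough sketch only: if the Gateway API is installed in your cluster, routing traffic to the same Service might look like the HTTPRoute below. The Gateway named "my-gateway" is an assumption (one would need to exist or be created first), and the exact apiVersion depends on the Gateway API version your provider supports:

apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: nginx-route
spec:
  parentRefs:
  - name: my-gateway              # an existing Gateway; assumed here
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /
    backendRefs:
    - name: nginx-service
      port: 80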

In some Kubernetes clusters, the Services themselves can be opened to external traffic. In such cases an Ingress may not be needed; instead, you change the spec.type of the Service. The type used in the examples so far is the default, ClusterIP. Two other types are sometimes used for exposing Pods directly to external traffic: LoadBalancer and NodePort. Keep in mind that these Service types will not be available or functional on every cluster, since they depend on the specifics of the cloud provider's implementation. Still, they are common enough to be worth knowing about.

LoadBalancer

Below is a simple example of a LoadBalancer Service's manifest:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-lb
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    component: webserver

Running kubectl get service <service-name> shows the same columns as for the default ClusterIP type. For a LoadBalancer, outside traffic can reach the Service via the host name or IP address shown under "EXTERNAL-IP". Shortly after the Service is created, that column will show <pending> until the cluster (typically the cloud provider) finishes provisioning the load balancer.
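
As an illustration only (the addresses and age will of course differ on your cluster), the output might look something like this once the address has been assigned:

$ kubectl get service nginx-service-lb
NAME               TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
nginx-service-lb   LoadBalancer   10.43.12.201   203.0.113.40   80:31234/TCP   3m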

NodePort

NodePort Services allow outside access through a consistent port opened on every node of the cluster. Below is an example:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service-np
spec:
  type: NodePort
  ports:
  - nodePort: 30000
    port: 80
    targetPort: 80
  selector:
    component: webserver

You'll notice that there are three fields that specify some kind of port.

  • nodePort specifies the port that traffic outside the cluster uses to reach the Service. This port must be in the range 30000 to 32767, and if it's not specified, an unused port within this range will be chosen.
  • port is the port traffic within the cluster uses to communicate with the Service.
  • targetPort is the port listened on by the Pods the Service attaches to. If not set, it defaults to whatever value was set for "port".

So, to connect to this Service from outside the cluster, you'll use the host name or IP address of one of the cluster's nodes (note that this is different from the "CLUSTER-IP" column shown by kubectl get service) and the value of the nodePort field as the port, as sketched below.
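
For example, one way to find a node address and make the request might be the following; <node-address> is a placeholder for whatever address your cluster reports:

$ kubectl get nodes -o wide
$ curl http://<node-address>:30000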