Kubernetes networking in the kernel

Core Connection

Network Policy with Cilium

NetworkPolicy is a standard Kubernetes object that restricts ingress and egress traffic at Layers 3 and 4. An ingress NetworkPolicy allows a source (specified by an IP block, a namespace selector, pod labels, or a combination thereof) to reach specific ports on pods matching given labels in the namespace where the policy is created.
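
For example, a minimal standard NetworkPolicy of this kind might look like the following sketch (the namespace and label names here are hypothetical):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: backend
spec:
  podSelector:               # the pods this policy protects
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:     # source namespace ...
        matchLabels:
          kubernetes.io/metadata.name: frontend
      podSelector:           # ... AND source pod labels (no dash: combined)
        matchLabels:
          app: frontendapp
    ports:
    - port: 80
      protocol: TCP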

When no network policy selects a pod, that pod's traffic is not restricted; however, as soon as any policy selects it for ingress or egress, a default deny kicks in for that traffic direction, meaning that rules have to be created for all traffic you still want to permit.
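
A common starting point is therefore an explicit default deny, which you can then punch holes in with additional allow policies. A minimal sketch (again with a hypothetical namespace) is:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: backend
spec:
  podSelector: {}    # empty selector: applies to every pod in the namespace
  policyTypes:
  - Ingress          # no ingress rules listed, so all ingress is denied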

For NetworkPolicy to be enforced, the cluster's network plugin must implement it, and not all plugins do. For example, if your cluster uses the Flannel plugin, NetworkPolicy objects will have no effect because Flannel doesn't support them.

Cilium enforces both standard network policies and its custom resource CiliumNetworkPolicy, which is a superset of NetworkPolicy that adds support for L7 (application layer) policies with the aid of an Envoy proxy running inside the Cilium agent (the Cilium daemonset pod running on each node). L7 policies implement rules according to the application payload of packets, and Cilium supports them for HTTP, gRPC, Kafka, and DNS hostnames (which is useful, e.g., for limiting access to external websites and APIs in egress rules).
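
As a quick illustration of the DNS-based egress case, a CiliumNetworkPolicy along the following lines (the hostname api.example.com and the labels are made up for illustration) lets the selected pods resolve names through kube-dns and then reach only the resolved addresses of that one hostname over HTTPS:

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-api-only
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  egress:
  # Allow DNS lookups via kube-dns so the toFQDNs rule can be resolved
  - toEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: ANY
      rules:
        dns:
        - matchPattern: "*"
  # Allow HTTPS traffic only to the resolved addresses of this hostname
  - toFQDNs:
    - matchName: api.example.com
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP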

Imagine that an API-based application is running inside your cluster and you want to allow only pods running in a specified namespace to access particular endpoints of that API. The CiliumNetworkPolicy object in Listing 3 shows how you could allow pods in an admin namespace to access the API endpoints of pods in a backend namespace (represented by the /api/v1/.* regex in the HTTP rules), and allow all pods in a user namespace to access the docs page of the back-end pods, but not the API.

Listing 3

Example CiliumNetworkPolicy

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-api-from-admin
  namespace: backend
spec:
  endpointSelector:
    matchLabels:
      app: nginx
  ingress:
  - fromEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: admin
        app: adminapp
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /api/v1/.*
        - method: GET
          path: /docs
  - fromEndpoints:
    - matchLabels:
        io.kubernetes.pod.namespace: user
    toPorts:
    - ports:
      - port: "80"
        protocol: TCP
      rules:
        http:
        - method: GET
          path: /docs

Understanding YAML arrays and lists is a great help in interpreting this policy. A single dash (-) can make the difference between a policy that allows traffic meeting the constraints of "this rule and that rule" and one that allows "this rule or that rule." In the example, the ingress array contains two elements. One matches pods labeled app: adminapp in the admin namespace and allows access to the /docs path and any path starting with /api/v1/. The other allows pods in the user namespace to access only the /docs path; attempts to access any other path result in an Access denied response from Cilium's Envoy instance.
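
Schematically, the two variants of the same ingress field below (abbreviated fragments, not complete policies) show how the dash placement changes the meaning:

# One ingress element: source AND port constraints must both match
ingress:
- fromEndpoints:
  - matchLabels:
      app: adminapp
  toPorts:
  - ports:
    - port: "80"
      protocol: TCP

# Two ingress elements: traffic matching EITHER element is allowed
ingress:
- fromEndpoints:
  - matchLabels:
      app: adminapp
- toPorts:
  - ports:
    - port: "80"
      protocol: TCP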

One notable point is the difference in denial behavior between an L3/L4 policy rule and an L7 policy rule. When a connection attempt is denied at the L3/L4 level, the ingress packet is dropped without any response, so the request hangs and eventually times out, as though a firewall were silently discarding the traffic. When the request is denied at the L7 level, it fails immediately: the HTTP request receives an Access denied response with a 403 (Forbidden) status code.
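
You can observe both failure modes from a test pod. Assuming a hypothetical client deployment named userapp in the user namespace, another client in a namespace no rule matches, and a back-end service named backend-svc, the behavior would be roughly:

# Allowed by the L7 rule for the user namespace
kubectl -n user exec deploy/userapp -- curl -s http://backend-svc.backend/docs

# Denied at L7: fails immediately with "Access denied" and HTTP 403
kubectl -n user exec deploy/userapp -- curl -s http://backend-svc.backend/api/v1/users

# Denied at L3/L4 (no rule matches this namespace): hangs until the timeout
kubectl -n other exec deploy/otherapp -- curl -s --max-time 5 http://backend-svc.backend/docs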

Traffic Flows and Policy Decisions

Cilium comes with a tightly coupled observability service and UI called Hubble, which gives an at-a-glance view of all your cluster traffic flows, filterable by namespace (Figure 4). It's easy to see whether one entity tried to access another and (if successful) which network policy allowed that connection to happen. The commands

cilium hubble enable --ui
cilium status --wait
kubectl -n kube-system port-forward service/hubble-ui :80

enable Hubble and connect to its UI.
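
The same flow data is also available on the command line through the hubble CLI, assuming it is installed locally and a port-forward to Hubble Relay is in place (e.g., with cilium hubble port-forward):

# Show flows in the backend namespace that were dropped by policy
hubble observe --namespace backend --verdict DROPPED

# Follow live HTTP flows in the backend namespace
hubble observe --namespace backend --protocol http --follow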

Figure 4: Viewing traffic flows and policy outcomes in the Hubble user interface.

Conclusion

Pod networking is often the forgotten child of Kubernetes. Exploring the configuration and performance of popular network plugins such as Cilium can help you select and configure one that meets your performance and security needs.

The Author

Abe Sharp is a Distinguished Technologist at Hewlett Packard Enterprise, where he works on containerized AI and MLOps platforms in the Ezmeral business unit.
