BGP, Routing, and iptables Rules
When a pod is created, Felix assigns it an IP address; iptables rules then define all allowed and prohibited connections. If the routing needs to adapt to a change in the cluster, Felix also generates the necessary routes and pushes them into the kernel. Unlike the iptables rules, however, routing is a concept that goes beyond a single node.
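You can watch Felix at work on any cluster node. The following commands are only a sketch for illustration – naming details vary across Calico versions, but Calico's iptables chains are conventionally prefixed with cali, and the routes it distributes carry the BIRD routing daemon as their protocol source:

# Routes to pod subnets on other nodes, installed by Calico's BIRD daemon
ip route show proto bird

# iptables rules that Felix manages for pod traffic (chains prefixed "cali")
iptables-save | grep '^-A cali'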
This is where BGP comes into play: the protocol that configures Internet-wide routes also helps connect containers on different nodes; Calico simply limits its scope to the Kubernetes cluster. Most administrators are a little in awe of BGP, an almost all-powerful routing protocol that can redirect the traffic of entire countries.
At first glance, its options are rather frightening, especially in terms of their effect and security. However, a connection to the carrier back end is not necessary: Calico does not need access to this very heavily protected part of the Internet routing infrastructure, so the security risk is only an apparent one.
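How narrow that scope is can be seen in Calico's global BGP settings. The following sketch assumes a newer Calico release with the projectcalico.org/v3 API (older versions configure this differently); it keeps the default node-to-node mesh and pins the cluster to a private AS number, so no route ever touches public infrastructure:

apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # Every node peers only with the other nodes of the cluster
  nodeToNodeMeshEnabled: true
  # Private AS number (64512-65534), invisible to Internet routing
  asNumber: 64512

A calicoctl apply -f on this file activates the settings cluster-wide.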
Rules for the Network
One important aspect of Kubernetes is the ability to share clusters – that is, to set up a common cluster for several teams and possibly even for several stages of development, testing, and production. Listings 3 and 4 show typical configurations. Listing 3 adds an annotation to a namespace: specifically, the net.beta.kubernetes.io/network-policy annotation receives a JSON object that sets ingress isolation to the DefaultDeny value. As a result, pods in other namespaces fail to reach any of the pods in this namespace. If you want to restrict access further, isolate not only the namespaces, but also the pods within the namespace.
Listing 3
Annotation of a Namespace
[...]
kind: Namespace
apiVersion: v1
metadata:
  annotations:
    net.beta.kubernetes.io/network-policy: |
      {
        "ingress": {
          "isolation": "DefaultDeny"
        }
      }
[...]
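If you prefer not to edit the namespace manifest, the same annotation can be attached on the fly with kubectl; the namespace name myproject here is just an example:

kubectl annotate namespace myproject \
  'net.beta.kubernetes.io/network-policy={"ingress": {"isolation": "DefaultDeny"}}'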
Listing 4 describes the policy for pods with the db role in which a Redis database is running. It only allows incoming TCP traffic (known as ingress) on port 6379 from pods in the myproject namespace that carry the frontend role; all other pods are denied access. If you take a closer look at the definition, you will see significant similarities to well-known packet filters such as iptables – but here, the Kubernetes user can adjust the rules very quickly at the level of pods and containers.
Listing 4
Network Policy
[...]
apiVersion: extensions/v1beta1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
[...]
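A quick way to convince yourself that the policy works is to start a throwaway pod with and without the whitelisted label and ping the database. The following sketch assumes the labels and namespace from Listing 4; the Redis pod IP is a placeholder you need to fill in from your own cluster:

# Allowed: the pod carries the role=frontend label
kubectl run test-ok --rm -ti --restart=Never --image=redis \
  --labels="role=frontend" --namespace=myproject \
  -- redis-cli -h <redis-pod-ip> ping

# Blocked: the same pod without the label never gets an answer
kubectl run test-blocked --rm -ti --restart=Never --image=redis \
  --namespace=myproject -- redis-cli -h <redis-pod-ip> ping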
If the network layer learns of a change in the container life cycle through a container network interface (CNI) plugin, such as the previously mentioned Calico, it generates and implements the matching rules in iptables and at the routing level. What matters here is the separation of responsibilities (separation of concerns): Kubernetes merely provides the controller; the implementation is handled by the network layer.
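This division of labor is visible on every node: the kubelet merely invokes whatever plugin it finds configured under /etc/cni/net.d/ and leaves the actual wiring to it. A minimal Calico CNI configuration might look like the following sketch (the file paths and network name are typical defaults, not requirements):

{
  "name": "k8s-pod-network",
  "cniVersion": "0.3.1",
  "type": "calico",
  "ipam": {
    "type": "calico-ipam"
  },
  "kubernetes": {
    "kubeconfig": "/etc/cni/net.d/calico-kubeconfig"
  }
}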
Ingress
Kubernetes offers the framework, whereas an ingress controller takes care of actually making the services available. Controllers based on HAProxy [16], the Nginx reverse proxy [17], and F5 hardware [18] are available. An earlier article [19] describes the general concepts behind these services.
Ingress extends these concepts significantly. Whereas the built-in service type only allows simple round-robin load balancing, an external controller pulls out all the stops and can conjure up publicly reachable hosts with arbitrary URLs, arbitrary load-balancing algorithms, SSL termination, or virtual hosting out of thin air.
Listing 5 shows a simple example of a path-based rule. It redirects everything under /foo to the service s1 and everything under /bar to the service s2. Listing 6 shows a host-based rule, which evaluates the HTTP Host header to distinguish between hostnames.
Listing 5
A Path-Based Rule
[...]
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: s1
          servicePort: 80
      - path: /bar
        backend:
          serviceName: s2
          servicePort: 80
[...]
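Assuming foo.bar.com resolves to the address of your ingress controller, two curl calls are enough to see the path split from Listing 5 in action:

curl http://foo.bar.com/foo   # answered by service s1
curl http://foo.bar.com/bar   # answered by service s2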
Listing 6
A Host-Based Rule
[...]
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - backend:
          serviceName: s1
          servicePort: 80
  - host: bar.foo.com
    http:
      paths:
      - backend:
          serviceName: s2
          servicePort: 80
[...]
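Because only the Host header decides here, you can test the rule from Listing 6 without touching DNS by talking to the controller's IP address directly (the address is a placeholder):

curl -H "Host: foo.bar.com" http://<ingress-ip>/   # ends up at s1
curl -H "Host: bar.foo.com" http://<ingress-ip>/   # ends up at s2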
If you want to provide a service with SSL support, you can only do so on port 443. Listing 7 shows how to provide an SSL certificate, which ends up in a Kubernetes secret. To use the certificate, reference it in an ingress rule by simply stating the secret's name in the .spec.tls[0].secretName field (Listing 8).
Listing 7
Setting up an SSL Certificate for a Service
[...]
apiVersion: v1
data:
  tls.crt: <base64 encoded cert>
  tls.key: <base64 encoded key>
kind: Secret
metadata:
  name: mysecret
  namespace: default
type: Opaque
[...]
Listing 8
Using Certificate Services
[...]
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: no-rules-map
spec:
  tls:
  - secretName: mysecret
  backend:
    serviceName: s1
    servicePort: 80
[...]
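Incidentally, you do not need to base64-encode the certificate yourself: kubectl builds a secret like the one in Listing 7 directly from the PEM files. The file names below are examples; note that kubectl then sets the secret type to kubernetes.io/tls rather than Opaque, which ingress controllers accept as well:

kubectl create secret tls mysecret \
  --cert=path/to/tls.crt --key=path/to/tls.key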