Service mesh for Kubernetes microservices
Deploying an Application
I picked WordPress as the demonstration application for this article [6], because it gives a usable result that could be developed in a number of ways. I selected the wordpress:php7.2 and mysql:5.6 images (which I'll call the "production" deployment) from Docker Hub for my containers. To run it as a microservice, I deployed it as shown in Figure 3, separating the deployments of the WordPress web app and its MySQL database, each with its own local storage volume (WordPress needs this for storing media files, themes, plugins, etc.).
To demonstrate Istio's traffic management features, I created a second WordPress deployment that uses the same storage volume as the first but specifies a different WordPress image version, wordpress:php7.3 (which I call the "prerelease" deployment). Thus, both deployments serve the same WordPress version, but on different underlying PHP versions.
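To make the later traffic-splitting configuration easier to follow, here is a rough sketch of how the two WordPress deployments differ. It is not the full manifest from the article, and names such as wordpress-v1 and wordpress-pv-claim are assumptions for illustration only:

# "Production" WordPress Deployment (sketch); pods are labeled version: v1
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress-v1
spec:
  selector:
    matchLabels:
      app: wordpress
      version: v1
  template:
    metadata:
      labels:
        app: wordpress
        version: v1
    spec:
      containers:
      - name: wordpress
        # The "prerelease" deployment is identical except for
        # version: v2 labels and image: wordpress:php7.3
        image: wordpress:php7.2
        volumeMounts:
        - name: wordpress-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-storage
        persistentVolumeClaim:
          claimName: wordpress-pv-claim   # hypothetical PVC name; shared with the prerelease deployment

The version: v1 and version: v2 labels are what the DestinationRule subsets refer to later in this article.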
This (somewhat contrived) example shows how canary testing might be done on a highly available web app. Although it's tangential to the subject of this article, I'd certainly like to share this hard-won piece of knowledge: If you run MySQL in a Kubernetes environment, it will run several times faster if you apply the skip-name-resolve option to your MySQL container, because this prevents MySQL from trying, and failing, to resolve the hostname of each requesting client service. This can be done with a Kubernetes ConfigMap object that provides a custom MySQL configuration file for the database service.
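As a sketch of that approach (the ConfigMap name and file name are assumptions; the stock mysql image reads extra configuration files from /etc/mysql/conf.d):

# Hypothetical ConfigMap carrying the skip-name-resolve setting
apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql-config
data:
  skip-name-resolve.cnf: |
    [mysqld]
    skip-name-resolve
---
# In the MySQL Deployment's pod spec, mount it where the stock
# mysql image picks up additional configuration files:
#   volumeMounts:
#   - name: mysql-config
#     mountPath: /etc/mysql/conf.d
#   volumes:
#   - name: mysql-config
#     configMap:
#       name: mysql-config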
After applying the application manifests with kubectl, I checked that the Istio sidecars were injected and running. On the assumption that your application follows the one-container-per-pod convention, check for the existence of two containers in each pod; you can tell this from the 2/2 values in the second column of Figure 6.
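The check itself is a single command; the output below is illustrative and abridged, and your pod names and timings will differ:

$ kubectl get pods
NAME                         READY   STATUS    RESTARTS   AGE
wordpress-77f7f9c485-k7tt9   2/2     Running   0          3m
...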
A full description of one of these pods (Figure 7) shows that the Istio sidecar is indeed present:
$ kubectl describe pod wordpress-77f7f9c485-k7tt9
So far, Istio has been installed on the cluster (in its own namespace, istio-system ), and the demo application (WordPress with a MySQL back end and some Persistent Volume Claims for content file storage) has been installed in the default namespace. Automatic sidecar injection has been enabled and the pods recreated, so all ingress and egress traffic from each pod is being routed through Istio's Envoy proxies; those proxies are in constant communication with the Istio control plane mother ship. Now, I'll direct the mother ship to manage and observe the pods that make up the application.
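For reference, the injection setup mentioned above boils down to labeling the namespace and recreating the pods; the app= label selectors here are assumptions about how the demo manifests label their pods:

$ kubectl label namespace default istio-injection=enabled
$ kubectl delete pod -l app=wordpress   # the Deployment recreates the pods with sidecars injected
$ kubectl delete pod -l app=mysql       # assumes the database pods carry an app=mysql label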
Internal Security
In the introduction, key internal security duties of a service mesh were identified as authentication, authorization, and encryption. Istio has a philosophy of "security by default," meaning that these three duties can be fulfilled without requiring any changes to the host infrastructure or application on which Istio is being deployed.
Istio uses mutual TLS (mTLS) as its authentication solution for service-to-service communication. As mentioned in the section on service mesh concepts, in a sidecar service mesh, all requests from one service to another are tunneled through the sidecar proxies. By requiring the proxies to perform mTLS handshakes with one another before exchanging application data and by making them verify each other's certificate with a trusted store (Istio Citadel) as part of that process, authentication is achieved.
To make this work, every Envoy proxy created in an Istio service mesh has a key and certificate installed. The certificates encode a service account identity correlated with the cluster hostname or DNS name of the service that they represent. This is referred to as secure naming, and a secure naming check forms part of the mTLS handshake process.
When you deploy Istio with the istio-demo-auth.yaml manifest supplied with the Istio download, mTLS is enabled mesh-wide. Depending on your requirements, you can also enable it on a per-namespace or per-service basis. There are two facets to configuring mTLS: configuring how incoming requests should be handled, by means of authentication policies, and configuring how outgoing requests should be generated, by using DestinationRule objects.
For incoming requests, the MeshPolicy object configures a mesh-wide authentication policy, and the Policy object configures one within a given namespace; target selectors within a Policy can narrow its scope down to individual services. Listing 1 is the .yaml file you would apply to enable mTLS mesh-wide (it is already part of the istio-demo-auth.yaml installation used in this article). For outgoing requests, DestinationRule objects specify the traffic policy TLS mode (Listing 2).
Listing 1
Configuring Incoming Requests
apiVersion: "authentication.istio.io/v1alpha1" kind: "MeshPolicy" metadata: name: "default" spec: peers: - mtls: {}
Listing 2
Configuring Outgoing Requests
apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "mtls-rule" spec: host: "httpbin.default.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL
Having applied an authentication policy and destination rules to establish your desired mTLS setup, use the istioctl CLI tool to check the authentication status of all services. It verifies that every destination rule is compatible with the policies applied to the services it routes to and shows a CONFLICT status for any that are not; it also shows the authentication policy and destination rule currently in effect on each service.
If, for example, an authentication policy requires a service to accept only mTLS requests, but a destination rule pointing to that service specifies mode: DISABLE, its status will be CONFLICT, and, as you would hope, any requests to it will fail with a 503 status code. Even with the demo profiles supplied by Istio, some control plane services are in a conflict status. I didn't fully investigate why, because it didn't affect the features I was using; however, if any of your own application's services show a conflict, it's certainly something you need to address. Run
$ istioctl authn tls-check <pod name>
to see that mTLS is successfully configured on all of your services (Figure 8).
External Security and Traffic Management
In this example, I'll demonstrate configuring HTTPS (after all, I want to end up with a legitimate website) and weighted routing between my two WordPress versions for the purpose of canary testing. This is easy to do by configuring Gateway and VirtualService objects on istio-ingressgateway.
Figure 5 shows that istio-ingressgateway has been assigned one of the real public IPs from my IBM Cloud Kubernetes Service cluster. As with any service providing a secure website, a valid SSL certificate and key are required. I am going to call this website blog.datadoc.info. To generate an SSL certificate and key for this name, I pointed the subdomain at another server running httpd along with the Let's Encrypt certbot program and generated a free certificate and private key. I copied the certificate and key to my local machine and then used kubectl to upload them to my cluster as a named secret:
$ kubectl create -n istio-system secret tls istio-ingressgateway-certs --key privkey.pem --cert fullchain.pem
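For completeness, the certificate itself came from a routine certbot run on that interim server; the exact invocation depends on how httpd and certbot are set up, so treat this as a sketch:

$ certbot certonly --webroot -w /var/www/html -d blog.datadoc.info   # webroot path is an assumption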
Following the examples on the Istio website, I created a manifest for the ingress gateway, specifying the services to which it should direct inbound traffic (Listing 3). The code creates three Istio objects: a Gateway, a VirtualService, and a DestinationRule.
Listing 3
wp-istio-ingressgw.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: wordpress-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 443
      name: https
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "blog.datadoc.info"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: wordpress
spec:
  hosts:
  - "blog.datadoc.info"
  gateways:
  - wordpress-gateway
  http:
  - route:
    - destination:
        subset: v1
        host: wordpress
      weight: 90
    - destination:
        subset: v2
        host: wordpress
      weight: 10
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: wordpress
spec:
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL
  host: wordpress
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
The Gateway is bound to the ingressgateway service by a selector, configuring it as an HTTPS server with the certificate and key that were uploaded earlier. It specifies that this gateway is only for requests sent to the hostname blog.datadoc.info, which prevents users from making requests directly to the IP address.
The VirtualService routes traffic to the WordPress service, which is listening internally on port 80, splitting requests 90/10 between the two subsets defined below.
The DestinationRule specifies that requests routed by these rules should use mTLS authentication. This traffic policy has to match the existing mTLS policy within the mesh, which (because I used the istio-demo-auth.yaml installation option) does require authentication for all requests. Failing to specify it here would cause an authentication conflict, as explained in the "Internal Security" section of this article.
The DestinationRule also defines two named subsets of the WordPress service, each of which selects a different version of the WordPress container. The version: labels in this definition correspond to the version: labels in the template specification of the WordPress app containers that I deployed earlier.
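With the manifest saved as wp-istio-ingressgw.yaml, applying it and sending a burst of requests through the gateway is enough to see it working; the curl loop below is only an illustrative smoke test, and the status codes alone won't tell you which subset answered each request:

$ kubectl apply -f wp-istio-ingressgw.yaml
$ for i in $(seq 1 20); do curl -s -o /dev/null -w "%{http_code}\n" https://blog.datadoc.info/; done

Per the 90/10 weights in the VirtualService, roughly nine out of ten of those requests should be served by the production (v1) subset and the remainder by the prerelease (v2) subset.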