Securing the TLS ecosystem with Certificate Transparency
A Curse and a Blessing
Certificate Transparency further secures the TLS ecosystem on the Internet by identifying unauthorized certificates. This transparency, however, offers attackers an opportunity to search for services (e.g., video conferencing systems) that are unprotected on the network. Administrators need to be aware that, thanks to Certificate Transparency, supposedly confidential domains or subdomains are published as soon as a certificate is issued for them.
The preventive measures implemented this year to protect people against COVID-19 have unexpectedly turned IT landscapes in many countries upside down. All of a sudden, working from home, which many corporations had previously not allowed, became a necessity. In view of the lack of alternatives, many managers and IT departments had to establish new processes quickly and expand existing infrastructures. In the heat of the moment, many new services were set up, initially just to explore the possibilities.
Many of the new installations were tele- and video conferencing systems, and they have remained in operation as permanent provisional solutions – secured with a Let's Encrypt certificate, but without any further protection and often usable without even logging in. Because the service was not linked anywhere and was only used internally, many administrators – for reasons of convenience and because they had many other urgent tasks – decided to leave things as they were rather than, say, putting proper access controls in place after the initial rush.
Identifying Spoofed Certificates
Recent years have seen a great deal of flux in the TLS certificate ecosystem. The Let's Encrypt service, announced in 2014, revolutionized the entire certificate market: Without too much setup overhead and without any costs, administrators can use this service to secure the communication of their web services. According to the Censys website [1], Let's Encrypt has a market share of more than 50 percent of trusted websites. Attracting somewhat less public attention, the Certificate Transparency strategy promoted by Google had already been implemented back in 2013.
The motivation for Certificate Transparency dates back to 2011. After a break-in at the certificate authority DigiNotar, there was no way to trace the falsely issued certificates reliably – neither for DigiNotar itself nor for external domain owners. Once attackers have access to a renowned Certificate Authority (CA), they can issue valid certificates at will, with no restriction on the choice of domains – even domains that normally have their certificates issued by other CAs.
Certificate Transparency creates public log data so that you, as the person responsible for a domain, can check which certificates have been issued in your name. Each issued certificate has a log entry with all the corresponding domain names, the issuing CA, and a time stamp. The log is kept on a special log server, where every CA can store the certificates it issues in a tamper-evident, cryptographically secured manner. As the domain admin, you can check these logs and make sure that nobody else has issued certificates for your domain. If someone has, the certificate authority responsible can be identified.
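One convenient way to run such a check – not part of the toolchain used later in this article, but a handy illustration – is the public crt.sh search service, which aggregates the Certificate Transparency logs and offers a JSON interface:

```python
# Sketch: query crt.sh (a public CT log aggregator) for all certificates
# logged for a domain and its subdomains; "example.com" is a placeholder.
import json
import urllib.request

domain = "example.com"
url = f"https://crt.sh/?q=%25.{domain}&output=json"  # %25 = URL-encoded wildcard "%"

with urllib.request.urlopen(url) as response:
    entries = json.loads(response.read().decode())

for entry in entries:
    # Issuing CA, covered (sub)domain names, and validity start of each certificate
    print(entry["not_before"], entry["issuer_name"], entry["name_value"])
```

Any entry issued by a CA you never contracted is a reason to investigate.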
The market power of Google, especially with the Chrome browser, also convinced established CAs to create log entries for the certificates they issue: Chrome would simply have displayed a warning for every certificate from a non-participating CA, even if the certificate was otherwise valid. If you come across a previously unlogged certificate while surfing the web, a message to this effect is displayed. The diagram in Figure 1 gives an overview of how Certificate Transparency is used in everyday life. The approach seamlessly extends the previous process of certificate issuance and verification to the browser.
Log Access via Proxy Server
Various service providers operate their own log servers. The relevant list is defined by Google and is the same list used in Google Chrome; a log server that is not included in this list is basically useless. One of several ways to access these logs is to query the entire log directly via the server's API.
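As a rough sketch of what direct access looks like, the following Python snippet talks to a log's RFC 6962 endpoints; the log URL is only a placeholder and has to be replaced with one from Google's list:

```python
# Sketch of direct log access via the RFC 6962 API (get-sth, get-entries).
# LOG_URL is a placeholder; pick a real log from Google's log list.
import json
import urllib.request

LOG_URL = "https://ct.example-log.net/"

def get_json(path):
    with urllib.request.urlopen(LOG_URL + path) as response:
        return json.loads(response.read().decode())

# The signed tree head reports how many entries the log currently holds
sth = get_json("ct/v1/get-sth")
print("log size:", sth["tree_size"])

# Fetch the first ten raw entries; each one contains a DER-encoded
# certificate that still needs to be parsed to extract the domain names
entries = get_json("ct/v1/get-entries?start=0&end=9")
print(len(entries["entries"]), "entries fetched")
```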
Direct access may be suitable for an occasional quick look, but it is not suitable for continuous use in log monitoring. For the scenario here, I therefore use a simpler approach. Security researcher Ryan Sears (CaliDog) operates a proxy server that outputs a feed of newly reported log entries. He provides a matching Python library in his GitHub repository [2], which I used for the work described in the rest of this article.
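A minimal listener built on that library's callback interface looks roughly like the following sketch (it assumes the public certstream endpoint is reachable):

```python
# Minimal certstream listener: prints the domain names of every
# certificate that appears in the aggregated CT log feed.
import certstream

def on_message(message, context):
    if message["message_type"] != "certificate_update":
        return
    # All domain names covered by the newly logged certificate
    domains = message["data"]["leaf_cert"]["all_domains"]
    print(", ".join(domains))

certstream.listen_for_events(on_message, url="wss://certstream.calidog.io/")
```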
Confidential Subdomains at Risk
From the perspective of a domain admin, Certificate Transparency is a blessing. Google's market position makes it possible to implement such significant changes to security mechanisms on the Internet quickly. Monitoring, together with the continuous and audit-proof storage in the logs, increases the pressure on certificate authorities. At the same time, however, it also relieves them of a burden – for example, it protects them against accusations of issuing arbitrary certificates to governments, which could in turn use them for espionage or similar purposes.
In addition to the kind of monitoring that domain admins have in mind, attackers can of course also use the logs and scan the listed domains and subdomains for vulnerabilities. Especially in times like the 2020 pandemic, insecure test systems are quickly transitioned into production. As long as a domain is not linked anywhere and therefore assumed to be unknown, administrators can be lulled into a false sense of security.
Between 2am and 8am on March 19, at the beginning of the COVID-19 restrictions in Germany, I randomly sampled and recorded the Certificate Transparency logs, specifically looking for the video conferencing system Jitsi [3] with the use of just three keywords: jitsi, meet, and conference. All other domains were ignored at first. During this time, certificates for 555 different (sub)domain combinations containing these terms were found. As a test, I checked a selection of the Jitsi installations; I was able to use 40 percent of them for my own video conferences without further authentication.
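A filter along these lines – sketched below as a hypothetical reconstruction, not the exact script used for the measurement – is all it takes to collect matching (sub)domains from the feed:

```python
# Hypothetical reconstruction of the keyword filter: records every newly
# logged (sub)domain whose name contains one of the three search terms.
import certstream

KEYWORDS = ("jitsi", "meet", "conference")
seen = set()

def on_message(message, context):
    if message["message_type"] != "certificate_update":
        return
    for domain in message["data"]["leaf_cert"]["all_domains"]:
        if domain not in seen and any(k in domain.lower() for k in KEYWORDS):
            seen.add(domain)
            print(f"{len(seen):4d} {domain}")

certstream.listen_for_events(on_message, url="wss://certstream.calidog.io/")
```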
The potential damage in this setup might seem negligible, but in fact, resources on the host are consumed – memory, CPU time, network bandwidth – that might well result in additional costs, depending on the hosting package. Of course, an attacker could also deliberately disrupt the system (e.g., at about the same time as the weekly team meeting).