Complete Portfolio
Of course, IBM would not be IBM if it did not also have a comprehensive portfolio of solutions for the Linux sector under the Red Hat brand. The product flyer reads a bit like something out of a futuristic novel. In the future, it claims, AI will help nip attacks in the bud, automatically restore systems after malfunctions, facilitate scalability, and, as a side effect, drive innovation in areas where staff currently spend their time on boring everyday tasks.
True to its own DNA, Red Hat primarily relies on open source components such as Llama 2 (Large Language Model Meta AI), a machine learning model developed by Facebook parent company Meta, for which Prophet was in some ways a preparatory exercise. It comes with an entire toolchain for AI and machine learning, including an openly available API. Facebook obviously wants to be seen as one of the good guys here.
However, Red Hat doesn't want to bet all its money on one horse and is also touting Thoth as an alternative, flanked by Project Wisdom, which can already be connected to Red Hat's Ansible automation platform. This pairing is geared to handling administrative tasks with AI under the Ansible Lightspeed moniker. Red Hat is also a member of the AI Center of Excellence (AICoE), an industry consortium that promotes the use of AI and demonstrates it with practical proofs of concept. The Prometheus and Prophet example mentioned earlier, for instance, was implemented by Red Hat within the scope of the AICoE.
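To give an idea of what the Prometheus-and-Prophet pairing looks like in practice, the following Python sketch pulls a metric series from a Prometheus server and lets Prophet learn its regular pattern; observations that fall outside the forecast band can then be treated as anomalies. The server URL and the metric query are placeholders, and the code illustrates the general approach rather than the AICoE's actual implementation.

import time
import requests
import pandas as pd
from prophet import Prophet

PROM = "http://prometheus.example.com:9090"   # hypothetical Prometheus endpoint
QUERY = "sum(rate(http_requests_total[5m]))"  # example metric to watch

# Fetch one week of history at five-minute resolution
end = time.time()
start = end - 7 * 24 * 3600
resp = requests.get(f"{PROM}/api/v1/query_range",
                    params={"query": QUERY, "start": start,
                            "end": end, "step": "300"})
values = resp.json()["data"]["result"][0]["values"]

# Prophet expects a DataFrame with a 'ds' timestamp and a 'y' value column
df = pd.DataFrame(values, columns=["ds", "y"])
df["ds"] = pd.to_datetime(df["ds"].astype(float), unit="s")
df["y"] = df["y"].astype(float)

model = Prophet(interval_width=0.99)          # wide confidence band: fewer false alarms
model.fit(df)
forecast = model.predict(model.make_future_dataframe(periods=12, freq="5min"))

# Observed values outside [yhat_lower, yhat_upper] are candidate anomalies
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())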
One thing, then, is clear: Red Hat is throwing cash at the subject, and AIOps is a long-term project rather than a flash in the pan. That said, apart from a few successful proofs of concept, the Red Hatters have not yet delivered anything significant in terms of concrete results.
Coroot with AI Analysis
An observability tool out of the Coroot project (Figure 3) demonstrates how artificial intelligence can be used in practical applications now. Observability has many definitions, but most administrators understand the term to mean a combination of monitoring, alerting, trending, and log aggregation.
In contrast to other solutions, Coroot relies on the principle of zero instrumentation, which means administrators do not have to prepare their systems in any special way or install special software to use Coroot. Instead, the tool docks directly onto the Linux kernel by way of eBPF, a kernel facility that runs small, sandboxed programs in an in-kernel virtual machine to provide specific functions in the network stack and other kernel subsystems. Which functions these are is largely up to the admin's imagination.
Coroot relies on this principle to analyze and evaluate data streams at the kernel level. It filters out the relevant data according to the admin's specifications and sends it to a central Coroot instance, where all data streams converge. Particularly practical: Coroot comes with a number of preconfigured eBPF programs out of the box, which help it cover a massive zoo of services on the target systems.
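To make the zero-instrumentation idea more tangible, the following Python sketch uses the BCC toolkit to load a tiny eBPF program into the kernel that counts outgoing TCP connections per process, without installing anything in the monitored applications themselves. It illustrates the collection principle, not Coroot's own collector, and it needs root privileges and the BCC packages.

from time import sleep
from bcc import BPF

# Minimal eBPF program: count calls to tcp_v4_connect() per process ID
bpf_text = """
#include <uapi/linux/ptrace.h>

BPF_HASH(connect_count, u32);

int trace_connect(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    u64 zero = 0, *count;
    count = connect_count.lookup_or_try_init(&pid, &zero);
    if (count) {
        (*count)++;
    }
    return 0;
}
"""

b = BPF(text=bpf_text)
b.attach_kprobe(event="tcp_v4_connect", fn_name="trace_connect")

print("Counting outgoing TCP connections per PID, Ctrl+C to stop")
try:
    while True:
        sleep(5)
        for pid, count in b["connect_count"].items():
            print(f"PID {pid.value}: {count.value} connect() calls")
        b["connect_count"].clear()
except KeyboardInterrupt:
    pass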
Once the central Coroot instance has collected its observability data, the data is not just stored away safely somewhere on the local network; instead, Coroot uses machine learning models to detect anomalies in the data and identify their causes. According to the software's authors, this feature is a central aspect of Coroot development. The tool is also said to make it significantly easier to identify the causes of failures, including outages that occur as a result of DDoS attacks, for example.
The software's approach is not complicated. A service level objective (SLO), which can be defined as required, provides the underpinnings by specifying the valid parameters up front. Any deviation constitutes an anomaly and therefore an event that requires notification. Coroot then continuously feeds the machine learning algorithm defined in this way with live data from the setup (Figure 4). As training progresses, it adapts its alerts in an increasingly granular way to reflect the environment's local conditions.
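The principle can be illustrated with a few lines of Python: an assumed latency SLO defines the acceptable range, and an unsupervised model trained on live measurements decides whether a new value is genuinely unusual or merely brushes the limit. The threshold, the function names, and the choice of an Isolation Forest are illustrative assumptions, not Coroot internals.

import numpy as np
from sklearn.ensemble import IsolationForest

SLO_P99_LATENCY_MS = 300  # assumed service level objective

def train_baseline(history_ms: np.ndarray) -> IsolationForest:
    # Learn what normal latency looks like from past measurements
    model = IsolationForest(contamination=0.01, random_state=0)
    model.fit(history_ms.reshape(-1, 1))
    return model

def check(sample_ms: float, model: IsolationForest) -> str:
    slo_violated = sample_ms > SLO_P99_LATENCY_MS
    anomalous = model.predict([[sample_ms]])[0] == -1
    if slo_violated and anomalous:
        return "ALERT: SLO violation confirmed by the anomaly model"
    if anomalous:
        return "WARN: unusual latency, but still within the SLO"
    return "OK"

# Simulated history: latencies around 80 ms stand in for live measurements
history = np.random.gamma(shape=2.0, scale=40.0, size=10_000)
model = train_baseline(history)
print(check(550.0, model))  # far outside both the SLO and the learned baseline
print(check(120.0, model))  # within the SLO and probably normal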
A concrete strength of AI models is that they can be trained on site in the specific customer setup and can therefore respond to local characteristics in a targeted way. Clearly, when it comes to detecting anomalies in its services, a large service provider has requirements that differ from those of a medium-sized IT service provider, and mapping both environments with conventional, static rules would serve neither of them well. AI-supported machine learning models adapt to these subtleties without any further intervention by the admin (Figure 5).
The main drawback at present is that Coroot only targets popular container setups based on Kubernetes. Although the tool comes with everything you need and can be put into operation quickly, the AI functions, which are likely to be of particular interest to many companies, are currently only available in the hosted Coroot Cloud version, so the provider's claim that Coroot is 100 percent free software rings somewhat hollow. This is the infamous open core principle: the central engine is available under a free license, but the really interesting additional functions remain proprietary.
Improvement for the Worse?
Apart from the functions described for specific setups, AIOps can, given the right combination of input data and algorithm, generate new knowledge, which is something a trained spam filter cannot do. A legacy spam filter will not normally recognize a fresh spam mail that cleverly combines elements of previous approaches into something new. Machine learning is far smarter: it can use existing data from previous attacks to identify new attack patterns it was never explicitly trained on, and then generate alerts. This ability takes the sting out of the criticism raised by many admins that AIOps will not catch on because this kind of automation has regularly done more harm than good in the past.
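A small sketch shows why: a model trained only on examples of normal traffic can flag a previously unseen attack pattern, whereas a signature list only matches what it has already been told about. The feature set and the numbers below are invented for the example.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Per-client features under normal load: [requests/min, error ratio, avg payload KB]
normal_traffic = np.column_stack([
    rng.normal(12, 3, 500),
    rng.normal(0.01, 0.005, 500),
    rng.normal(4.0, 0.8, 500),
])

# One-class model: learns the shape of normal behavior, no attack samples needed
detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(normal_traffic)

# A flood of small, error-heavy requests: no known signature, but clearly abnormal
suspicious = np.array([[950.0, 0.85, 0.3]])
print(detector.predict(suspicious))  # -1 means it does not match learned normal behavior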
Classic automated reactions quite rightly meet with ridicule in production operations. However, AIOps with correctly constructed machine learning algorithms is vastly superior to legacy heuristics or plain command processing triggered by certain events. In the long term, these algorithms will become an indispensable everyday tool for administrators looking to maintain control over constantly growing setups, and they will gradually take over some of the individual tasks currently handled by administrators, just as automation is already doing.