Migrating to Azure Monitor Agent
Microsoft announced the general availability of the Azure Monitor Agent back in 2021 after the consolidation of existing monitoring solutions for the Azure cloud environment. Companies that still use the legacy Log Analytics Agent need to think about making the change in good time because it will reach its end of support in 2024. In this article, I look at how the previous agent and the new kid on the block differ and what you need to do to make the switch.
Azure Monitor sees itself as a hub for bringing together a wide variety of monitoring signals, as a data repository for metrics, as a collection point for analytics services of various kinds, and as a visualization tool. Thanks to its smart alerting function, the tool also acts as an automation center, bringing various Azure services together under the umbrella of a common interface.
One of these services is Log Analytics – the Azure-style evolution of the Operations Management Suite. In the Azure universe, Log Analytics is the tool that processes log queries against the data acquired by Azure Monitor. It supports interactive data analysis and forms the basis for numerous other Azure features and services, such as Microsoft Sentinel, Azure Automation Update Management, and Azure Automation Desired State Configuration.
Data Preparation with Log Analytics
When admins discuss log analytics, talk often turns to metrics and logs. Metrics are numeric, usually unidimensional values that describe an aspect of a system at a particular point in time. In contrast, logs contain different types of data organized as records with different properties for each type. Sending and storing metrics is an inherent part of virtually any Azure resource. The data can always be collected, stored, and analyzed directly in Azure Monitor with the Metrics Explorer for platform as a service (PaaS) and – with a couple of restrictions – for infrastructure as a service (IaaS), without having to install agents.
Azure stores platform- and user-defined metrics for up to 93 days in a time series database optimized for analyzing timestamped data. Companies typically pay nothing to store and view data with Azure Monitor Metrics. Only if you want to keep metrics longer than the provider intends, or link alerts to metrics, does the cost meter start to tick. As an automation hub, Azure Monitor lets you set up custom alerts for metrics (and for logs, including the activity logs managed by Azure Resource Manager). However, to identify longer-term trends, you also need to send platform metrics to a Log Analytics workspace with an agent.
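To illustrate how little plumbing the metrics side needs, the following minimal sketch reads a platform metric of a VM straight from Azure Monitor Metrics, without any agent. It assumes the azure-monitor-query and azure-identity Python packages (my choice of tooling, not something the article prescribes); the resource ID is a placeholder you have to replace.

# Minimal sketch: read a platform metric directly from Azure Monitor Metrics,
# no agent required. Assumes the azure-monitor-query and azure-identity
# packages; the resource ID is a placeholder.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

result = client.query_resource(
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers"
    "/Microsoft.Compute/virtualMachines/<vm-name>",   # placeholder resource ID
    metric_names=["Percentage CPU"],                  # built-in platform metric
    timespan=timedelta(days=1),
    granularity=timedelta(hours=1),
)

for metric in result.metrics:
    for series in metric.timeseries:
        for point in series.data:
            print(point.timestamp, point.average)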
For IaaS – for example, Azure virtual machines (VMs) – you can optionally collect performance indicators from the guest system with an agent referred to as the Azure Diagnostics extension. When you do so, you can automatically deliver the data to Azure Monitor's default metrics data sink – Azure Monitor Metrics. Another option is to edit the configuration and route the information through the new Azure Monitor Agent by means of data collection rules (DCRs), which offer greater flexibility in terms of configuring the metrics you want to collect. Again, the retention period is 93 days. The diagnostics extension always requires a storage account as its target location, and Microsoft only guarantees to retain metrics in an associated storage account for 14 days.
Although the data sink for metrics is basically included in Azure Monitor, Log Analytics requires a workspace as the data sink for log data. Unlike the data sink for metrics, this workspace is not included in Azure Monitor by default; instead, it is a standalone first-class Azure resource that lets you interactively create, run, and save log queries against the logs captured by an agent. Log Analytics queries let you, for example, retrieve records that meet certain criteria, identify trends, analyze patterns, and gain various insights from your data.
In a way, the Log Analytics workspace defines the database schema according to which the log data is structured for storing these records. At the same time, Log Analytics supports the powerful Kusto Query Language (KQL), which is also used in other Azure services, such as the Azure Data Explorer analytics service and the Microsoft Sentinel security information and event management (SIEM) tool. Microsoft ultimately charges for this service by the volume of data stored in a workspace – which can be quite large because of the numerous sources supported.
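To give an idea of what such a query looks like in practice, here is a short sketch that runs a KQL statement against a workspace, again with the azure-monitor-query Python SDK (my tooling choice, not mandated by the article). The workspace ID is a placeholder, and the Event table only contains data once an agent actually forwards Windows event logs to the workspace.

# Sketch: run a KQL query against a Log Analytics workspace. Assumes the
# azure-monitor-query and azure-identity packages; replace the workspace ID.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# KQL: count error events per computer over the last 24 hours
kql = """
Event
| where EventLevelName == "Error"
| summarize Errors = count() by Computer
| order by Errors desc
"""

response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(days=1),
)

for table in response.tables:
    for row in table.rows:
        print(row)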
Out of Control Costs
If you do not configure Log Analytics and instead use the default settings, a retention period of 31 days applies. Each Log Analytics workspace is billed on the basis of two factors: data ingestion (i.e., the volume of data you take in, which is charged almost immediately as soon as you send it to Log Analytics) and the retention period, which is billed by time and volume of data. Only the first 30 days are free of charge, or 90 days if Microsoft Sentinel is enabled or in the context of Application Insights. Pricing is set by region [1], but if you want to store the data for a longer period of time, the costs are considerable – currently EUR0.13/GB per month in Germany or $0.10-$0.13/GB per month in the US.
These data costs are particularly problematic if you no longer need the data but have to retain it (e.g., for legal reasons or because of customer requirements). In this case, retention (i.e., keeping the logs online) is the significantly more worrying cost component, although the worries can be alleviated by archiving the log data – a capability that has been available for some time now. Azure then differentiates between a plain vanilla retention phase in the log data life cycle, during which you can easily query the data with KQL, and a subsequent archive phase. For the former, the maximum possible retention time is two years; the archive phase can last up to five more years.
The Azure bill for archiving is significantly lower for storage (EUR0.024/GB per month; $0.025/GB per month). Of course, you then have to pay to retrieve the data on demand, either with a search job (EUR0.007/GB or $0.007/GB of information scanned) or a restore (EUR0.128/GB per day; $0.123/GB per day), whereas queries incur no extra charge during the regular retention phase. In total, a maximum retention period of up to seven years can be configured.
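To put the figures into perspective, a quick back-of-the-envelope calculation with the German prices quoted above; the 500GB data volume and the two-day restore window are made-up assumptions purely for illustration.

# Rough cost comparison using the per-GB prices quoted in the text (Germany, EUR).
# The 500 GB volume and the two-day restore window are hypothetical figures.
retained_gb = 500

online_retention = retained_gb * 0.13        # EUR/month kept queryable online
archive_storage  = retained_gb * 0.024       # EUR/month in the archive tier
archive_search   = retained_gb * 0.007       # EUR per search job scanning it all
archive_restore  = retained_gb * 0.128 * 2   # EUR for a two-day restore window

print(f"Online retention: {online_retention:7.2f} EUR/month")   #  65.00
print(f"Archive storage:  {archive_storage:7.2f} EUR/month")    #  12.00
print(f"One full search:  {archive_search:7.2f} EUR")           #   3.50
print(f"Two-day restore:  {archive_restore:7.2f} EUR")          # 128.00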
Selecting a Suitable Agent
The volume of data ingested can be reduced by configuring the agent in a targeted way (e.g., by deciding which data you want to collect). Although the legacy Log Analytics Agent only supports a single configuration for the entire agent, which then affects all connected VMs equally, the Azure Monitor Agent setup is far more granular: You can fine-tune it with data collection rules (DCRs) and provide a different configuration for individual VMs.
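To illustrate that granularity, the following sketch shows the rough shape of a DCR body as you would deploy it with an ARM or Bicep template or the REST API, written down here as a Python dictionary purely for readability. The counter names, the event log filter, and the workspace resource ID are placeholder assumptions, not values from the article.

# Rough shape of a data collection rule (DCR) body; counter names, the XPath
# filter, and the workspace resource ID are placeholders.
dcr_body = {
    "location": "westeurope",
    "properties": {
        "dataSources": {
            "performanceCounters": [{
                "name": "cpuAndMemory",
                "streams": ["Microsoft-Perf"],
                "samplingFrequencyInSeconds": 60,
                "counterSpecifiers": [
                    "\\Processor(_Total)\\% Processor Time",
                    "\\Memory\\Available MBytes",
                ],
            }],
            "windowsEventLogs": [{
                "name": "systemErrors",
                "streams": ["Microsoft-Event"],
                # collect only critical and error entries from the System log
                "xPathQueries": ["System!*[System[(Level=1 or Level=2)]]"],
            }],
        },
        "destinations": {
            "logAnalytics": [{
                "name": "laDestination",
                "workspaceResourceId": "<workspace-resource-id>",  # placeholder
            }],
        },
        "dataFlows": [{
            "streams": ["Microsoft-Perf", "Microsoft-Event"],
            "destinations": ["laDestination"],
        }],
    },
}

Because DCRs are standalone resources, you can associate different rules with different VMs – exactly the per-machine granularity the legacy agent lacks.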
Security-related information and event logs are collected with either the legacy Log Analytics Agent or the Azure Monitor Agent. One way to roll out either agent is with Microsoft Defender for Cloud, which automates the deployment to existing and future resources. In the context of Defender for Cloud, Microsoft still configures the legacy agent by default (Figure 1).
Interestingly, this is not the case when you configure Log Analytics from the perspective of a specific data source. To enable Log Analytics, you press Enable for the selected VM in the Monitoring | Logs section and then set up the configuration for one of the two supported agents. Here, the new Azure Monitor Agent is recommended, and the legacy Log Analytics Agent is optional.
Alternatively, you could first deploy the workspace and then connect the desired VMs with Virtual machines | Workspace Data Sources; again, this uses the legacy agent. The VM and the workspace can even be located in different regions.
The Azure Monitor Agent replaces the Log Analytics Agent for logging and also provides diagnostic data from Windows client operating systems; it can be deployed not only on Azure VMs, but also on Azure Arc-enabled servers and on-premises machines. The agent collects event logs, performance data, file-based logs, and IIS logs, sending them to Log Analytics – and to Azure Monitor Metrics in the future – as well as to a storage account and an event hub.
According to Microsoft, the future integration options will be the same as for Log Analytics (i.e., Microsoft Defender for Cloud, Microsoft Sentinel, Azure Update Management, and VM Insights); however, all integration options are currently still classified as previews. Change Tracking is currently only supported by the legacy Log Analytics Agent. Neither the classic nor the new agent captures Event Tracing for Windows (ETW) events, .NET application logs, crash dumps, or agent diagnostic logs; for these, you still need the diagnostics extension for Windows (WAD) or Linux (LAD) on the (Azure) VM in question. Linux VMs also support the Telegraf agent.
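If you want to know which machines still report through the legacy agent during the migration, the Heartbeat table is a practical starting point. The following sketch again uses the azure-monitor-query SDK; the workspace ID is a placeholder, and the Category values named in the comment are the ones the agents commonly report, so treat them as an assumption to verify in your own workspace.

# Sketch: list which agent each machine last reported with, based on the
# Heartbeat table. Category typically reads "Azure Monitor Agent" for the
# new agent and "Direct Agent" for the legacy Log Analytics Agent.
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

kql = """
Heartbeat
| summarize arg_max(TimeGenerated, Category, Version) by Computer
| project Computer, Category, Version, TimeGenerated
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(
    workspace_id="<log-analytics-workspace-id>",  # placeholder
    query=kql,
    timespan=timedelta(days=7),
)

for table in response.tables:
    for row in table.rows:
        print(row)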