Consume Azure logs in a 3rd party system

Direct access to your production environment is generally a bad practice. But you do want to monitor it and diagnose it when an error occurs. Logs are used for exactly that: to know what is going on and what has happened in the past. Nowadays almost every component of your system can be logged, usually just written to a local disk on the host. With a single system you can get by with manually inspecting a few log files, but as the number of components grows and the correlation between components in the logs becomes harder to track, you are going to want a tool that helps you with hundreds of logs scattered over hundreds of hosts. A common approach to this problem is to create a central logging location, where a tool can index, aggregate, and correlate the different logs and give you insight into the behavior of your systems. Although Azure has some great monitoring solutions of its own, it might not be the logging location of your choice. This blog describes an approach to flow the logging data into a third-party system.

There are many reasons you might consider enabling third-party logging, e.g. when you:

  • already have a central log location and want to add the Azure logging to it
  • want to retain your logs on a different system than Azure
  • want to correlate with data not native to Azure
  • need to retain your logs longer than the retention period in Azure

Native log options

The Azure monitoring logs are divided into three components: the activity logs on the subscription level, the diagnostic logs on the provisioned resource level, and the metrics on both. Natively, these components can publish all their events to an Event Hub, a resource that collects events in a fan-in pattern and scales easily. Application Insights has an option to enable "continuous export" to a storage account, which is currently the only way to export its logs. All events are sent as raw data; filtering, correlation, and aggregation need to be done in the monitoring application.
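
To give an idea of what hooking up a single resource looks like, here is a minimal Azure CLI sketch; the resource id, event hub name, log category, and authorization rule below are placeholders I assume for illustration, not values from this setup.

# Stream the diagnostic logs of one resource to an event hub in the central namespace.
az monitor diagnostic-settings create \
  --name central-log-feed \
  --resource "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>" \
  --event-hub insights-logs \
  --event-hub-rule "/subscriptions/<subscription-id>/resourceGroups/rg-centrallog-westeurope/providers/Microsoft.EventHub/namespaces/ehn-centrallog-westeurope/authorizationrules/RootManageSharedAccessKey" \
  --logs '[{"category": "AppServiceHTTPLogs", "enabled": true}]'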

Decoupling

When choosing a tool you can integrate directly, creating a hard coupling between the two systems. The downside of this approach is that you become locked in by the choices you make, unable to decouple your systems or make changes to one without impacting the other. The key benefit of creating an extra abstraction layer is that the production system pushes and the monitoring system pulls. This gives each system a clear responsibility that can be granted with the least amount of privileges, together with the ability to rotate security keys on a regular basis or when they are compromised. Streaming your diagnostic data to a single event hub enables you to pipe log data to a third-party SIEM or log analytics tool. The key vault is added to provide a security layer with the possibility to rotate keys when needed.
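
As a sketch of that pull side, the consumer fetches the event hub connection string from the key vault instead of keeping it in its own configuration, and the key can be rotated independently. The secret name eventhub-connection-string is an assumption for illustration; use whatever name the secrets are stored under.

# The consumer pulls the connection string from Key Vault at startup, so a key
# rotation never requires touching the consumer's own configuration.
az keyvault secret show \
  --vault-name kv-centrallog-westeurope \
  --name eventhub-connection-string \
  --query value -o tsv

# Rotate the namespace key on a schedule or when compromised, then store the
# new connection string back into the same secret.
az eventhubs namespace authorization-rule keys renew \
  --resource-group rg-centrallog-westeurope \
  --namespace-name ehn-centrallog-westeurope \
  --name RootManageSharedAccessKey \
  --key PrimaryKey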

Caveats

In every solution there are some things to be aware of. In this one:

  • the Event Hub namespace needs to be in the same region as the resources that you want to send events from.
  • the partition count of an Event Hub cannot be changed after creation, and the required count depends on the consuming system. For instance, the Azure Monitor Add-on for Splunk needs a partition count of 4 to work.
  • although you can send your metric events to the event hub, the better alternative is to query them directly from the REST API (see the sketch after this list).
  • the provided code below doesn't tune down any privileges; you need to set up the Shared Access Policies and Shared Access Signatures yourself.
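
For the metrics, a minimal sketch of that direct REST API query looks like this; the resource id and metric name are illustrative assumptions.

# Get a bearer token and query the Azure Monitor metrics API for one resource;
# without a timespan parameter it returns the last hour of data.
TOKEN=$(az account get-access-token --query accessToken -o tsv)
RESOURCE_ID="/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app-name>"

curl -s -H "Authorization: Bearer $TOKEN" \
  "https://management.azure.com${RESOURCE_ID}/providers/Microsoft.Insights/metrics?metricnames=Requests&api-version=2018-01-01"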

Do it yourself

To try this out you can use my GitHub repo centralized-monitoring-feed. It will create the infrastructure in your subscription and give you the necessary variables to orchestrate your consuming application. The script creates the following:

  • Event Hub namespace
  • Key vault
  • Storage account
  • Service principal
  • All secrets stored in the key vault

configure-feed.ps1 -SubscriptionId xxxx-xx-xx-xx-xxxx -ResourceGroupName rg-centrallog-westeurope -ResourceGroupLocation westeurope -EventhubNamespaceName ehn-centrallog-westeurope -KeyvaultName kv-centrallog-westeurope -StorageAccountName sacentrallog1234
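
When the script is done, a quick way to check what it created is to list the contents of the resource group, using the same names as the command above:

# List the resources created by the script.
az resource list --resource-group rg-centrallog-westeurope --output table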

Now you can enable your existing Azure resources to send their events to this solution. To enable the activity logs, go to Azure Monitor => Activity Log => export to the event hub. Don't forget to click Save when it is set up.

This will create an event hub named insights-operational-logs in your event hub namespace with a partition count of 4; you cannot change this. Whenever you create or delete a resource, an event is sent to this event hub. You can see this in the incoming bytes of the throughput; the outgoing bytes will stay at zero because there is no consuming application on the other side of the event hub yet.
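
If you prefer scripting over the portal, the same activity log export can be configured with a log profile. The command below is a sketch; the categories, locations, and authorization rule id are assumptions to adapt to your environment.

# Route the Activity Log (Write/Delete/Action categories) for the listed regions
# to the Event Hub namespace created earlier; --enabled and --days only control
# the storage retention policy, which is not used here.
az monitor log-profiles create \
  --name default \
  --location westeurope \
  --locations westeurope global \
  --categories Write Delete Action \
  --enabled false \
  --days 0 \
  --service-bus-rule-id "/subscriptions/<subscription-id>/resourceGroups/rg-centrallog-westeurope/providers/Microsoft.EventHub/namespaces/ehn-centrallog-westeurope/authorizationrules/RootManageSharedAccessKey"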

Consuming the events

There are many platforms that you can use to consume the event hub and scrape the storage account. One often used is Splunk: with the Azure Monitor Add-on for Splunk you can consume the activity logs and diagnostic logs through the event hub. The metrics are queried directly from the API, and you can control which data is collected by using tags on the resources. In addition, use the Splunk Add-on for Microsoft Cloud Services to consume the continuous export from Application Insights.

To give it a try, just spin up a Docker container:

docker run -d -p 8000:8000 -e "SPLUNK_START_ARGS=--accept-license" -e "SPLUNK_PASSWORD=<password>" --name splunk splunk/splunk:latest

Log in to the machine and install the required dependencies:

# Elevate to root user
sudo -i

# Download script to setup Python dependencies
curl -O https://raw.githubusercontent.com/Microsoft/AzureMonitorAddonForSplunk/master/packages/am_depends_ubuntu.sh

# Set the execution attribute on the downloaded script
chmod +x ./am_depends_ubuntu.sh

# Run the script
./am_depends_ubuntu.sh

# Download and run the Node.js 6.x repository setup script
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -

# Install Node.js
apt-get install nodejs

# Install the Node modules in the add-on's app folder.
cd /opt/splunk/etc/apps/TA-Azure_Monitor/bin/app
npm install

# Return back to a non-root user
exit

Next, install the app and set up the inputs using the settings obtained by running the PowerShell script.

This results in data that can be used on your dashboards.

Conclusion

This is just an example of how you can configure your Azure environment to forward all activities and logs to an externally hosted system. Splunk is used as an example; the Azure setup can be used with multiple systems and is a very scalable way to integrate your Azure environment. Remember to review the security and productize this solution to match your organizational demands.
