Unlimited Advanced Hunting for Microsoft 365 Defender with Azure Data Explorer

Koos Goossens
15 min read · Mar 30, 2023

Introduction

More and more customers ask me what the options are to extend the retention in Microsoft 365 Defender beyond the default 30 days.

Data like incidents, alerts, and device event timelines remains available for 180 days. But in this particular case they're referring to the Advanced Hunting data, which is purged after 30 days. Beyond that point you can no longer use Kusto Query Language (KQL) to search for events in the "raw data". And for proactive hunting purposes, I agree with my customers: that's just too short.

In this article I'd like to demonstrate how you can leverage Azure Data Explorer (ADX) to archive data from Microsoft 365 Defender without having to put Microsoft Sentinel in between. Relaying this data through Sentinel is not preferred by most, due to the added ingestion costs that come along with it, which can be huge in some cases.

I'll also be providing a PowerShell script and ARM templates which will make the entire deployment very easy.

This article is split into multiple parts due to the variety of Microsoft products we'll be combining, and the various choices you can make along the way.

  • Part I | Introduction and automated deployment [📍you are here ]
    – Architectural overview
    – Configuring Microsoft 365 Defender
    – Preparations for automated deployment
    – Running the DefenderArchiveR PowerShell script for automated deployment
  • Part II | In-depth & design choices
    – Calculating Defender log size
    – Choosing the right Event Hub namespace(s)
    – Deciding on Azure Data Explorer tiers and options
    – ADX caching and compression

Credit where credit's due

First, Javier Soriano from Microsoft published a blog back in 2020 about archiving Sentinel data to Azure Data Explorer.

Later, Sreedhar Ande made the whole setup process very easy with the PowerShell script he released in 2021 to automate the whole setup and configuration process. Amazing effort!

But these solutions were based on the Log Analytics Data Export feature: streaming data to an Event Hub or Azure Blob Storage, and then extending it into ADX from there. And as Javier Soriano wrote in an update to his blog, this solution was largely superseded by the new archive tier added to Sentinel in 2022.

And while I agree that using a native feature like Sentinel Archive is a much easier solution with better integration into the product, I believe there are still good reasons to use Azure Data Explorer for certain logs and purposes. Especially when it comes to Microsoft 365 Defender logs and the desire to keep them longer than the 30 days available in the product by default.

But streaming these logs into Sentinel first is not very cost effective: you'll be billed for the ingestion into Sentinel before these logs ever end up in your archive. I've worked in environments where these logs easily add up to hundreds of gigabytes, or even more than a terabyte, per day!

So, I wanted to create a solution that leverages the native Streaming API feature from Defender, streams the data into an Event Hub and stores it in Azure Data Explorer from there.

Jeff Chin from Microsoft already wrote a blog about archiving data directly from Defender back in 2021. But a lot of new tables have been added in the meantime and schemas have been extended. So I needed to bring this up to date and also simplify the process, in the same way Sreedhar Ande did back in 2021 with his PowerShell script.

Architectural overview

Before we dive into scripts and code, let's first take a high-level look at what we're trying to achieve here.

High-level overview of solution

So, Defender can push raw events onto an Event Hub and Azure Data Explorer is able to pull messages from an Event Hub as well. Sounds simple right?!

Well, as always, it's a little bit more complicated than that. 😉

When we configure the Streaming API in Defender we'll notice that we only have five slots available for configuration.

Perhaps one or two slots are already taken because you've enabled the Microsoft 365 Defender data connector in Microsoft Sentinel. There doesn't seem to be a way to remove these from here. In my case I had enabled it on two workspaces in the past, and those two workspaces don't exist anymore! So be careful when cleaning up workspaces and disable your data connectors first!

Within each slot we can decide to push logs to Event Hub or Azure Storage.

Configuring the Streaming API within Microsoft 365 Defender

Besides an Event Hub Namespace Resource ID, we can provide the name of a specific Event Hub, or leave the latter empty so that Defender will automatically split up the logs into separate Event Hubs for you.

Event Hubs reside inside an Event Hub Namespace. And depending on which Namespace tier you choose, you're limited in the number of Event Hubs you can create and which throughput limitations apply. More on this in Part II of this article. (coming soon)

This means we'll end up with the same number of Event Hubs as we have tables in Microsoft 365 Defender. And most Event Hub Namespace tiers have a limit of only ten Event Hubs per Namespace. Fortunately Microsoft points this out with the mouseover tip:

Great tip! Thanks!

If we want to forward all logs from every table, we'll need twenty-one individual Event Hubs, which means we'll need three Event Hub Namespaces. And thus three of the five available Streaming API configuration slots to point to their respective Namespaces.

Overview of Azure resources in reality; multiple Event Hubs spanning multiple Event Hub Namespaces

Using one slot and configuring all tables to be output to one single Event Hub isn't ideal, because you'll most probably run into performance limitations on that single Event Hub. And because this solution is probably best suited for larger enterprises, I decided to split things up and spread out the load. More on the individual tiers and performance limitations in Part II of this article.

Azure Data Explorer

Besides Event Hub Namespaces and Event Hubs, we're also going to need an Azure Data Explorer (ADX) cluster. ADX natively supports ingesting data from Event Hubs, for which data connections need to be created.

But these data connections cannot parse or alter these event messages on-the-fly in any way. So they'll end up in a single column named records with a datatype of dynamic.

DeviceInfo sample in ADX ingestion wizard showing only one column
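To illustrate: querying such a raw table directly (using the DeviceInfoRaw naming we'll create later in this article) returns nothing but that single dynamic column, so it isn't very useful for hunting yet.

// Peek at the raw, unparsed event messages: everything sits in one dynamic 'records' column
DeviceInfoRaw
| take 5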

To solve this we need two ADX tables per Event Hub: one to capture the raw logs coming in from the Event Hub, and a second one where the data will be stored long term with the same schema as it had in Defender.

ADX data flow

Within ADX data will flow between two tables:

Example of 'DeviceInfo' being processed inside ADX. Click for larger view.
  1. A data connection is responsible for pulling new event messages in from the Event Hub.
  2. The "raw" table has a simple schema of just one column, matching the data that's coming in through the data connection.
  3. A second "destination" table, with the same name as the original, is the one our security analysts and threat hunters will be performing their magic on.
  4. An "expand" function will be created with a piece of KQL responsible for transforming the records into the desired results. By using mv-expand we can expand the original JSON data, but all expanded values will keep the dynamic datatype. That's why we need to add datatype conversion operators like todatetime(), tostring(), tolong(), tobool() and others to make sure the end result exactly matches the schema of the original table in Defender (see the short sketch after this list).
  5. An update policy is responsible for triggering the expand function once new data is ingested into the "raw" table, populating the "destination" table.
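To make that pattern concrete before we get to the full commands, here's a minimal sketch of such an expand function body, projecting just two columns (the complete version for DeviceInfo follows further below):

// Expand the dynamic 'records' column and convert each property back to its original datatype
DeviceInfoRaw
| mv-expand events = records
| project Timestamp = todatetime(events.properties.Timestamp),
          DeviceName = tostring(events.properties.DeviceName)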

To determine the schema of the original tables, and to insert the proper datatype operators in KQL, we'll be using some scripting to avoid any manual steps. More on this below.

Setting up ADX

Inside our ADX solution we need to configure and set up quite a few components to make sure everything works together nicely. And we need to repeat these steps for every table we want to ingest from Defender.

Before I show you how to do this, let's quickly sum up all of the steps we need to take after the ADX cluster is deployed:

  1. Create an ADX database. We only need one of these.
  2. Create a "raw" table within that database for each Defender table where the raw data will be ingested from the individual Event Hubs.
  3. Create a "mapping" within each "raw" table, which acts as a schema of sorts so that ADX knows what data resides inside that table.
  4. Create a "destination" table for each Defender table where the data is eventually stored and queried from by the users.
  5. These "destination" tables also need mappings, which need to match 1:1 with the original schemas used in the Advanced Hunting tables in Defender.
  6. For every table we'll need to create a function which will expand and transform the original records into their required datatype matching the schema/mapping of the original Advanced Hunting table.
  7. An update policy needs to be set for every "destination" table, calling the expansion function created earlier upon the ingestion of new records in the corresponding "raw" table.
  8. A data retention policy needs to be configured for each and every table, determining how long to keep the data.
  9. And lastly, we probably want to grant Azure Active Directory users or groups access to the database so they're able to query the data.

The database can be created from the UI or from an ARM template deployment, but the rest of the steps above need to be executed as Data Explorer commands from the query interface.
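The big example in the next section covers the tables, mappings, function and update policy; step 9 (granting query access) comes down to a single management command of its own. Here's a sketch, assuming a hypothetical database name and Azure AD group:

// Grant an Azure AD group viewer (read) access to the ADX database
// 'm365d' and the group address are placeholders for your own values
.add database m365d viewers ('aadgroup=soc-analysts@contoso.com;contoso.com') 'SOC analysts'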

Data Explorer commands

Here's an example of the commands we need to execute for setting up the DeviceInfo tables, mappings, functions and update policy:

.create table DeviceInfoRaw (records:dynamic)

.create-or-alter table DeviceInfoRaw ingestion json mapping 'DeviceInfoRawMapping' '[{"Column":"records","Properties":{"path":"$.records"}}]'

.alter-merge table DeviceInfoRaw policy retention softdelete = 1d

.create table DeviceInfo (Timestamp:datetime,DeviceId:string,DeviceName:string,ClientVersion:string,PublicIP:string,OSArchitecture:string,OSPlatform:string,OSBuild:long,IsAzureADJoined:bool,JoinType:string,AadDeviceId:string,LoggedOnUsers:string,RegistryDeviceTag:string,OSVersion:string,MachineGroup:string,ReportId:long,OnboardingStatus:string,AdditionalFields:string,DeviceCategory:string,DeviceType:string,DeviceSubtype:string,Model:string,Vendor:string,OSDistribution:string,OSVersionInfo:string,MergedDeviceIds:string,MergedToDeviceId:string,SensorHealthState:string,IsExcluded:bool,ExclusionReason:string,ExposureLevel:string,AssetValue:string)

.alter-merge table DeviceInfo policy retention softdelete = 365d recoverability = enabled

.create-or-alter function DeviceInfoExpand {DeviceInfoRaw | mv-expand events = records | project Timestamp = todatetime(events.properties.Timestamp),DeviceId = tostring(events.properties.DeviceId),DeviceName = tostring(events.properties.DeviceName),ClientVersion = tostring(events.properties.ClientVersion),PublicIP = tostring(events.properties.PublicIP),OSArchitecture = tostring(events.properties.OSArchitecture),OSPlatform = tostring(events.properties.OSPlatform),OSBuild = tolong(events.properties.OSBuild),IsAzureADJoined = tobool(events.properties.IsAzureADJoined),JoinType = tostring(events.properties.JoinType),AadDeviceId = tostring(events.properties.AadDeviceId),LoggedOnUsers = tostring(events.properties.LoggedOnUsers),RegistryDeviceTag = tostring(events.properties.RegistryDeviceTag),OSVersion = tostring(events.properties.OSVersion),MachineGroup = tostring(events.properties.MachineGroup),ReportId = tolong(events.properties.ReportId),OnboardingStatus = tostring(events.properties.OnboardingStatus),AdditionalFields = tostring(events.properties.AdditionalFields),DeviceCategory = tostring(events.properties.DeviceCategory),DeviceType = tostring(events.properties.DeviceType),DeviceSubtype = tostring(events.properties.DeviceSubtype),Model = tostring(events.properties.Model),Vendor = tostring(events.properties.Vendor),OSDistribution = tostring(events.properties.OSDistribution),OSVersionInfo = tostring(events.properties.OSVersionInfo),MergedDeviceIds = tostring(events.properties.MergedDeviceIds),MergedToDeviceId = tostring(events.properties.MergedToDeviceId),SensorHealthState = tostring(events.properties.SensorHealthState),IsExcluded = tobool(events.properties.IsExcluded),ExclusionReason = tostring(events.properties.ExclusionReason),ExposureLevel = tostring(events.properties.ExposureLevel),AssetValue = tostring(events.properties.AssetValue) }

.alter table DeviceInfo policy update @'[{"Source": "DeviceInfoRaw", "Query": "DeviceInfoExpand()", "IsEnabled": "True", "IsTransactional": true}]'
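Once these commands have run, a quick way to double-check that the expand function produces the intended schema is the getschema operator; compare its output with the original table in Defender:

// Inspect the output schema of the expand function
DeviceInfoExpand()
| getschema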

Note that to construct both the destination table mapping and the expand function, you'll need to know what the original table schema is. This is where the DefenderArchiveR PowerShell script comes in…

Data retention

As you can see from the example above, the data retention for the raw log tables can be as short as one day. Once data flows in, the update policy triggers the expand function and saves the data into the destination table; after that, the raw records are no longer needed. For the destination tables it's up to you: ADX supports keeping data for up to 100 years!

Depending on the amount of data you'll be ingesting from Defender, and the setup you choose for your Event Hubs and their retention, it might be safer to set the softdelete policy on the raw tables to at least a few days. That way, if the update policy isn't working correctly for some reason, you won't be losing data.
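For example, keeping the raw records around for three days instead of one only requires a different softdelete value (pick whatever number of days suits your setup):

// Keep raw records a few days as a safety net in case the update policy fails
.alter-merge table DeviceInfoRaw policy retention softdelete = 3d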

Roll out!

Ok, enough reading! It's time to roll out some deployments!

DefenderArchiveR.ps1

This PowerShell script helps you out by deploying all the necessary resources and setting everything up for you, fully automated. The only things you need to do are make sure you meet all prerequisites, and configure the Streaming API in Defender once it's finished.

Inside my repository you'll find the following files:

  • DefenderArchiveR.ps1 | PowerShell script for automated deployment
  • dataexplorer.template.json | ARM template for deploying ADX
  • eventhub.template.json | ARM template for deploying Event Hub(s)
  • workspacefunction.template.json | template for deploying (Sentinel) workspace functions

Visit my GitHub repository and start cloning!

Prerequisites

Before we can run the script we need to meet a couple of prerequisites:

  • Make sure all hard-coded variables inside the script match the needs of your environment.
  • Create an App Registration which is used to query the Microsoft Graph API to collect the schema of each of the tables in Defender. This application needs the ThreatHunting.Read.All permission on Microsoft Graph. Make sure to grant admin consent, and assign a secret with a very short lifespan since we'll only need to do this once. (more on this below)
  • The Azure subscription requires two resource providers to be registered: Microsoft.EventHub and Microsoft.Kusto. The script will check the status of these, but will not register them for you.
  • The user running DefenderArchiveR.ps1 needs to have either the Owner role, or both the Contributor and User Access Administrator roles, on the Azure subscription. This is needed to deploy the Azure resources, but also to make sure the ADX system-assigned Managed Identity gets the required permissions on the Event Hub(s).
Example of an App Registration with the minimal required permissions to gather the table schemas

Parameters

DefenderArchiveR’s behavior can be modified with some parameters:

  • tenantId
    The Tenant ID of the Azure Active Directory in which the app registration and Azure subscription resides.
  • appId
    The App ID of the application used to query Microsoft Graph to retrieve Defender table schemas.
  • appSecret
    An active secret for the App Registration to query Microsoft Graph to retrieve Defender table schemas.
  • subscriptionId
    Azure Subscription ID in which the archive resources should be deployed.
  • resourceGroupName
    Name of the Resource Group in which archive resources should be deployed.
  • m365defenderTables
    Comma-separated list of tables you want to setup an archive for. Keep in mind to use proper "PascalCase" for table names! If this parameter is not provided, the script will use all tables supported by Streaming API, and will setup archival on all of them.
  • outputAdxScript
    Used for debugging purposes so that the script will output the ADX script on screen before it gets passed into the deployments.
  • saveAdxScript
    Use the -saveAdxScript switch to write the content of $adxScript to an ‘adxScript.kusto’ file. The file can be re-used with the -useAdxScriptFile parameter.
  • useAdxScriptFile
    Provide the path to an existing ‘adxScript.kusto’ file created by the -saveAdxScript parameter.
  • skipPreReqChecks
    Skip Azure subscription checks like checking enabled resource providers and current permissions. Useful when using this script in a pipeline where you’re already sure of these prerequisites.
  • noDeploy
    Used for debugging purposes so that the actual Azure deployment steps are skipped.
  • deploySentinelFunctions
    Use the -deploySentinelFunctions switch to add an optional step to the deployment process in which (Sentinel) workspace functions (savedSearches) are deployed, so that you can query ADX from the Log Analytics / Sentinel UI. (more on this below)

Example with a single table

Let's say we only want to archive the DeviceInfo table from Defender. We can run DefenderArchiveR as follows:

./DefenderArchiveR.ps1 `
-tenantId '<tenantId>' `
-appId '<application(client)Id>' `
-appSecret '<applicationSecret>' `
-subscriptionId '<subscriptionId>' `
-resourceGroupName '<resourceGroupName>' `
-m365defenderTables 'DeviceInfo' `
-deploySentinelFunctions `
-saveAdxScript

  1. Since only the DeviceInfo table was provided, that's the only schema it will retrieve via the Microsoft Graph API. During this step a variable named $adxScript will be populated with all the ADX commands required for setting up the tables, mapping, expand function and policy. This will be used in a later step when setting up ADX. And because we used the -saveAdxScript parameter, this variable is now also saved to a file named adxScript.kusto for reuse in incremental redeployments. (see next example)
  2. A browser pop-up will ask the user to sign in. If the user was already signed in by running Connect-AzAccount, this will be skipped. After signing in to Azure, it will check whether the current user has the appropriate permissions…
  3. …and if the subscription has the required resource providers registered.
  4. The script will "calculate" how many Event Hub Namespaces will be required for deployment. Remember that we can only have ten Event Hubs per Event Hub Namespace. In this case only one is required.
  5. The Event Hub Namespace will be deployed including a single Event Hub for DeviceInfo event messages to land in.
  6. The Resource ID of the Event Hub Namespace will be displayed. We'll need this at the end when configuring Streaming API in Defender.
  7. The Azure Data Explorer (ADX) cluster will be deployed including a single database. The $adxScript variable will be used as part of the deployment to make sure all the required ADX commands are executed.
  8. And for every table provided, it will create a data connection for event message retrieval from the Event Hub.
  9. The system-assigned Managed Identity of the ADX cluster will be assigned the "Azure Event Hubs Data Receiver" role on the resource group. This is required for the data connections retrieving the Event Hub messages to work.
  10. This step is optional and will deploy KQL functions inside a (Sentinel) workspace so that you're able to query the ADX data from within the workspace UI, instead of having to go to the ADX query interface.

Example with selection of all tables

Now let's look at an example where we want to archive all tables. For the sake of this example, let's say this is the second time we run the script, and we have already saved the adxScript to a file in a previous run. (as demonstrated above)

./DefenderArchiveR.ps1 `
-tenantId '<tenantId>' `
-appId '<application(client)Id>' `
-appSecret '<applicationSecret>' `
-subscriptionId '<subscriptionId>' `
-resourceGroupName '<resourceGroupName>' `
-useAdxScriptFile 'adxScript.kusto'

An existing 'adxScript.kusto' file, containing all the ADX commands, is reused, thus skipping the schema retrieval of the tables.
Note that there are now three Namespaces required to host twenty-one Event Hubs
The deployment of ADX is also repeated three times to make sure all data connections are in place

Workspace functions

As mentioned above, an optional step can also deploy workspace functions to the desired workspace. This makes it possible to query your archive in ADX straight from Sentinel!

Querying ADX from Sentinel is awesome! But unfortunately you cannot use the table name as a function name
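Under the hood, such a workspace function is simply a saved search wrapping a cross-service adx() query. A sketch of what one of these functions might contain, with placeholders for the cluster URI and database name (and a hypothetical function name, since the table name itself can't be used):

// Hypothetical workspace function, e.g. saved as 'DeviceInfoArchive'
adx('https://<clusterName>.<region>.kusto.windows.net/<databaseName>').DeviceInfo
| where Timestamp > ago(90d)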

Configure Streaming API in Defender

Once DefenderArchiveR has run successfully, the only thing left to do is configure the Streaming API within Defender:

  • Make sure you've activated the Global Administrator role
  • Visit https://security.microsoft.com and go to Settings > Microsoft 365 Defender > Streaming API and click "Add"
Configuring the Streaming API within Microsoft 365 Defender
  • Provide a suitable name and the Resource ID of the Event Hub Namespace you deployed earlier.

Make sure to leave the "Event Hub name" field empty!

  • Select the tables you'll be sending to that specific Namespace. The first ten tables go into the first Namespace, and so on. As a reference you can also peek at DefenderArchiveR’s output to see which table goes where.
  • Repeat this step up to three times, depending on the number of tables you're forwarding and the number of Namespaces you deployed to facilitate them.

You're all set!

Please allow up to 30 minutes before logs start flowing into your Event Hubs and on into Azure Data Explorer.
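A quick way to check whether events are actually arriving is to count the recent records in one of the destination tables, for example:

// Verify data is flowing in by counting events from the last hour
DeviceInfo
| where Timestamp > ago(1h)
| count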

Once your data is ingested in Azure Data Explorer, you can enjoy endless data retention and endless KQL queries, because your results are no longer bound by the query limits that apply to Sentinel / Defender. ❤️
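For example, a hunting query that would normally be capped at 30 days of Advanced Hunting data can now simply look back as far as your retention policy allows:

// Look back well beyond Defender's 30-day Advanced Hunting window
DeviceInfo
| where Timestamp > ago(180d)
| summarize arg_max(Timestamp, *) by DeviceId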

You'll probably notice that the results come back a bit slower than you might be used to. This has to do with the ADX cluster tier, how many instances are available and the compute size of those instances. It's also possible to cache a certain amount of data for better performance. But the slower performance might not be a huge problem: if you use the data periodically (for proactive threat hunting, forensic investigations and/or to simply meet compliance requirements) it might be sufficient.
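Caching is controlled through a caching policy per table (or per database); as a sketch, pinning the most recent 30 days of DeviceInfo into the hot cache could look like this:

// Keep the most recent 30 days of DeviceInfo in hot cache for faster queries
.alter table DeviceInfo policy caching hot = 30d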

All of these choices, and more like data compression and cost benefits, are detailed in Part II of this article.

If you have any follow-up questions, don't hesitate to reach out to me. Also follow me here on Medium, or keep an eye on my Twitter and LinkedIn feeds, to get notified about new articles.

I still wouldn’t call myself an expert on PowerShell. So if you have feedback on any of my approaches above, please let me know! Also never hesitate to fork my repository and submit a pull request. They always make me smile because I learn from them and it will help out others using these tools. 👌🏻

I hope you like this tool and it will make your environment safer as well!

If you have any follow-up questions, please reach out to me!

— Koos

