Migrating to Unified SecOps and the Sentinel Data Lake
The Microsoft unified SecOps platform consolidates SIEM, XDR, and exposure management. Here's the migration playbook — step by step, with validation at every stage.
Articles 4a and 4b made the architectural case: analytics-first SIEM configuration with default retention doesn't hold up against modern attacks, and the data lake tier is the solution — not just for cost control, but for forensic depth, behavioral baseline building, and AI agent context.
This article is the migration playbook. How to move an existing Sentinel deployment to the data lake-aware architecture and the unified SecOps portal without disrupting live detection coverage.
There are two migration tracks depending on where you're starting:
- Track A: You're running Microsoft Sentinel in the classic workspace model and want to enable the data lake tier and migrate to the unified SecOps portal
- Track B: You're running a separate legacy SIEM and want to consolidate into Sentinel + the unified platform
This article covers Track A in depth. Track B migration is a dedicated engagement — scope, data connector migration, analytics rule translation, and cutover sequencing vary too much by legacy platform for a single playbook to handle. The principles here apply, but the specifics differ.
What the unified SecOps platform actually consolidates
Before the migration steps, it helps to be precise about what's changing.
Microsoft Sentinel remains the SIEM and SOAR layer — the workspace, analytics rules, watchlists, automation playbooks, and data connectors stay in Sentinel. The underlying Log Analytics workspace doesn't change.
Microsoft Defender XDR remains the XDR layer — endpoint, identity, email, and cloud app detection. Incidents generated by Defender XDR flow into Sentinel.
The unified SecOps portal (security.microsoft.com) consolidates the investigation and hunting experience across both. You're working in one interface rather than switching between the Defender portal and the Sentinel experience in the Azure portal. Incidents from both sources appear in a single queue. The hunting interface queries across both data sources.
The Sentinel data lake (surfaced as the Basic/Auxiliary table plan or the Sentinel data lake storage tier, depending on the interface) is a workspace-level configuration: it adds a long-retention, lower-cost DCR destination for log routing and keeps that data queryable via KQL.
You're not rebuilding anything. You're extending what's already there.
Pre-migration: the inventory
Before you touch a configuration, document what you're working with. This protects you from discovering mid-migration that a production analytics rule depends on a table you just moved.
Step 1: Export your active analytics rules
From the Sentinel workspace, go to Analytics → Active rules. Export the list. For each rule, record:
- The table(s) it queries
- Whether it uses entity enrichment
- The alert threshold and lookback window
This is your migration constraint map. Any table referenced by an active analytics rule cannot move to the data lake tier without either rewriting the rule to tolerate data lake latency or accepting that the rule will no longer generate real-time alerts. In most cases you'll keep the analytics-tier detection surface and route supplementary full-volume data to the lake — the dual-ingest pattern from Article 4b.
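The constraint map lends itself to a quick script. A minimal sketch of building it from exported rule queries: the `referenced_tables` helper, the `KNOWN_TABLES` set, and the sample rules are all illustrative assumptions, and regex-based table extraction is an approximation you should spot-check against the actual rule definitions.

```python
import re

# Illustrative set of table names to match against; populate from your
# workspace's actual table list in practice.
KNOWN_TABLES = {"SigninLogs", "AuditLogs", "SecurityEvent", "DnsEvents",
                "DeviceProcessEvents", "DeviceNetworkEvents"}

def referenced_tables(rule_query: str) -> set[str]:
    """Return the known tables a KQL rule query references (approximate)."""
    tokens = set(re.findall(r"\b[A-Za-z_][A-Za-z0-9_]*\b", rule_query))
    return tokens & KNOWN_TABLES

def constraint_map(rules: dict[str, str]) -> dict[str, set[str]]:
    """Map each table to the active rules that depend on it."""
    out: dict[str, set[str]] = {}
    for rule_name, query in rules.items():
        for table in referenced_tables(query):
            out.setdefault(table, set()).add(rule_name)
    return out

# Hypothetical exported rules, keyed by rule name.
rules = {
    "Impossible travel": "SigninLogs | where ResultType == 0",
    "Suspicious process": "DeviceProcessEvents | where FileName == 'mimikatz.exe'",
}
print(constraint_map(rules))
```

Any table that appears as a key in this map cannot move to the data lake tier without a rule change.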
Step 2: Identify your highest-volume tables
Run this query in your Sentinel workspace to see where your ingestion volume is going:
```kusto
Usage
| where TimeGenerated > ago(30d)
| summarize TotalGB = round(sum(Quantity) / 1024, 2) by DataType
| sort by TotalGB desc
| take 20
```
The top results are your data lake migration candidates. If a table in the top 10 isn't driving any of your active analytics rules (cross-reference your export from Step 1), it's a strong candidate for immediate data lake routing.
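The cross-reference itself is a set difference. A sketch with invented volumes and an invented rule-dependency set, just to make the selection logic concrete:

```python
# Cross-reference Step 1 and Step 2 outputs: a top-volume table with no
# active rule dependency is an immediate data lake routing candidate.
# Volumes and dependencies below are illustrative, not real guidance.
top_volume_tables = [
    ("DeviceNetworkEvents", 412.7),   # GB over 30 days
    ("DnsEvents", 388.1),
    ("SigninLogs", 96.4),
    ("AuditLogs", 41.2),
]
rule_dependent = {"SigninLogs", "AuditLogs"}  # from the Step 1 export

lake_candidates = [(t, gb) for t, gb in top_volume_tables
                   if t not in rule_dependent]
for table, gb in lake_candidates:
    print(f"{table}: {gb} GB/30d -> route to data lake tier")
```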
Step 3: Map your current retention settings
In Log Analytics workspace settings → Tables, check the retention period for each table. Default 90-day tables that should be extended are your first configuration action after enabling the data lake tier.
Enabling the Sentinel data lake
Step 4: Enable the data lake tier on your workspace
In the Azure portal, navigate to your Log Analytics workspace → Tables. For each table you want to route to the data lake tier, you can change the table's plan from "Analytics" to "Basic" (for lower-cost ingestion) or keep it on Analytics and set long-term retention separately.
The distinction matters:
- Basic/Auxiliary tier — lower ingestion cost, KQL-queryable, but analytics rules cannot run against it in real time. Use for high-volume hunting and forensics tables.
- Analytics tier with extended retention — same detection capability, increased retention cost compared to data lake. Use for detection-driving tables where you need the retention depth but can't accept data lake latency.
For most high-volume raw telemetry tables (DeviceProcessEvents full volume, DnsEvents, DeviceNetworkEvents), Basic tier with 1-2 year retention is the right configuration. For tables that drive analytics rules (SigninLogs, AuditLogs), extend Analytics tier retention to 6-12 months rather than moving to Basic.
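If you'd rather script the plan changes than click through the portal, the Azure CLI exposes table plan and retention settings. A sketch only: the resource group and workspace names are placeholders, and you should verify the `az monitor log-analytics workspace table update` parameters against your installed CLI version before running anything.

```shell
# Sketch: verify parameter names against your az CLI version.
RG="my-resource-group"        # placeholder
WS="my-sentinel-workspace"    # placeholder

# Move a high-volume hunting table to the Basic plan, ~2 years total retention
az monitor log-analytics workspace table update \
  --resource-group "$RG" --workspace-name "$WS" \
  --name DeviceNetworkEvents \
  --plan Basic --total-retention-time 730

# Keep a detection-driving table on Analytics, extend retention to 12 months
az monitor log-analytics workspace table update \
  --resource-group "$RG" --workspace-name "$WS" \
  --name SigninLogs \
  --plan Analytics --retention-time 365
```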
Step 5: Validate cross-tier query access
Run a test query that explicitly targets a recently configured data lake table:
```kusto
// Validate data lake tier is queryable
SecurityEvent
| where TimeGenerated > ago(1d)
| take 10
```
If the table is correctly in Basic tier, the query should return results. If it's not returning data, check that the DCR routing is directing to the correct tier and that the workspace data lake feature is fully enabled. Keep in mind that Basic tier tables support a reduced KQL operator set and are billed differently at query time, so validate any saved hunting queries you plan to run against migrated tables.
Migrating to the unified SecOps portal
This migration is low-risk because it's a portal migration, not a data migration. Your workspace, rules, and data stay exactly where they are.
Step 6: Enable Microsoft Sentinel in the unified Security portal
In the Defender portal (security.microsoft.com), navigate to the Sentinel workspace connector. Connect your existing Sentinel workspace to the unified portal. This is a read/write connection — incidents created in either portal sync.
The unified portal will prompt you to complete the connection and may ask you to set your primary workspace if you have multiple.
Step 7: Validate incident sync
After connecting, open a recent Sentinel incident. Confirm it appears in the unified portal incident queue with the correct severity, entities, and evidence items. Create a test comment in the unified portal and verify it syncs back to the Sentinel workspace in Azure.
Incident sync is near-real-time but not instantaneous. Allow 1-2 minutes and refresh if an incident doesn't appear immediately after creation.
Step 8: Validate hunting query access
In the unified portal, navigate to Hunting → Advanced Hunting. Run a Sentinel-sourced query:
```kusto
SecurityIncident
| where TimeGenerated > ago(7d)
| summarize count() by Status
```
Confirm the query returns results from your Sentinel workspace. This validates that the advanced hunting interface has cross-workspace access and that your Defender XDR and Sentinel data is queryable from a single interface.
Migrating data connectors
For new log sources you're adding as part of this migration, use the new Sentinel data connectors experience in the unified portal rather than the legacy connector gallery in Azure. New connectors added through the unified portal benefit from improved schema alignment and built-in DCR support.
For existing connectors, no migration is required. They continue to function.
Step 9: Audit and update DCR configurations
Data Collection Rules are where the dual-ingest pattern lives. For each high-volume log source, verify your DCR is configured to split event routing — alert-level events to analytics, full volume to data lake. This is the Article 4b pattern applied in practice.
An example DCR transformation for DNS logs:
```json
{
  "transformKql": "source | extend _IsBilledFilteredEvent = (QueryType == 'NXDOMAIN' or isnotempty(ThreatIP)) | project TimeGenerated, Computer, ClientIP, Name, QueryType, _IsBilledFilteredEvent",
  "outputStream": "Microsoft-DNS"
}
```
This routes only NXDOMAIN and threat-flagged queries to the analytics tier while the full DNS log goes to a data lake table. The _IsBilledFilteredEvent field controls which records are billed at analytics-tier rates.
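The routing predicate in that transform can be sanity-checked offline before you deploy the DCR. A Python sketch that mirrors the same logic against sample records; the field names follow the transform above, and the sample DNS events are invented for illustration:

```python
# Mirror the transformKql predicate offline: a record is flagged for
# analytics-tier billing when it's an NXDOMAIN response or carries a
# threat-flagged IP. Sample events are invented.
def is_billed_filtered_event(record: dict) -> bool:
    return record.get("QueryType") == "NXDOMAIN" or bool(record.get("ThreatIP"))

sample_dns = [
    {"Name": "login.contoso.com", "QueryType": "A",        "ThreatIP": ""},
    {"Name": "xk3jq9.example.net", "QueryType": "NXDOMAIN", "ThreatIP": ""},
    {"Name": "c2.badhost.io",     "QueryType": "A",        "ThreatIP": "203.0.113.7"},
]

for rec in sample_dns:
    rec["_IsBilledFilteredEvent"] = is_billed_filtered_event(rec)

billed = [r["Name"] for r in sample_dns if r["_IsBilledFilteredEvent"]]
print(billed)  # only the two suspicious records route to the analytics tier
```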
Graph capabilities: what comes with the data lake
Once the data lake is enabled and the unified portal is connected, two graph capabilities become available without additional configuration.
Blast radius graph (GA with Sentinel data lake) — In the Defender XDR portal, incident graphs now include blast radius visualization: current breach impact plus possible future lateral movement paths from any compromised entity. This updates live as investigation progresses. No extra setup required.
Hunting graph (GA with Sentinel data lake) — Interactive graph traversal in the hunting interface. Reveal hidden relationships between users, devices, and entities. Map privileged access paths to critical assets. Useful for incident prioritization and scope determination before deploying a full investigation.
Both are generally available once your workspace has the data lake enabled. They don't require MCP configuration or preview enrollment.
Graph MCP tools (preview) — If you're building AI agent workflows using the Sentinel MCP server, three graph tools are available via preview enrollment:
| Tool | What it does | GA status |
|---|---|---|
| `blast_radius` | Returns all resources and identities reachable from a compromised entity | Preview |
| `path_discovery` | Confirms or refutes an access path between two specific entities | Preview |
| `exposure_perimeter` | Returns all identities that can reach a specific resource (including multi-hop paths) | Preview |
Enroll in the preview at learn.microsoft.com/azure/sentinel/datalake/sentinel-graph-overview before building production agent workflows that depend on these tools. Preview behavior can change. Verify current status before committing to implementation.
Behavioral baseline validation
The migration is complete when your data lake is routing correctly and the unified portal is connected. But there's a softer validation that matters more for the AI agent workflows from Article 3.
Behavioral baselines don't build instantly. An entity triage agent comparing current authentication against a 30-day baseline needs 30 days of data lake retention to have anything to compare against. A hunting agent looking for anomalies in access patterns needs whatever lookback window your hunting hypothesis requires.
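To make "meaningful variance signals" concrete, here is a minimal sketch of the comparison an entity triage agent performs: a z-score of today's activity against 30 days of daily sign-in counts. The numbers are invented, and real agents use far richer features than raw counts; the point is that the baseline side of the comparison is exactly what data lake retention supplies.

```python
import statistics

# 30 days of daily sign-in counts for one identity (invented numbers).
baseline = [12, 14, 11, 13, 12, 15, 10] * 4 + [13, 12]
today = 57  # today's observed count

mean = statistics.mean(baseline)
stdev = statistics.pstdev(baseline)
z = (today - mean) / stdev

# A large z-score flags today's activity as anomalous relative to the
# 30-day baseline -- the comparison that needs 30 days of retained data.
print(f"baseline mean={mean:.1f}, stdev={stdev:.1f}, z={z:.1f}")
anomalous = abs(z) > 3
print("anomalous:", anomalous)
```

With fewer than 30 days of retained data, the same query returns a baseline too thin to separate signal from noise, which is why the 60-day checkpoint below matters.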
Step 10: Set a 60-day validation checkpoint
Sixty days after completing Steps 4-9:
1. Run the ingestion volume query from Step 2 again. Confirm that your highest-volume raw telemetry tables are now in the data lake tier and analytics-tier costs have decreased.
2. Test an entity triage query against SigninLogs with a 30-day lookback. Confirm you're getting data and that the behavioral comparison is producing meaningful variance signals.
3. Run a hunting query against a data lake table that was previously analytics-only. Confirm the historical depth you expected is actually there.
If any of these fail, the most common causes are: DCR routing not applied to all ingest paths for a given source, table still configured on analytics tier in Log Analytics settings, or data lake tier not fully enabled on the workspace.
Executive Summary for Security Leadership
The migration to unified SecOps is a portal migration, not a data migration. Existing Sentinel workspaces, analytics rules, and data connectors continue to function without changes. The investment is in configuration updates, not infrastructure replacement.
The data lake tier is the enabler for two strategic capabilities: forensic depth for late-discovered breach investigations, and behavioral baseline data for AI-assisted triage and hunting. Both have been constrained by default 90-day analytics retention. Enabling and routing to the data lake tier closes that constraint.
Behavioral baselines require time to build. Organizations that enable the data lake today will have meaningful AI agent context in 60-90 days. Organizations that delay will continue to run agents against incomplete historical data. The capability is time-gated by data accumulation.
The graph capabilities (blast radius, hunting graph) become available without additional configuration once the data lake is enabled. These are generally available — not preview. They compress incident scoping and prioritization from minutes of manual enumeration to seconds of graph traversal.
This quarter: complete Steps 4-8 (data lake tier enable, DCR updates for top three high-volume sources, unified portal connection). Set the 60-day validation checkpoint to confirm routing is correct before agent workflows depend on the data depth.
Where this lands
Six articles in, the blueprint is complete for the detection and architecture layers.
You have the threat picture (pillar), the tool map (Article 1), the log source architecture (Article 2), the agent automation patterns (Article 3), the case for architectural change (Article 4a), the specific routing and retention decisions (Article 4b), and now the migration playbook (this article).
The next articles in this series move from foundation to advanced operations: multi-tenant MSSP architectures, threat hunting at scale with KQL and notebooks, and operationalizing the agent workflows from Article 3 into a production SOC program.
Let me know what you build.
This article is part of the Threat-Informed Defense Series: The Agentic SOC. See the pillar article for the complete framework.