NIS2 Incident Response on Azure: Meeting the 24-Hour Reporting Requirement

NIS2 Article 21 mandates documented incident handling with a 24-hour initial reporting deadline. Here is how to build a compliant incident response capability on Azure — detection, triage, evidence collection, and authority notification — without a dedicated SOC team.

Tags: NIS2, Incident Response, Azure, Microsoft Sentinel, Security, Compliance

Most cloud teams discover they have an incident response gap when they begin a NIS2 readiness assessment. It is not that they have no incident process — it is that the process they have was designed for operational outages, not for the security incident reporting that NIS2 requires. The reporting obligations are stricter than most teams expect, and the detection capability needed to meet them is not trivial to build.

This article covers the practical mechanics of NIS2-compliant incident response on Azure: what the directive actually requires, why the 24-hour window is harder to meet than it appears, and what you need to build to make it achievable.

What NIS2 requires for incident handling

NIS2 Article 21(2)(b) explicitly lists incident handling as one of the mandatory risk management measures. Entities must have documented procedures for detecting, classifying, responding to, and reporting significant security incidents. Article 23 sets out the notification requirements in precise terms: when a significant incident occurs, entities must notify their national competent authority (or CSIRT) in a tightly defined sequence.

NIS2 Article 23 reporting timeline
  • Within 24 hours of becoming aware: early warning (significant incident flag, suspected cause)
  • Within 72 hours: incident notification (updated assessment, severity, indicators of compromise, measures taken)
  • Within one month of the incident notification: final report (root cause, total impact, remediation measures, lessons learned)
The 24- and 72-hour deadlines run from when the entity "becomes aware" — not from when the incident started.

A "significant" incident is defined in Article 23(3) as one that causes or could cause severe operational disruption of services, or financial loss to the entity, or significant material or immaterial damage to other natural or legal persons. In practice, for cloud infrastructure, this includes incidents affecting availability of essential services, unauthorised access to systems or data, ransomware, and major misconfigurations that expose sensitive data. The threshold is deliberately broad — when in doubt, the directive expects you to report.

Why 24 hours is harder than it sounds

The 24-hour clock starts from when the entity "becomes aware" of the incident. This sounds reasonable until you account for detection lag. Industry data consistently shows that the median time to detect a breach is measured in days, not hours — and for cloud infrastructure, misconfiguration-based exposures can persist for weeks before being identified. If you have no automated detection, you will almost certainly miss the window.

There is a secondary problem: awareness is not just about detecting that something happened. It requires enough context to classify the incident as significant. A single failed login alert is not an incident. Fifty failed logins against a privileged account followed by a successful authentication from an unusual location might be. Moving from raw signal to "this is a significant incident" within a timeframe that allows 24-hour reporting requires both the right log data and the correlation logic to surface genuine incidents from noise.
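
To make that concrete, the burst-then-success half of that pattern takes only a few lines of KQL against the Entra ID SigninLogs table. A minimal sketch; the 20-failure threshold and the one-hour window are illustrative assumptions to tune for your environment:

// Sketch: a burst of failed sign-ins against one account, followed by a
// successful sign-in for the same account. Thresholds are assumptions.
let window = 1h;
let failureThreshold = 20;
let failedBursts = SigninLogs
  | where TimeGenerated > ago(window)
  | where ResultType != "0"  // failed sign-ins (ResultType is a string column)
  | summarize Failures = count(), LastFailure = max(TimeGenerated) by UserPrincipalName
  | where Failures >= failureThreshold;
SigninLogs
| where TimeGenerated > ago(window)
| where ResultType == "0"  // successful sign-in
| join kind=inner (failedBursts) on UserPrincipalName
| where TimeGenerated > LastFailure  // the success arrived after the failure burst
| project TimeGenerated, UserPrincipalName, IPAddress, Location, Failures

Whether the resulting hits cross the "significant" threshold still needs the location and account-privilege context, which is exactly the judgement the correlation layer described below is meant to support.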

The Azure detection stack

A NIS2-capable detection stack on Azure has three layers: log collection, correlation and detection, and alerting. Each layer must be deliberately configured — the defaults are not sufficient.

NIS2 incident detection: Azure stack

Three-layer detection architecture

Log collection
  • Diagnostic settings on all resources: send activity, audit, and platform logs to Log Analytics
  • Entra ID sign-in logs, audit logs, and risky sign-in events via Entra ID data connector
  • Microsoft Defender for Cloud: alerts and security recommendations ingested into Sentinel
  • Network flow logs (NSG, Azure Firewall, Application Gateway) for lateral movement detection
  • Resource-level logs: Key Vault access events, Storage account operations, SQL audit logs
Correlation and detection (Sentinel)
  • Out-of-the-box analytics rules: Microsoft Security, Azure Activity, Entra ID connectors enabled
  • Custom KQL rules for environment-specific threat patterns (service principal abuse, unusual data exports)
  • Fusion detection: multi-stage attack correlation across identity, compute, and data signals
  • Threat intelligence matching against ingested IOCs (Microsoft TI feed + custom indicators)
Alerting and escalation
  • Sentinel incidents automatically mapped to severity: High/Medium/Low based on analytics rule configuration
  • Logic App playbook triggers on High severity incidents: Teams notification + ticket creation
  • Azure Monitor action groups for platform-level alerts (resource health, quota limits)
  • On-call escalation path documented and tested quarterly

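Before relying on this stack, verify that the collection layer is actually delivering. A minimal sanity check in KQL (note that a union across all tables is expensive, so keep the time range tight):

// List every table that received data in the last 24 hours.
// Expected tables (SigninLogs, AuditLogs, AzureActivity, ...) that are
// missing from the output point to a broken diagnostic setting or connector.
union withsource = TableName *
| where TimeGenerated > ago(24h)
| summarize LastEvent = max(TimeGenerated), Events = count() by TableName
| order by LastEvent desc
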
Log Analytics workspace retention must be configured explicitly. Default retention is 30 days. NIS2 does not specify a minimum, but national authorities expect to see logs covering the incident timeline. 90 days hot + 12 months archive is a practical baseline. Archive tier storage costs are low — the risk of missing evidence is not worth the saving.
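
To size that decision, the workspace's own Usage table reports billable ingestion per table, which is the input for projecting hot versus archive cost. A small sketch:

// Billable ingestion per table over the last 31 days (Quantity is in MB).
Usage
| where TimeGenerated > ago(31d)
| where IsBillable == true
| summarize IngestedGB = round(sum(Quantity) / 1024, 2) by DataType
| order by IngestedGB desc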

From log ingestion to incident: the detection flow

  1. Log sources emit events
  2. Diagnostic settings route to Log Analytics
  3. Sentinel analytics rules evaluate
  4. Correlated signals create incident
  5. Playbook fires on High severity
  6. On-call team notified
  7. Triage begins (clock starts)

This pipeline is the prerequisite for meeting the 24-hour window. Without automated detection surfacing incidents within minutes, you are relying on manual discovery — which typically means the incident has already been running for hours or days before triage begins. The gap between actual incident start and "becoming aware" is where compliance risk lives.
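
Once Sentinel is creating incidents, that gap can be measured rather than guessed. A sketch against the SecurityIncident table, assuming your analytics rules populate FirstActivityTime:

// Detection lag: time from first observed activity to incident creation.
SecurityIncident
| summarize arg_max(TimeGenerated, *) by IncidentNumber  // latest record per incident
| where isnotempty(FirstActivityTime)
| extend DetectionLagMinutes = datetime_diff('minute', CreatedTime, FirstActivityTime)
| summarize MedianLagMinutes = percentile(DetectionLagMinutes, 50), Incidents = count()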

Detection rules: where to start

Microsoft Sentinel ships with hundreds of built-in analytics rules. Enabling all of them immediately produces alert fatigue that undermines incident response rather than supporting it. For NIS2, the priority is rules that surface the incident categories most likely to cross the "significant" threshold: credential compromise, privilege escalation, data exfiltration, and ransomware indicators.

// Detect service principal credential access followed by unusual operations
// Useful for identifying compromised automation credentials
let lookback = 1h;
let suspiciousOps = AuditLogs
  | where TimeGenerated > ago(lookback)
  | where OperationName in (
      "Add service principal credentials",
      "Update service principal",
      "Add member to role"
  )
  | where Result == "success"
  | project
      TimeGenerated,
      InitiatedByUpn = tostring(InitiatedBy.user.userPrincipalName),
      InitiatedByApp = tostring(InitiatedBy.app.displayName),
      TargetSP = tostring(TargetResources[0].displayName),
      OperationName;

let unusualSignIns = SigninLogs
  | where TimeGenerated > ago(lookback)
  | where ResultType == 0  // successful sign-in
  | where RiskLevelDuringSignIn in ("high", "medium")
  | project TimeGenerated, AppDisplayName, UserPrincipalName, IPAddress, Location;

suspiciousOps
| join kind=inner (unusualSignIns) on $left.InitiatedByUpn == $right.UserPrincipalName
| project
    AuditTime = TimeGenerated,
    Upn = InitiatedByUpn,
    TargetServicePrincipal = TargetSP,
    Operation = OperationName,
    RiskySignInFrom = Location

Custom KQL rules like this one correlate audit events with risky sign-in signals to surface compromised identity scenarios that out-of-the-box rules may miss. The key principle is to start narrow: write rules that produce high-confidence incidents, not rules that surface every anomaly. A false positive in a NIS2 context has real cost — you may need to decide whether to report it within 24 hours.

Automating the first 30 minutes with Sentinel playbooks

The first 30 minutes after incident detection are the most time-critical. In a small team without a dedicated SOC, those 30 minutes may be spent just locating the right person, getting them access to the right tools, and figuring out what actually happened. Sentinel playbooks — Logic Apps triggered by incident creation — automate the initial enrichment and notification steps so the responder arrives with context rather than starting from scratch.

Before: Without automated playbooks
  • On-call responder receives a bare alert
  • Manual lookup of affected resources
  • Manual identification of who to notify
  • No evidence preservation steps
  • Incident timeline reconstruction starts cold

After: With automated playbooks
  • Incident created with enriched context (entity timeline, related alerts)
  • Affected resources automatically tagged in the incident
  • Teams/PagerDuty notification with a deep link to the Sentinel incident
  • Evidence snapshot triggered: Key Vault access log export to long-term storage
  • Responder arrives with a 15-minute head start

Sentinel automation rules (not playbooks) run before a human sees the incident. Use them for classification: auto-assign incidents matching known patterns to specific owners, auto-close known false positives (like alerts from approved penetration testing IPs), and auto-tag incidents with severity context. Keep playbooks for actions that have external side effects (notifications, ticket creation, resource isolation).

Evidence collection and chain of custody

NIS2 does not specify evidence preservation requirements with the precision of criminal law, but national authorities investigating a significant incident will expect you to produce logs, timelines, and documentation of actions taken during response. The practical requirement is that logs covering the incident period must be accessible and unaltered when the final report is due — one month after the 72-hour incident notification.

Azure provides three sources of forensic evidence for cloud-level incidents. Azure Activity Logs record every ARM API call — every resource created, modified, or deleted, with the caller identity and timestamp. Log Analytics workspace logs (when diagnostic settings are correctly configured) provide service-level audit trails. Microsoft Sentinel incidents capture the correlation of signals, the timeline of detection, and all analyst actions taken during response — including comments, severity changes, and closure reasons.
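
When that final report is due, a control-plane timeline for the incident window can be pulled straight from the AzureActivity table. A sketch, with placeholder window bounds to replace with your incident's actual start and end:

// Control-plane timeline for the incident window.
// The datetime bounds below are placeholders, not real incident times.
let incidentStart = datetime(2025-01-01T00:00:00Z);
let incidentEnd = datetime(2025-01-02T00:00:00Z);
AzureActivity
| where TimeGenerated between (incidentStart .. incidentEnd)
| project TimeGenerated, Caller, CallerIpAddress, OperationNameValue,
    ActivityStatusValue, ResourceGroup, _ResourceId
| order by TimeGenerated asc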

Archive tier for Log Analytics: interactive (hot) retention beyond the included period carries a monthly per-GB charge, while archive-tier retention costs roughly €0.025/GB/month (check current regional pricing). Moving logs to archive after 90 days keeps costs manageable, and the platform supports total retention of up to 12 years. Configure workspace-level retention under "Usage and estimated costs" → "Data retention"; for Sentinel-ingested tables, set per-table retention and archive on the workspace's Tables page.

Reporting to national authorities: what to include

The three-stage reporting process under Article 23 has different content requirements at each stage. The 24-hour early warning is deliberately lightweight — authorities want to know a significant incident has occurred, not a full post-mortem. The 72-hour incident notification adds substance. The final report, a month after that, is the complete account.

NIS2 Article 23: Report content by stage

What each notification must contain

Early warning — 24 hours
  • Incident occurred: yes/no flag that a significant incident is suspected
  • Affected services: which systems, applications, or services are impacted
  • Initial classification: suspected cause (if known), whether cross-border impact is possible
  • Preliminary assessment: operational and financial impact so far
Incident notification — 72 hours
  • Updated severity assessment and scope of impact
  • Indicators of compromise (IOCs): IP addresses, domains, file hashes, tactics observed (see the query sketch after this list)
  • Containment and mitigation measures taken so far
  • Current operational status: services restored, partially degraded, or still affected
Final report — within one month of the incident notification
  • Detailed description of the incident and root cause
  • Total duration and full scope of impact (users affected, data involved, services disrupted)
  • All response actions taken: detection, containment, eradication, recovery
  • Lessons learned and measures implemented to prevent recurrence

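For the IOC items in the 72-hour notification, the entities attached to Sentinel alerts are a practical starting point. A sketch that unpacks them from the SecurityAlert table; the entity types and property names follow the Sentinel alert entity schema, and the type filter is an assumption to extend as needed:

// Unpack alert entities observed in the last 72 hours as raw material
// for the IOC list. Property names vary by entity type, hence coalesce.
SecurityAlert
| where TimeGenerated > ago(72h)
| extend Ent = todynamic(Entities)
| mv-expand Ent
| where tostring(Ent.Type) in ("ip", "filehash", "dns", "url")
| project AlertName, EntityType = tostring(Ent.Type),
    Indicator = coalesce(tostring(Ent.Address), tostring(Ent.Value),
        tostring(Ent.DomainName), tostring(Ent.Url))
| distinct AlertName, EntityType, Indicator
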
Each EU member state designates its own competent authority or national CSIRT to receive NIS2 notifications; ENISA coordinates at EU level but does not receive them. Check your national transposition law — submission channels and formats vary by country. Poland: CERT Polska (cert.pl). Germany: BSI. Netherlands: NCSC-NL. Austria: CERT.at.

Four things to build before your next incident

If you are starting from a baseline of basic Azure Monitor alerts and no Sentinel deployment, the gap to NIS2-capable incident response can feel large. The practical approach is to sequence the work by the risk it addresses, not by completeness.

First: centralise your logs. Configure diagnostic settings to send all platform, service, and identity logs to a single Log Analytics workspace. This is the prerequisite for everything else. Without centralised logs, detection and evidence collection are both impossible. One workspace per environment (dev, prd) is fine; cross-workspace queries in Sentinel handle the rest.

Second: enable Microsoft Sentinel with the Microsoft Security connectors. Entra ID, Microsoft Defender for Cloud, and Azure Activity Log connectors are low-effort to enable and surface the most common incident categories (credential compromise, misconfiguration, suspicious resource operations) with built-in detection rules.

Third: build one playbook. Choose the High-severity incident notification playbook — trigger on Sentinel incident creation with severity "High", send a Teams/Slack message with incident details and a link to the Sentinel portal. This single automation closes the "how does anyone find out?" gap for the most serious incidents.

Fourth: document your incident response procedure. This does not need to be lengthy. It needs to cover: who is the incident response owner, what qualifies as a significant incident under your assessment, what are the contact details for your national CSIRT, and what is the sequence of steps from detection to notification. A two-page document that people have read is worth more than a 40-page playbook nobody has seen.

NIS2 auditors do not require a perfect SOC setup. They require evidence that you have thought through incident response, have detection capability, and have a documented process. A Sentinel workspace with a handful of active analytics rules, a tested notification playbook, and a short incident response procedure document puts you ahead of most entities at their first assessment.

Incident response capability is one part of a broader NIS2 readiness posture. The technical controls, governance, and supply chain measures that form the rest of Article 21 are covered in our companion article on Infrastructure as Code for NIS2 compliance. Both sides of the requirement — preventive controls and response capability — need to be in place for a complete posture.

Need this for your project?

We cover this exact scenario. Strategy, delivery, or both. See the use case or get in touch.