Network Incident Response in the AI Era

Mar 3 2026 ~ 6 min read

"We Have Reports of a Possible Breach Yesterday at 5:23 PM"

The security team flags it the next morning. Anomalous outbound connections. Pods communicating with external IPs that don't match any known service. Timestamp: yesterday, 5:23 PM.

By the time you hear about it, it's been 16 hours. The pods may have restarted. Nodes may have scaled. Logs may have rotated. In a traditional setup, the evidence is already gone.

But your Kubernetes cluster has been running Kubeshark — silently recording every packet for your business-critical workloads, 24/7. The raw data from yesterday at 5:23 PM is still sitting in the buffer. Waiting.

You open a chat with your favorite AI agent or assistant: Claude Code, Cursor, or any other MCP-compatible AI tool.

Prompt 1

You: "Create a snapshot from yesterday 5 PM to 6 PM. Name it breach-investigation-5pm."

Agent: 2.8 GB of raw packet data across 3 nodes — frozen in eight seconds. Every TCP and UDP packet for the targeted workloads during that hour, locked into an immutable snapshot. Buffer rotation, pod restarts, node scaling — none of it can touch this data now.

✓ Snapshot created
  Name:   breach-investigation-5pm
  ID:     ff894b52-f900-4047-8a18-c6451e7050ab
  Size:   2.8 GB across 3 nodes
  Window: 2026-03-02 17:00:00 — 18:00:00 UTC
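
Before any snapshot API can be called, "yesterday, 5 PM to 6 PM" has to become an absolute time window. A minimal sketch of that resolution step, assuming UTC throughout (`resolve_window` is an illustrative helper, not a Kubeshark API):

```python
from datetime import datetime, time, timedelta, timezone

def resolve_window(now: datetime, start_hour: int, end_hour: int):
    """Turn 'yesterday, from start_hour to end_hour' into an
    absolute UTC window, the way an agent might before calling
    a snapshot tool."""
    yesterday = (now - timedelta(days=1)).date()
    start = datetime.combine(yesterday, time(start_hour), tzinfo=timezone.utc)
    end = datetime.combine(yesterday, time(end_hour), tzinfo=timezone.utc)
    return start, end

# "Yesterday 5 PM to 6 PM", asked on the morning of 2026-03-03:
now = datetime(2026, 3, 3, 9, 0, tzinfo=timezone.utc)
start, end = resolve_window(now, 17, 18)
print(start.isoformat(), end.isoformat())
# 2026-03-02T17:00:00+00:00 2026-03-02T18:00:00+00:00
```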

Prompt 2

You: "Download the PCAP."

Agent: 2.6 GB. 74 PCAP files from 3 nodes, merged into one. A single Wireshark-compatible file containing the complete network evidence from the one-hour snapshot window.

✓ Downloaded: breach-investigation-5pm.pcap (2.6 GB)

Prompt 3

You: "Scan the PCAP. What's the busiest IP? Anything unusual?"

Agent: 5,301,288 packets. 2.6 GB. 9 L7 protocols detected. The anomalies surface immediately:

| Finding | Detail |
|---|---|
| 🔴 30% HTTP 500 error rate | 92,544 of ~308,480 responses are server errors |
| 🟡 16% DNS NXDOMAIN | 86,856 queries for names that don't exist |
| 🟡 5,400 TCP resets | Connections being forcefully killed |
| 🔵 New Google IPs | 172.253.122.99 and 172.253.122.106 — never seen before |
| 🔵 200+ single-packet external IPs | Possible scanning or probe activity |
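
Rates like these fall out of simple tallies over the parsed records. A minimal sketch, assuming each record is a flat dict such as {"proto": "http", "status": 500} — an illustrative shape, not Kubeshark's actual schema:

```python
def scan_anomalies(records):
    """Tally headline anomaly rates from parsed protocol records.
    Record shape ({"proto": ..., "status"/"rcode": ...}) is illustrative."""
    http = [r for r in records if r["proto"] == "http"]
    dns = [r for r in records if r["proto"] == "dns"]
    errors = sum(1 for r in http if r.get("status", 0) >= 500)
    nxdomain = sum(1 for r in dns if r.get("rcode") == "NXDOMAIN")
    return {
        "http_5xx_rate": errors / len(http) if http else 0.0,
        "nxdomain_rate": nxdomain / len(dns) if dns else 0.0,
    }

records = (
    [{"proto": "http", "status": 500}] * 3
    + [{"proto": "http", "status": 200}] * 7
    + [{"proto": "dns", "rcode": "NXDOMAIN"}] * 2
    + [{"proto": "dns", "rcode": "NOERROR"}] * 8
)
print(scan_anomalies(records))
# {'http_5xx_rate': 0.3, 'nxdomain_rate': 0.2}
```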

The Google IPs jump out. They weren't in any previous capture. Something new is calling home from inside the cluster.

Under the hood, the agent ran a full-spectrum tshark analysis — IP endpoints, protocol hierarchy, HTTP status codes, DNS statistics, TCP anomalies, external connections — all in parallel.
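
The kind of fan-out described here can be sketched as one tshark invocation per statistics pass, run concurrently. The `-z` statistics names (`endpoints,ip`, `io,phs`, `http,stat`, `dns,tree`, `conv,tcp`, `expert`) are standard tshark options; the helper itself is illustrative and assumes tshark is on PATH:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Standard tshark statistics passes, each run as: tshark -q -r <pcap> -z <stat>
STATS = ["endpoints,ip", "io,phs", "http,stat", "dns,tree", "conv,tcp", "expert"]

def tshark_cmds(pcap: str) -> list:
    """Build one tshark command (argv list) per statistics pass."""
    return [["tshark", "-q", "-r", pcap, "-z", stat] for stat in STATS]

def run_all(pcap: str) -> dict:
    """Run every pass in parallel and collect stdout per statistic.
    (Sketch only; requires tshark installed.)"""
    with ThreadPoolExecutor() as pool:
        outs = pool.map(
            lambda cmd: subprocess.run(cmd, capture_output=True, text=True).stdout,
            tshark_cmds(pcap),
        )
    return dict(zip(STATS, outs))

for cmd in tshark_cmds("breach-investigation-5pm.pcap"):
    print(" ".join(cmd))
```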

Prompt 4

You: "Get the L7 entries for these IPs."

Agent: The decrypted HTTPS payload — the actual content of an encrypted TLS session:

{
  "src": {
    "name": "mizutest-websocket-client-956d99d9-jnrgt",
    "namespace": "sock-shop",
    "process": "curl",
    "pid": 46,
    "container_id": "d61c03bbca3c..."
  },
  "dst": { "ip": "172.253.122.106", "port": 443 },
  "method": "GET",
  "url": "/",
  "req_headers": {
    "Host": "www.google.com",
    "User-Agent": "curl/7.64.0",
    "Content-Type": "application/json"
  },
  "req_body": "{\"key\": \"value\"}",
  "status": 400,
  "capture": "ebpf_tls"
}

There it is.

A pod called mizutest-websocket-client — not part of the application — is running curl inside a container, sending JSON data to www.google.com over TLS. Google rejected it. But against a real C2 server, this works silently.

The PCAP told us that traffic happened. Kubeshark's L7 dissection — powered by eBPF TLS interception — tells us what was said. The encrypted session was intercepted at the kernel level, no private keys needed, full request body captured.
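
Picking the relevant L7 entries out of a capture is a straightforward filter over the suspect IPs. A sketch that assumes entries shaped like the decrypted record above (illustrative, not an exact schema):

```python
def entries_for_ips(entries, ips):
    """Select L7 entries whose destination IP is in the suspect set
    and summarize who sent them: (pod, process, dst_ip, status)."""
    hits = [e for e in entries if e.get("dst", {}).get("ip") in ips]
    return [
        (e["src"]["name"], e["src"]["process"], e["dst"]["ip"], e.get("status"))
        for e in hits
    ]

entries = [
    {"src": {"name": "mizutest-websocket-client-956d99d9-jnrgt", "process": "curl"},
     "dst": {"ip": "172.253.122.106", "port": 443}, "status": 400},
    {"src": {"name": "front-end-7f4d8b", "process": "node"},
     "dst": {"ip": "10.0.12.7", "port": 80}, "status": 200},
]
suspects = {"172.253.122.99", "172.253.122.106"}
print(entries_for_ips(entries, suspects))
# [('mizutest-websocket-client-956d99d9-jnrgt', 'curl', '172.253.122.106', 400)]
```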

The agent keeps pulling. Two more rogue pods emerge:

| Threat | Pod | What It's Doing |
|---|---|---|
| 🔴 Data Exfil | mizutest-websocket-client | curl sending JSON to www.google.com:443 every ~3s (~1,200 req/hr) |
| 🟠 Rogue API | mizutest-graphql-client → mizutest-graphql-server | 6,648 GraphQL queries in 1 hour, wildcard CORS, Express.js |
| 🟠 Outbound TLS | mizutest-outbound-tls-golang | Go binary, 2,088 TLS connections to gorest.co.in in 1 hour |
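
A request every ~3 seconds for a full hour is the classic beaconing signature: near-constant inter-arrival times. A hedged sketch of scoring that regularity from connection timestamps (the jitter threshold is illustrative):

```python
from statistics import mean, pstdev

def looks_like_beacon(timestamps, max_jitter=0.2):
    """Flag a connection series as beaconing when inter-arrival times
    are nearly constant: coefficient of variation below max_jitter.
    Threshold is illustrative, not a tuned detector."""
    if len(timestamps) < 3:
        return False
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    m = mean(gaps)
    return m > 0 and (pstdev(gaps) / m) < max_jitter

# curl firing every ~3 s, with slight jitter:
beacon = [0.0, 3.1, 6.0, 9.2, 12.1, 15.0]
# human-driven traffic: irregular gaps
human = [0.0, 0.4, 7.9, 8.2, 30.5, 31.0]
print(looks_like_beacon(beacon), looks_like_beacon(human))
# True False
```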

All three pods: deployed in sock-shop namespace. All labeled version: v0.1.0. All using the mizutest- prefix. None are part of the Sock Shop application.

Same actor. Coordinated deployment. Hiding among legitimate workloads.
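
Attribution here is a clustering exercise: workloads outside the known-good baseline that share a name prefix and version label were almost certainly deployed together. A sketch under assumed inputs (the pod dicts and baseline set are illustrative):

```python
from collections import defaultdict

BASELINE = {"front-end", "orders", "carts", "catalogue", "payment", "user", "shipping"}

def cluster_rogue_pods(pods):
    """Group workloads not in the known-good baseline by
    (name prefix, version label): shared fingerprints across
    otherwise unrelated pods point at one coordinated deployment."""
    groups = defaultdict(list)
    for p in pods:
        if p["app"] in BASELINE:
            continue
        prefix = p["app"].split("-", 1)[0] + "-"
        groups[(prefix, p["labels"].get("version"))].append(p["app"])
    return dict(groups)

pods = [
    {"app": "front-end", "labels": {"version": "v2.3"}},
    {"app": "mizutest-websocket-client", "labels": {"version": "v0.1.0"}},
    {"app": "mizutest-graphql-client", "labels": {"version": "v0.1.0"}},
    {"app": "mizutest-outbound-tls-golang", "labels": {"version": "v0.1.0"}},
]
print(cluster_rogue_pods(pods))
# {('mizutest-', 'v0.1.0'): ['mizutest-websocket-client',
#   'mizutest-graphql-client', 'mizutest-outbound-tls-golang']}
```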

Prompt 5

You: "Create a PCAP with only the malicious traffic. Use BPF."

Agent: 2.6 GB → 85 MB. 97% noise eliminated. Only the attacker's packets come through. The agent constructed a BPF filter targeting all six malicious IPs, and Kubeshark applied it server-side during export — no need to download the full capture and filter locally. 85 MB of pure evidence, ready for Wireshark, your security team, or legal.

→ export_snapshot_pcap(
    bpf_filter="host 10.0.60.35 or host 10.0.31.202 or host 10.0.30.195
                or host 10.0.44.221 or host 172.253.122.99 or host 172.253.122.106"
  )
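
The filter expression itself is mechanical to build from the IP list; in BPF syntax, "host X" matches X as either source or destination. A sketch:

```python
def bpf_for_hosts(ips):
    """Build a BPF expression matching any packet to or from the
    given hosts ('host' covers both directions)."""
    return " or ".join(f"host {ip}" for ip in ips)

suspects = [
    "10.0.60.35", "10.0.31.202", "10.0.30.195",
    "10.0.44.221", "172.253.122.99", "172.253.122.106",
]
print(bpf_for_hosts(suspects))
# host 10.0.60.35 or host 10.0.31.202 or host 10.0.30.195 or
# host 10.0.44.221 or host 172.253.122.99 or host 172.253.122.106
```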

Prompt 6

You: "Create an HTML report with everything you found."

Agent:

Here's the HTML report you've requested:

🚨 Security Incident Response Report
CRITICAL - UNAUTHORIZED POD DEPLOYMENT
Kubernetes Cluster Compromise Detected
Incident Detected: 2026-03-03 | Snapshot: breach-investigation-5pm

🔴 Executive Summary

Three unauthorized pods with the "mizutest-" prefix were discovered in the sock-shop Kubernetes namespace. These pods are making outbound connections to external services, performing potential data exfiltration and reconnaissance activities. None of these pods are part of the legitimate Sock Shop application. The attack demonstrates sophisticated knowledge of the cluster environment, with pods deployed alongside legitimate workloads to avoid detection. Immediate action required.

| Field | Value |
|---|---|
| Snapshot Name | breach-investigation-5pm |
| Snapshot ID | ff894b52-f900-4047... |
| PCAP Size | 2.6 GB |
| Duration | 1 hour |
| Cluster Nodes | 3 |
| Total Packets | 5,301,288 |

Network Traffic Analysis

breach-investigation-5pm.pcap — 5,301,288 packets — 2.6 GB — 2026-03-02 17:00–18:00 UTC

Protocol Distribution (by packet count — 9 protocols detected, 5.3M total)

| Protocol | Share | Packets |
|---|---|---|
| HTTP/1.x + HTTP/2 | 35.0% | 1,855,451 |
| DNS | 18.0% | 954,232 |
| TLS / SSL | 16.0% | 848,206 |
| TCP (SYN/FIN/RST) | 12.0% | 636,155 |
| GraphQL | 7.0% | 371,090 |
| WebSocket | 5.0% | 265,064 |
| gRPC | 3.0% | 159,039 |
| AMQP | 2.5% | 132,532 |
| Other | 1.5% | 79,519 |

Top Talkers by Volume (bytes transferred, source → destination)

| Source → Destination | Bytes | Classification |
|---|---|---|
| front-end → catalogue-db | 348 MB | internal |
| orders → orders-db | 271 MB | internal |
| front-end → payment | 212 MB | internal |
| carts → carts-db | 170 MB | internal |
| mizutest-graphql-client → graphql-server | 132 MB | rogue |
| mizutest-websocket-client → 172.253.122.106 | 91 MB | external |
| mizutest-outbound-tls → gorest.co.in | 66 MB | external |
| front-end → user | 49 MB | internal |

Key Metrics

| Metric | Value |
|---|---|
| Total Packets | 5,301,288 |
| HTTP 500 Error Rate | 30.0% |
| DNS NXDOMAIN | 16.0% |
| TCP Resets | 5,400 |
| Single-Packet External IPs | 200+ |
| L7 Protocols Detected | 9 |

Identified Threats

3 unauthorized pods detected in sock-shop namespace — coordinated deployment by a single actor.

🔴 Threat #1: Data Exfiltration (CRITICAL)
mizutest-websocket-client → www.google.com:443

| Process | curl (PID 46) |
| Frequency | Every ~3s (~1,200 req/hr) |
| Capture | ebpf_tls (TLS decrypted) |

Decrypted TLS payload:

  "method": "GET",
  "Host": "www.google.com",
  "User-Agent": "curl/7.64.0",
  "req_body": "{\"key\": \"value\"}",
  "status": 400

🟠 Threat #2: Rogue GraphQL API (HIGH)
mizutest-graphql-client → graphql-server:8080

| Requests | 6,648 queries / hour |
| Framework | Express.js |
| CORS | Allow-Origin: * |

GraphQL query pattern:

  "protocol": "GraphQL",
  "method": "POST",
  "path": "/graphql",
  "x-powered-by": "Express",
  "status": 200

🟠 Threat #3: Outbound TLS Beaconing (HIGH)
mizutest-outbound-tls-golang → gorest.co.in:443

| Binary | Go compiled binary |
| Connections | 2,088 TLS sessions / hour |
| Destination | gorest.co.in (external API) |

TLS connection metadata:

  "protocol": "HTTP/1.1",
  "method": "GET",
  "Host": "gorest.co.in",
  "capture": "ebpf_tls",
  "status": 200

Common Indicators — Single Actor Attribution

| Indicator | Value |
|---|---|
| Namespace | sock-shop |
| Pod Prefix | mizutest-* |
| Label | version: v0.1.0 |
| Tactic | Hiding among legitimate workloads |

Six Prompts. That's It.

Here's every prompt that drove the investigation:

| # | What You Said | What Happened |
|---|---|---|
| 1 | "Create a snapshot from yesterday 5 PM to 6 PM" | 2.8 GB of raw packets frozen across 3 nodes |
| 2 | "Download the PCAP" | 2.6 GB merged PCAP exported and downloaded |
| 3 | "Scan for anything unusual" | 5.3M packets analyzed, new external IPs flagged |
| 4 | "Get the L7 entries for these IPs" | TLS-decrypted payloads reveal curl exfiltration |
| 5 | "Create a PCAP with only the malicious traffic" | 85 MB BPF-filtered evidence PCAP |
| 6 | "Create a report" | Full incident report with findings and remediation |

No SSH. No kubectl exec. No tcpdump. No manual Wireshark filtering.

Under five minutes from "someone hacked my system" to a forensic report identifying three rogue pods, their processes, their container IDs, their decrypted TLS payloads, and their external destinations.

What Made This Possible

Three things came together to turn a panicked Slack message into a resolved incident in under five minutes:

Continuous Raw Capture

Kubeshark records every packet at the kernel level using eBPF — continuously, for your targeted workloads, with minimal configuration. When the breach happened, the evidence was already in the buffer. The agent simply asked for a snapshot, and the data was preserved before it could rotate out.
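
Conceptually, a rotating capture buffer plus an immutable snapshot behaves like the toy model below — a sketch of the idea, not Kubeshark's storage engine:

```python
from collections import deque

class RingCapture:
    """Toy model of a rotating packet buffer: the oldest packets fall
    off as new ones arrive, unless a snapshot copies a time window
    out of rotation first."""
    def __init__(self, capacity):
        self.buf = deque(maxlen=capacity)  # oldest entries evicted first
        self.snapshots = {}

    def record(self, ts, pkt):
        self.buf.append((ts, pkt))

    def snapshot(self, name, start, end):
        # Copy the window into separate storage; buffer rotation
        # can no longer touch it.
        self.snapshots[name] = tuple(
            (ts, p) for ts, p in self.buf if start <= ts < end)
        return self.snapshots[name]

cap = RingCapture(capacity=4)
for ts in range(6):                  # packets at t=0..5, capacity 4
    cap.record(ts, f"pkt{ts}")       # t=0 and t=1 have rotated out
frozen = cap.snapshot("breach", start=2, end=5)
print(frozen)
# ((2, 'pkt2'), (3, 'pkt3'), (4, 'pkt4'))
```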

L7 Visibility Through TLS

Most network tools stop at "IP A talked to IP B on port 443." Kubeshark goes further — intercepting TLS sessions via eBPF at the kernel level, without needing private keys. The agent didn't just see that a pod connected to Google. It saw the HTTP method, the headers, the JSON body, the User-Agent string, the response code. That's the difference between "suspicious connection detected" and "pod mizutest-websocket-client, process curl, PID 46, container d61c03bbca3c, sent {"key": "value"} to www.google.com via TLS-encrypted HTTPS."

AI + MCP = Natural Language Forensics

The MCP server gives the AI agent direct access to Kubeshark's capabilities — snapshot creation, PCAP export with BPF filters, L7 API queries with KFL filters, L4 flow analysis. The agent combines these server-side tools with local analysis (tshark for PCAP scanning) to build a complete picture. You describe what you want. The agent figures out how to get it.

Your Toolkit

Every capability used in this investigation:

| MCP Tool | What It Does |
|---|---|
| create_snapshot | Freeze a time window of raw packets into immutable storage |
| export_snapshot_pcap | Export as Wireshark-compatible PCAP with optional BPF filter |
| download_file | Download the PCAP to your local machine |
| list_api_calls | Query L7 payloads with KFL filters — HTTP, gRPC, DNS, Kafka, GraphQL, and more |
| list_l4_flows | Query L4 TCP/UDP flows by pod, namespace, service, or IP |

Try It

The next time someone says "something's off" — don't reach for kubectl. Start a conversation.

"Create a snapshot for the last hour. Download the PCAP. Scan for anomalies. Show me the L7 payloads for any suspicious IPs. Create a filtered PCAP with just the evidence. Generate a report."

Your AI agent will do the rest.

Kubeshark provides continuous cluster-wide traffic recording, immutable snapshots, and AI-powered forensic analysis via MCP. Get started →