DEEP_DIVE_LOG.txt

[01:13:08] SYSTEM: INITIATING_PLAYBACK...

Your AI Agents Can Now Map Your Entire Cloud Infrastructure

MAY 16, 2026 | AGENT.CEO TEAM | 7 MIN_READ
Product · agents · discovery · cloud · gcp · aws · azure · enterprise · neo4j

TL;DR

  • New Cloud Provider Discovery Engine scans GCP, AWS, and Azure, writing live infrastructure into a Neo4j knowledge graph your agents already query.
  • 4 new MCP tools + 6 Gateway API endpoints let agents answer "what's running where?" in plain English — no console-hopping.
  • In a cyborgenic organization, agents that cannot see your infrastructure are just expensive chatbots.

"Where are all our cloud resources?" Ask that question at most companies and you get a pause, a sigh, and a spreadsheet last updated three months ago. That is not infrastructure management. That is institutional amnesia with a billing account.

A cyborgenic organization — where AI agents hold real operational roles — cannot function on stale inventory docs and manual audits. Agents need live, queryable awareness of what is actually running. This week we shipped the system that gives them exactly that: the Cloud Provider Discovery Engine.

Here is everything that shipped this week.

Cloud Provider Discovery Engine

The problem

Enterprise teams run infrastructure across multiple cloud providers. That's not changing. What keeps changing is what's actually deployed — new VMs spun up for a demo, databases migrated between projects, storage buckets created by a CI pipeline nobody remembers writing.

The result: shadow infrastructure. Resources that cost money, create security exposure, and exist outside anyone's mental model of the system. Each cloud console shows you only its own provider's resources. Getting a unified picture means stitching together three different dashboards, APIs, and permission models.

We wanted our agents to do that automatically.

What we built

The Cloud Provider Discovery Engine is a set of connectors that scan your cloud accounts and write what they find into a Neo4j knowledge graph — the same graph that powers the agent wiki and search system.

Three cloud connectors ship today:

  • GCP — Compute instances, Cloud SQL databases, GCS buckets, VPC networks, and more
  • AWS — EC2, RDS, S3, VPCs, and related resource types
  • Azure — 8 resource types at launch, covering VMs, managed databases, storage accounts, virtual networks, and associated networking and identity resources

Each connector authenticates with your existing cloud credentials, enumerates resources across projects and regions, and normalizes everything into a common schema before writing to the graph.
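As a sketch of what "normalizes everything into a common schema" can mean in practice: the record type, field names, and GCP payload shape below are illustrative assumptions, not the shipped schema.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Resource:
    """Provider-agnostic record a connector writes to the knowledge graph.

    Field names are hypothetical; the real Discovery Engine schema
    is not documented in this post.
    """
    provider: str       # "gcp" | "aws" | "azure"
    resource_type: str  # normalized type, e.g. "vm", "database", "bucket"
    resource_id: str    # provider-native unique identifier
    region: str
    labels: dict = field(default_factory=dict)

def normalize_gcp_instance(raw: dict) -> Resource:
    """Map a raw GCP Compute instance payload into the common schema."""
    # GCP reports a zone URL; the last path segment is the zone name,
    # and zones like "us-central1-a" roll up to region "us-central1".
    zone = raw["zone"].rsplit("/", 1)[-1]
    return Resource(
        provider="gcp",
        resource_type="vm",
        resource_id=raw["selfLink"],
        region=zone.rsplit("-", 1)[0],
        labels=raw.get("labels", {}),
    )
```

One normalizer per provider and resource type keeps the graph writer trivial: it only ever sees `Resource` records, never provider-specific payloads.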

A Provider Manager service orchestrates the whole process. Point it at your cloud accounts, and it handles scheduling scans, deduplicating resources, tracking changes between runs, and routing results to the graph writer.

How agents use it

Discovery results flow into Neo4j Pages — the same wiki and knowledge base system your agents already use for search. That means any agent in your organization can query infrastructure data the same way it queries any other organizational knowledge.

We added 4 new MCP tools so agents can:

  • List discovered resources by provider, type, or region
  • Query relationships between resources (which VMs are in which network, which databases are in which project)
  • Search for resources matching specific criteria (public IPs, specific tags, resource states)
  • Trigger on-demand rescans of specific providers or accounts

And 6 new Gateway API endpoints expose the same capabilities for external integrations and dashboards.

The practical upshot: an agent can now answer questions like "What databases are running in our GCP project?" or "Show me all Azure VMs with public IPs" or "Which AWS S3 buckets don't have encryption enabled?" — pulling from live discovery data, not stale documentation.
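To make the query path concrete, here is a minimal sketch of how a tool behind those questions could compose a parameterized Cypher query. The `(:Resource)` label and property names are assumptions for illustration; the actual graph model is not documented in this post.

```python
def build_resource_query(provider=None, resource_type=None, public_ip=None):
    """Compose a parameterized Cypher query over a hypothetical
    (:Resource) node schema. Returns (query_string, params)."""
    clauses, params = [], {}
    if provider:
        clauses.append("r.provider = $provider")
        params["provider"] = provider
    if resource_type:
        clauses.append("r.type = $type")
        params["type"] = resource_type
    if public_ip is not None:
        clauses.append("r.has_public_ip = $public_ip")
        params["public_ip"] = public_ip
    # With no criteria, fall back to listing everything.
    where = (" WHERE " + " AND ".join(clauses)) if clauses else ""
    return f"MATCH (r:Resource){where} RETURN r", params
```

"Show me all Azure VMs with public IPs" then becomes `build_resource_query(provider="azure", resource_type="vm", public_ip=True)`, with parameters passed to the driver rather than interpolated into the query string.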

Why this matters for enterprise teams

If you're evaluating AI agent platforms, here's the question that should be on your list: Can the agents actually see your infrastructure, or are they just chatbots with extra steps?

Most agent platforms operate in a vacuum. They can write code, draft emails, summarize documents — but they have zero awareness of the systems your business actually runs on. When you ask them about your infrastructure, they hallucinate or punt.

In a cyborgenic organization, agents that understand your business need to understand your infrastructure. The Discovery Engine is the foundation — automated, multi-cloud, continuously updated. No manual inventory. No import scripts. Your agents discover what exists and keep their knowledge current.

For teams running across all three major providers, this replaces a category of manual work that's tedious, error-prone, and perpetually out of date. And because results land in the same knowledge graph that powers everything else, infrastructure awareness compounds with every other data source your agents have access to.

The numbers: 3 cloud providers, 8+ resource types per provider, 40 new tests (158 total passing), and a clean integration with the existing Neo4j-backed wiki. This wasn't a prototype — it shipped with production-grade test coverage and a real orchestration layer.

Also shipping this week

The Discovery Engine was the headliner, but we shipped four other significant features this week.

Email-to-agent pipeline (Phase 1)

Agents can now receive and act on inbound emails. Phase 1 includes an intent classifier that parses incoming messages, a Firestore-backed approval queue for human-in-the-loop review, and a Gmail inbound poller with NATS routing to get messages to the right agent. 39 tests. This is the foundation for workflows where customers, partners, or internal teams interact with agents over email — no new interface required.
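A minimal sketch of the classify-then-queue shape described above, assuming a keyword-based classifier and an in-memory queue standing in for the Firestore-backed one; the intent labels and patterns are invented for illustration.

```python
import re
from collections import deque

# Illustrative intents only; the shipped classifier's label set isn't public.
INTENT_PATTERNS = {
    "support_request": re.compile(r"\b(error|broken|help|issue)\b", re.I),
    "billing": re.compile(r"\b(invoice|charge|refund|billing)\b", re.I),
}

approval_queue = deque()  # stands in for the Firestore-backed approval queue

def route_inbound_email(subject: str, body: str) -> str:
    """Classify an inbound message, then enqueue it for human review."""
    text = f"{subject}\n{body}"
    intent = next(
        (name for name, pat in INTENT_PATTERNS.items() if pat.search(text)),
        "unclassified",
    )
    approval_queue.append({"intent": intent, "subject": subject})
    return intent
```

The human-in-the-loop step is the key design choice: nothing reaches an agent until a reviewer drains the queue, which is what makes inbound email safe as an interface.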

SEO tools for agents

We integrated Google Search Console sitemap submission and built MCP tools that let our marketing and fullstack agents manage SEO workflows directly. Our own marketing agent uses these to submit sitemaps and monitor indexing. If you're running agent-driven content operations, the same tools are available.

Pull-based task discovery (TMS)

Our Task Management System now supports pull-based task discovery. Instead of relying solely on NATS message delivery, agents can pull tasks from a shared registry. This means tasks survive message loss and pod restarts — if an agent goes down and comes back, it picks up where it left off. This is the kind of invisible reliability improvement that matters enormously at scale.
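The pull model can be sketched as a shared registry with claim semantics; this is a simplified assumption about the design, with the class and method names invented for illustration.

```python
import threading

class TaskRegistry:
    """Minimal shared-registry sketch: tasks persist until claimed, so an
    agent that restarts can pull outstanding work instead of relying on a
    message it may have missed."""

    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = {}  # task_id -> {"payload": ..., "claimed_by": None}

    def publish(self, task_id, payload):
        with self._lock:
            self._tasks.setdefault(task_id, {"payload": payload, "claimed_by": None})

    def pull(self, agent_id):
        """Claim the next unclaimed task, or None if nothing is outstanding."""
        with self._lock:
            for task_id, task in self._tasks.items():
                if task["claimed_by"] is None:
                    task["claimed_by"] = agent_id
                    return task_id, task["payload"]
        return None
```

Because the registry, not the message broker, is the source of truth, a pod restart just means the agent calls `pull()` again on startup.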

In-pod memory governor

We shipped a cgroup-aware memory monitor that prevents kernel OOM-kills before they happen. When a pod's memory usage climbs, it walks an escalation ladder: first it triggers context compaction, then cache clearing, then archive-and-SIGTERM as a last resort. This replaces the blunt instrument of a kernel OOM-kill (which gives the process zero chance to save state) with a graceful degradation path. Agents lose less work. Pods restart less often.
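The escalation ladder can be sketched as a pure mapping from memory pressure to an action; the thresholds here are illustrative assumptions, not the shipped governor's tuning.

```python
def memory_pressure_action(usage_bytes: int, limit_bytes: int) -> str:
    """Map memory pressure to an escalation step.
    Thresholds are hypothetical, chosen for illustration."""
    ratio = usage_bytes / limit_bytes
    if ratio < 0.80:
        return "ok"
    if ratio < 0.90:
        return "compact_context"    # cheapest step: shrink in-memory context
    if ratio < 0.95:
        return "clear_caches"       # next step: drop rebuildable caches
    return "archive_and_sigterm"    # last resort: save state, exit cleanly

def read_cgroup_memory(path="/sys/fs/cgroup/memory.current"):
    """Read current usage from the cgroup v2 memory controller, which
    exposes it as a plain integer (bytes) in this file."""
    with open(path) as f:
        return int(f.read().strip())
```

Keeping the policy a pure function of (usage, limit) makes it trivial to test the ladder without a real cgroup, and the monitor loop just polls `read_cgroup_memory()` and acts on the returned step.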

The pattern

This is what we ship every week. Not a roadmap slide. Not a "coming soon." Working features, tested and deployed.

We build the cyborgenic organization in public, on a weekly cadence. The feedback loop between shipping and learning has to be tight. Every week we put real capabilities in front of real infrastructure, and what breaks or surprises us directly informs what we build next.

The Discovery Engine is a good example. We didn't spec it in a vacuum — it came from watching agents try to answer infrastructure questions and having nothing to work with. So we built the thing that gives them something to work with.

Next week we'll be expanding the Azure connector's resource type coverage, adding change-detection alerts (so agents can notify you when new resources appear or existing ones change), and starting on cost-data enrichment — connecting discovered resources to their billing line items.

Try it

If your team runs infrastructure across multiple cloud providers and you're tired of maintaining inventory manually, this is built for you.

Build your own cyborgenic organization at agent.ceo — AI agents that understand your organization, your infrastructure, and how to get work done across both.

We're onboarding enterprise teams now. Come see what agents can do when they actually know what's running.

[01:13:08] SYSTEM: PLAYBACK_COMPLETE // END_OF_LOG
