Foundry Wants You to Talk to It Now

AI FDE, Pilot, Autopilot, and end-to-end RAG pipelines. March 2026 was the month Foundry started becoming a platform you talk to instead of click through.

TL;DR

AI FDE's the headline release this month. I'm either getting replaced or I'm about to get a lot more done. Probably the second one. Pilot's neat but I haven't tried it yet, Autopilot should've existed years ago, and the RAG updates in Document Intelligence are going to save teams real pain if their documents aren't a mess. Which they always are.

If you've spent any real time on Foundry, you know the rhythm. You're in Code Repositories writing transforms, jumping to Pipeline Builder to wire things together, occasionally swearing at an incremental transform that decided to run a full snapshot at 2 AM. That's the Foundry most of us know.

That hasn't gone away, but March 2026 made something pretty clear about where Palantir's taking the platform. A bunch of releases dropped this month and they all point the same direction: Foundry wants you to describe what you need instead of figuring out how to build it yourself.

Here's what actually shipped and what I think about it after spending some time with a few of these.

AI FDE: Like Going from a Screwdriver to a Drill

AI FDE went generally available on March 12, and honestly, using it feels like being in the Matrix terminal. You describe what you want, it plans the approach, executes it, checks its own work, then opens a branch proposal for you to review. Pipelines, transforms, ontology objects, functions. It handles a lot.
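That closed loop is easier to see as pseudocode than prose. Here's a minimal sketch of the plan, execute, self-check, retry shape, with every function a hypothetical stub standing in for the agent's internals, not a Foundry API:

```python
# Illustrative sketch of AI FDE's closed-loop model. Every function here
# is a hypothetical stub, not a real Foundry or AI FDE API.

def make_plan(request):
    return {"request": request, "revision": 0}

def execute(plan):
    # Stand-in for generating transforms, functions, or ontology changes.
    return {"plan": plan, "output": f"code for {plan['request']}"}

def verify(result):
    # Pretend the first draft has an issue and later revisions pass.
    return ["missing test"] if result["plan"]["revision"] == 0 else []

def adjust(plan, issues):
    return {**plan, "revision": plan["revision"] + 1}

def closed_loop(request, max_attempts=3):
    plan = make_plan(request)
    for _ in range(max_attempts):
        result = execute(plan)
        issues = verify(result)          # the agent checks its own work
        if not issues:
            return f"branch proposal: {result['output']}"  # human reviews
        plan = adjust(plan, issues)      # incorporate feedback, retry
    raise RuntimeError("did not converge; needs human intervention")
```

The important part is the last line of the happy path: the loop ends in a branch proposal, not a merge. A human still signs off.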

I've been using it. Not extensively yet, but enough to have opinions.

The good: it's genuinely intuitive. If you're someone who can communicate technical ideas clearly but isn't necessarily the fastest at translating that into code, this thing is helpful. I've had it update functions I didn't write, where normally I'd spend an hour just getting context on what the previous person was doing. AI FDE picks that up and proposes changes that mostly make sense.

The not-so-good: it's like a cocky intern. Confident. Fast. Sometimes wrong in ways that surprise you. I've been getting 422 errors from code it generates, which in 3+ years of FDE work I've literally never seen before. Still fighting with it on that one. So you're not handing this the keys and walking away.

One thing nobody's really talking about yet: it's not cheap. In my experience, half an hour of AI FDE doing function work can run you anywhere from $40 to $70. That range is wide because it depends on the complexity of the code, how much context it needs, and whether it makes mistakes and has to redo things. Whether that's worth it depends on what your time costs. If it saves an FDE three hours of work, the math makes sense pretty quickly.
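The break-even is just arithmetic. Here it is with my observed cost range and a hypothetical fully loaded FDE rate (the hourly figure is an assumption for illustration, not a real billing number):

```python
# Rough break-even math for AI FDE. The $70 per half hour is the high end
# of my observed range; the hourly rate is a hypothetical fully loaded
# cost for an FDE, purely for illustration.
ai_fde_cost = 70.0        # worst case for a half hour of function work
fde_hourly_rate = 150.0   # hypothetical fully loaded rate
hours_saved = 3.0

human_cost_avoided = fde_hourly_rate * hours_saved  # 450.0
net_savings = human_cost_avoided - ai_fde_cost
print(f"Net savings: ${net_savings:.2f}")  # Net savings: $380.00
```

Swap in your own rate and the conclusion holds for most teams: even at the expensive end, one genuinely saved afternoon pays for it.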

The way I think about it: this is a screwdriver-to-drill upgrade. It'll make a lot of tasks easier once you learn how to use it, but you don't just turn into a pro overnight because you bought a new tool. You still need to check its work. You still need to write tests. For one-off tasks where you'd have to spend 20 minutes feeding it context, it might not be worth it. But if you're doing repeated work across a project, it learns. Like an intern that actually improves.

I wouldn't trust it on complex functions yet unless you can give it a ton of context. But for updating existing code, scaffolding new pipelines, or getting a first pass at something you'd refine anyway? It's good. It's going to get better. I'm excited to see where it goes once it gets deeper integrations with things like Workshop for building UI.

[Diagram: AI FDE closed-loop execution model — describe → execute → self-check → branch proposal]
You describe what you need, AI FDE builds it, verifies it, adjusts if needed, then opens a branch proposal for your review.

Pilot: Think Figma, Not Production

Pilot hit beta on March 5. You describe an app in natural language and it generates ontology, design specs, a React front-end, and a live preview.

I'll be blunt: I haven't used it yet. So I can't tell you if it's actually good.

What I can tell you is where I think it fits. It's like any coding LLM right now. The vibe-coded MVP looks great. In production, things break. That's just where we are with AI-generated applications.

Where I do see real value: clients who don't know what they want yet. And that's a lot of them. When an organization doesn't have a unified data system and can't articulate what the app should do because they've never had the data in one place before, being able to spin up something quick, almost like a Figma prototype but functional, that's useful. Get something in front of them, let them react to it, then build the real thing properly.

Autopilot: Where's Waldo, But for Automations

Autopilot dropped in beta on March 19. It gives you visual dependency graphs, Kanban boards, and trace logs for your automations.

This should've existed years ago.

If you've worked on any Foundry environment that's been running for more than six months, you know the problem. Automations trigger other automations which trigger other automations, and when something breaks it's literally Where's Waldo trying to figure out which one caused it. You're reading logs, mentally reconstructing the execution chain, and eventually just accepting that something ran at 3 AM and hoping it sorted itself out.

[Diagram: Autopilot dependency graph + trace logs — e.g. Auto A → B, C running; Auto C → E failed; trace log: NullPointer at step 3 of Auto E]
Autopilot shows you how your automations connect, where they failed, and what the trace log says about it.

The dependency discovery is the part I'd pay attention to. Autopilot analyzes execution history to surface relationships between automations you might not have even known existed. On client setups I've worked on, there are always surprise connections. Someone set up an automation two years ago, nobody documented it, and now it's quietly feeding data into three other processes. Finding those before they break something is the difference between a calm Monday and a fire drill.
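Palantir hasn't published how Autopilot's discovery works, but the core idea is inferable: if one automation consistently starts right after another finishes, there's probably an edge between them. A minimal sketch of that heuristic, with made-up run data:

```python
# Hedged sketch of dependency discovery from execution history. Autopilot's
# real heuristics are not public; this just shows the inference idea:
# B repeatedly starting within a short window of A finishing implies A → B.
from collections import defaultdict

def infer_edges(runs, window=60, min_occurrences=2):
    """runs: list of (automation_name, start_ts, end_ts), times in seconds."""
    candidates = defaultdict(int)
    for name_a, _, end_a in runs:
        for name_b, start_b, _ in runs:
            if name_a != name_b and 0 <= start_b - end_a <= window:
                candidates[(name_a, name_b)] += 1
    # Require the pattern to repeat so one coincidence doesn't become an edge.
    return {edge for edge, n in candidates.items() if n >= min_occurrences}

runs = [
    ("ingest", 0, 100),    ("enrich", 110, 200),
    ("ingest", 1000, 1100), ("enrich", 1105, 1190),
    ("report", 5000, 5050),  # unrelated; never follows ingest closely
]
print(infer_edges(runs))  # {('ingest', 'enrich')}
```

The repeated-occurrence threshold is the interesting design choice: it's exactly how you'd surface that undocumented two-year-old automation, because a real dependency shows up run after run while coincidental timing doesn't.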

Document Intelligence Got a Real RAG Pipeline

The Document Intelligence updates from this month finally give you an end-to-end RAG pipeline built into Foundry. Chunking, embedding, retrieval. No external tools, no stitching things together.

I've built RAG pipelines on Foundry before via Pipeline Builder. The worst part, every time, is documents. Specifically when they're not uniform. Some are clean PDFs. Some are scanned. Some are file uploads that are basically photos of paper. You end up mixing OCR with regular text extraction and trying to get coherent chunks out of documents that were never designed to be machine-readable.

[Diagram: Document Intelligence end-to-end RAG pipeline — upload media set → extract (OCR / VLM / hybrid) → chunk for semantic coherence → embed → query-ready]
Documents go in, queryable RAG pipeline comes out. Each step deploys as a Python transform.

The new extraction strategies help with that: raw text, OCR, layout-aware OCR, Vision LLM, and a hybrid mode. But the chunking is where it actually matters. Foundry creates chunks that try to preserve semantic meaning instead of just splitting on token counts.

Bad chunking quietly wrecks RAG quality, and most teams don't notice until the answers start being subtly wrong.
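To make the difference concrete: Foundry's actual chunker isn't public, but the boundary-aware idea is simple to sketch. Instead of cutting every N tokens regardless of meaning, you split on natural boundaries (paragraphs here) and pack them up to a budget:

```python
# Minimal sketch of boundary-aware chunking vs. naive fixed-size splitting.
# Illustrative only; Foundry's chunking implementation is not public.

def chunk_by_paragraph(text, max_words=100):
    """Pack whole paragraphs into chunks, flushing before the budget overflows."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))  # flush the full chunk
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

Because a paragraph never gets cut mid-sentence, each chunk embeds as a coherent unit, which is exactly the property that keeps retrieved answers from going subtly wrong.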

If your documents are clean and consistent, this is going to feel almost too easy. If they're a mess (and they usually are), it's still going to take work. But a lot less than before.


Everything Else That Shipped in March

New LLMs on AIP: GPT-5.4 with a 1M-token context window is the big one. Also GPT-5.4 Mini and Nano (400K each), GPT-5.3 Codex for coding and agentic tasks, and Gemini 3.1 Flash-Lite with adjustable thinking levels so you can balance cost and speed. I've mostly been using Claude Opus 4.6 for AI FDE and it's been solid; Haiku for quick agent responses and GPT models for general-awareness tasks. Honestly, you need to play around with all of them to figure out what works for your use case.

  • Health Checks for Virtual and Iceberg Tables: If you're pulling from Databricks, Snowflake, or BigQuery into Foundry, you can now monitor primary key validation, freshness, and schema compliance from inside the platform. About time.
  • Promote Critical Object Types: Mark object types with a verified checkmark so they rank first in search. If your ontology has more than ~150 object types and you're not using this, search is basically a scavenger hunt.
  • Role-Based Branch Security: You can now assign branch ownership to other users instead of being stuck as the sole owner. If you've ever gone on vacation and had a branch stuck because only you could merge it, you understand.
  • Enforce Incremental Execution: Pipeline Builder can now fail jobs that can't run incrementally instead of silently falling back to a full snapshot. This would've saved me a few angry mornings.
  • Workshop Usage Metrics: Built-in analytics showing action submissions and layout views. No user attribution by default.
  • LLM-Generated Notional Data: Pipeline Builder can generate test datasets with LLMs now. Useful for prototyping.
  • Time Series in Quiver: Dedicated time series workspace. Not just a chart type, an actual analysis environment.
  • Workflow Lineage shortcut: Cmd+i / Ctrl+i now opens lineage graphs from anywhere. Small but you'll use it constantly.
  • Ontology MCP: This one's from January but belongs in the conversation. MCP (Model Context Protocol) is an open standard that lets external AI agents query your Ontology directly. Works with LangChain, CrewAI, custom Python agents. If you're building anything that needs to pull Foundry data into an external AI workflow, this is how you do it now.
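The health-checks item at the top of that list is worth unpacking, because the three checks are concrete and easy to reason about. Here's what primary key validation, freshness, and schema compliance amount to, sketched in plain Python over rows-as-dicts (Foundry's built-in checks presumably run equivalent logic against the virtual table's metadata; this is my illustration, not its implementation):

```python
# Hedged sketch of the three table health checks: primary key validity,
# freshness, and schema compliance. Illustrative only; not Foundry's code.
from datetime import datetime, timedelta, timezone

def health_check(rows, pk, expected_schema, last_updated, max_age_hours=24):
    keys = [row.get(pk) for row in rows]
    # Primary key: no nulls, no duplicates.
    pk_ok = all(k is not None for k in keys) and len(keys) == len(set(keys))
    # Freshness: the table was updated within the allowed window.
    fresh = datetime.now(timezone.utc) - last_updated < timedelta(hours=max_age_hours)
    # Schema: every row has exactly the expected columns.
    schema_ok = all(set(row) == set(expected_schema) for row in rows)
    return {"primary_key": pk_ok, "freshness": fresh, "schema": schema_ok}

rows = [{"id": 1, "v": "a"}, {"id": 1, "v": "b"}]  # duplicate key sneaks in
print(health_check(rows, "id", {"id", "v"}, datetime.now(timezone.utc)))
# {'primary_key': False, 'freshness': True, 'schema': True}
```

The point of having this inside the platform is that the duplicate-key case above fails loudly in Foundry instead of being discovered three transforms downstream.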

For the full details on everything, check out Palantir's official March 2026 announcements.


What This Means If You're Building on Foundry

The theme across all of this is pretty simple. Foundry is collapsing the distance between "I want a thing" and "the thing exists." AI FDE handles pipelines and transforms. Pilot handles apps. Document Intelligence handles RAG. Autopilot's the odd one out, and I mean that in a good way. It's not about building faster, it's about finally seeing what you've already built.

None of this makes a senior Foundry developer obsolete. I think it makes a good one more valuable. The ceiling just went up. What changes is what you spend your time on: less time wiring together transforms, more time making decisions about whether what the AI proposed is actually right for your use case.

The teams that get the most out of this are going to be the ones who already understand ontology design and pipeline architecture well enough to know when to push back on what AI FDE suggests. The tool is fast. Knowing when fast is wrong is still a human skill.

That's a trade I'll take, every time.

Need help sorting this out?

This is the kind of messy, real-world Foundry problem my team likes to dig into. If you reach out, we'll look at your current ontology and automations first, then figure out where AI FDE actually helps instead of adding noise.


Daniel Cubellis

Founder, DWJC Services Inc.

Daniel spent time at Palantir as a Deployment Strategist before founding DWJC to give clients the honest answer about what to build and whether to build it at all.

