AIP Analyst: Your Workflow Assistant Is About to Hit Everyone's Ontology

AIP Analyst goes generally available the week of April 13. Here's what it does, who it's actually for, and why the trust problem matters more than the tech.

TL;DR

AIP Analyst goes generally available the week of April 13. It's basically a chat interface for your entire ontology, built for the people who need answers from Foundry data but don't want to build anything. Think of it as the object explorer's smarter sibling, or an AIP agent scoped to one very specific job: answer questions, show data, and occasionally do things on your behalf. Also shipped this month: new models (Grok, Nemotron), incremental media set inputs, and no-code ML inference in Pipeline Builder.

Last month I wrote about Foundry becoming a platform you talk to instead of click through. That post was about AI FDE, which is the tool for builders. This month Palantir rolled out the other half of that equation: a tool for the people who aren't builders.

AIP Analyst goes generally available the week of April 13. I've been testing it and I've got opinions.

The Quick Version of What It Does

Elevator pitch: it's a conversational AI interface sitting on top of your Foundry ontology. You ask it a question in plain English, it answers.

Under the hood is where it gets more interesting. It can search and filter object sets, run group-by aggregations, write SQL against datasets, build Vega charts and map visualizations, and execute Foundry actions and functions directly inside the conversation. Drop in a PDF or an image? It'll analyze that too. The best way I've found to describe it is that it's an upgraded object explorer with a mini-app builder bolted on, and a chat interface wrapping the whole thing.

And it can be embedded as a Workshop widget, which is the part I'd pay attention to.

Who's Actually Going to Use This

For day-to-day FDE work, this probably isn't the tool I'm reaching for. If you're building pipelines or shipping apps, you'll stay in Code Repositories and Pipeline Builder.

But that's fine, because AIP Analyst isn't really aimed at us. It's built for the business users we usually end up supporting. The people who know exactly what they're looking for but aren't technical enough to write a SQL query or build a Workshop module. The analyst with a big report due tomorrow who needs live data fast. The ops manager who wants to check if the numbers make sense before someone spends a week building a React app to display them.

It's also great for anyone who wants to test whether a question is even worth asking before committing engineering time to it. Exploratory work. First-pass analysis. The stuff that used to mean pestering a data team and waiting three days for a response.

If AI FDE is the tool that helps builders move faster, AIP Analyst is the tool that lets everyone else touch the platform without needing a builder for every single question. Different audiences, different use cases, same general direction. And honestly, any tool that gives business users a way to self-serve without breaking anything is a win for FDEs too. Fewer ad-hoc requests, more time on the stuff that actually needs us.

Why This Is More Than a Chatbot

People have been asking for "put a chatbot on our data" for years. Every client I've worked with has had some version of this conversation. In most cases, the result was gimmicky. A chat window that could answer a handful of pre-scripted questions. Enough to demo, nothing you'd actually use.

What's different here is that the data is live and the responses are grounded in the ontology. No hallucinated facts about made-up objects. No confident wrong answers pulled from stale training data. When AIP Analyst tells you the average time-to-resolution for tickets in Q1 was 4.2 days, that's because it just ran a query against your actual ticket object and aggregated the actual resolution timestamps.
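To make that concrete, the grounded answer is just ordinary arithmetic over live records. Here's a minimal sketch in plain Python, with hypothetical ticket data standing in for the ontology's ticket object:

```python
from datetime import datetime

# Hypothetical stand-in for Q1 ticket objects pulled live from the ontology.
tickets = [
    {"opened": datetime(2026, 1, 5), "resolved": datetime(2026, 1, 9)},
    {"opened": datetime(2026, 2, 10), "resolved": datetime(2026, 2, 14)},
    {"opened": datetime(2026, 3, 1), "resolved": datetime(2026, 3, 6)},
]

# A "4.2 days"-style answer is just this: aggregate the actual resolution
# timestamps, nothing recalled from stale training data.
durations = [(t["resolved"] - t["opened"]).days for t in tickets]
avg_resolution_days = sum(durations) / len(durations)
print(round(avg_resolution_days, 1))  # → 4.3
```

The point isn't the math, which is trivial; it's that every number in the answer traces back to a record you can inspect.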

I'll be honest. Watching it work the first time felt like those Star Trek scenes where someone walks up to a console, asks it a question, and the ship computer answers. That sounds dumb until you've spent a decade watching people wait two weeks for an analyst to rebuild a pivot table.

The Trust Problem

Here's where I'll hedge a little because it actually matters. AI is an easy sell on a sales call and a much harder sell to the person who'll get fired if the numbers in the board deck are wrong.

The thing most people aren't comfortable with when you say "AI can answer your questions" is that LLMs make stuff up. That's not a controversial statement anymore; everyone knows it. What's less obvious is how much damage a confident-sounding wrong answer can do when nobody knows where the number came from. I've heard of cases where teams made real operational decisions based on a chatbot answer that nobody bothered to verify until the real data finally came in and the number was off by a wide margin. Not anyone's fault exactly. Just what happens when the answer feels authoritative and the source is invisible.

AIP Analyst's whole thing is that the source isn't invisible. You see the filters. You see the aggregations. You see which objects it touched and which ones it didn't. When it tells you "the average time-to-resolution was 4.2 days," you can click through and see the exact query it ran to get there. That's how you build trust in a tool like this. Not with more marketing about how AI is safe and responsible, but by showing people the actual math and letting them decide for themselves.

It's still AI. It can still pick the wrong object to filter on, or misinterpret a question in a way that technically answers it but doesn't give you what you actually wanted. I wouldn't put it in front of an executive running a board meeting without some testing first. But the transparency helps a lot, and it's the single biggest reason I think this tool will actually get adopted instead of sitting unused like every other chatbot experiment.

The Action Execution Question

This is the part I'd think carefully about before rolling it out broadly.

AIP Analyst can execute actions and functions. That means a user could type "update the status of these 50 objects to approved" and watch it happen. Which is either amazing or terrifying depending on how your governance is set up.

For most deployments, I'd scope this tightly. Read-only at first, or actions limited to specific object types and specific users. Nothing changes in production until someone with authority signs off. Standard stuff, but worth saying out loud because the default temptation will be to let it loose and see what happens.
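That scoping can be as simple as a deny-by-default gate. This is illustrative only, not a Foundry API; the names (ActionRequest, ALLOWED) are hypothetical, but the pattern is the one I'd reach for:

```python
from dataclasses import dataclass

# Deny by default: only explicitly scoped (object type, user) pairs may
# execute a given action. Everything else stays read-only.
ALLOWED = {
    ("Ticket", "ops-manager"): {"update_status"},
}

@dataclass
class ActionRequest:
    object_type: str
    user: str
    action: str

def is_permitted(req: ActionRequest) -> bool:
    """Permit only actions explicitly granted to this (type, user) pair."""
    return req.action in ALLOWED.get((req.object_type, req.user), set())

print(is_permitted(ActionRequest("Ticket", "ops-manager", "update_status")))  # True
print(is_permitted(ActionRequest("Ticket", "intern", "update_status")))       # False
```

The design choice that matters is the default: an unlisted pair gets an empty set, so anything you forgot to scope is denied rather than allowed.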

Treat it like any AIP agent. It's a cocky intern with tools. You don't hand an intern the production database on day one.


Everything Else That Shipped This Month

A lot of smaller updates landed alongside AIP Analyst. Running through them in order of how much I actually care.

Incremental Media Set Inputs is the one I'd highlight if you work with large media pipelines. If you're running pipelines over volumes of images, documents, or video, this is going to save real money. Media processing is expensive and slow. Not having to reprocess everything on every build is the kind of thing you don't appreciate until you've watched a six-hour pipeline burn through compute to re-chunk a bunch of PDFs that didn't change. I've worked on a project involving legal document processing where the team was rebuilding the entire corpus every week because nobody had a better option. This would have paid for itself in a month.
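The general technique behind incremental inputs is worth understanding even outside Foundry: fingerprint each input and only reprocess the ones whose content changed since the last build. A sketch of that idea, not Palantir's implementation:

```python
import hashlib

def content_hash(data: bytes) -> str:
    """Fingerprint a file's contents."""
    return hashlib.sha256(data).hexdigest()

def plan_build(files: dict[str, bytes], last_seen: dict[str, str]) -> list[str]:
    """Return only the filenames that need reprocessing this build."""
    return [
        name for name, data in files.items()
        if last_seen.get(name) != content_hash(data)
    ]

# a.pdf is unchanged, b.pdf changed, c.pdf is new.
previous = {"a.pdf": content_hash(b"v1"), "b.pdf": content_hash(b"old")}
current = {"a.pdf": b"v1", "b.pdf": b"new", "c.pdf": b"fresh"}
print(plan_build(current, previous))  # → ['b.pdf', 'c.pdf']
```

On a corpus where most documents don't change week to week, the work per build shrinks from "everything" to "what's new or edited," which is where the compute savings come from.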

Models in Pipeline Builder with no-code inference is the one I'd point data scientists at. You can now run model inference directly in Pipeline Builder without writing code. It's limited to Spark batch with single tabular inputs and outputs for now, but it's the start of a real no-code path from training to production inference. Useful if you've got people who can train models but don't want to spend a week learning the Foundry Python SDK to ship them.

Nvidia Nemotron 3 and Grok 4.20 both landed in AIP on April 7. More model options are generally a good thing. Nemotron's got a Super 120B for heavy work and a Nano 30B for cheap and fast stuff, which is the kind of range that matters when you're running multi-agent workflows where different agents need different tradeoffs. Grok I haven't tested in any real workflow yet. It's less filtered than the enterprise models, which cuts both ways depending on what you're doing with it. Jury's out.

For the full list of everything that shipped this month, check out Palantir's official April 2026 announcements.


What This All Adds Up To

AI FDE handles the builder side. AIP Analyst handles the user side. Both halves of any Foundry deployment are getting natural language interfaces at the same time, which isn't a coincidence; it's the product direction.

The bar for "usable by a non-technical business user" just dropped significantly, and that's good for the people who actually need the platform to work for them. It's also good for FDEs, even though it doesn't look like it at first. Fewer ad-hoc data pulls means more time on the work that actually needs us. The middleman stuff gets automated. The hard stuff still needs humans.

I'll come back to this post in a few months after AIP Analyst has been in real client environments for a while. Demos are one thing. Production with messy ontologies and 47 different stakeholders all asking different questions is where the real test happens.

Need help sorting this out?

Thinking about rolling out AIP Analyst to your business users but not sure how to scope it safely? This is the kind of problem my team likes to dig into. If you reach out, we'll look at your current ontology and governance setup first, then figure out where AIP Analyst actually helps instead of creating new problems.


Daniel Cubellis

Founder, DWJC Services Inc.

Daniel spent time at Palantir as a Deployment Strategist before founding DWJC to give clients the honest answer about what to build and whether to build it at all.

