Agentic UI: When AI Stops Talking and Starts Doing
The shift from conversational to agentic interfaces — and what it means for frontend architecture.
PALLAV // 12 MIN READ
For the last decade, every AI interface has looked the same. You type. It replies. A tennis match — back and forth, one volley at a time. The user is always serving.
Something is shifting. We're watching AI move from talking about doing things to actually doing them. Filling forms. Clicking buttons. Navigating views. Calling APIs. Acting on your behalf.
This is the move from conversational to agentic — and it changes everything about how we design frontends.
The old world and the new
A conversational interface is a canvas: the user paints every stroke. An agentic interface is a stage: the user sets the scene, and the AI performs.
In the conversational model, a user asks a question and the AI answers; nothing on the page moves. In the agentic model, the user expresses an intent — "Book me a flight to Tokyo" — and the AI goes to work.
That loop — observe, plan, act, report — is the heartbeat of an agentic interface.
The three layers
Every agentic interface needs three clean layers. Mix them and you get bugs. Separate them and you get something testable, safe, and extensible.
The reasoning layer is the LLM itself. It receives context and produces a plan: a sequence of tool calls. It should never touch the DOM directly.
The tool layer is the API boundary. Each tool is a function with a typed schema and an implementation. This is where you enforce safety: validation, permissions, rate limiting.
The UI layer is React, Vue, or vanilla DOM. It consumes state changes from tool execution and surfaces the agent's activity to the user.
WHY THE SEPARATION MATTERS
If the LLM directly manipulates the DOM, you bypass React's state management, break hydration, introduce XSS vectors, and make the system impossible to test.
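Because the reasoning layer emits only data, the tool layer can whitelist-validate every call before anything executes. Here is a minimal sketch of that boundary; the type and function names (`ToolCall`, `ToolDefinition`, `validateCall`) are illustrative, not from a specific library:

```typescript
// The reasoning layer's output is plain data: a list of tool calls.
interface ToolCall {
  tool: string;
  args: Record<string, unknown>;
}

// The tool layer pairs a schema with an implementation and safety flags.
interface ToolDefinition {
  schema: Record<string, string>; // arg name -> expected typeof
  requiresConfirmation: boolean;
  execute: (args: Record<string, unknown>) => unknown;
}

// Whitelist, not blacklist: reject unknown tools, extra args, and wrong types.
function validateCall(
  call: ToolCall,
  tools: Record<string, ToolDefinition>
): boolean {
  const tool = tools[call.tool];
  if (!tool) return false; // unknown tool
  const schemaKeys = Object.keys(tool.schema);
  const argKeys = Object.keys(call.args);
  if (argKeys.some((k) => !schemaKeys.includes(k))) return false; // extra args
  return schemaKeys.every((k) => typeof call.args[k] === tool.schema[k]);
}
```

Running every call through a gate like this before execution is what makes the tool layer a real API boundary rather than a convention.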
The agent loop
Two safety gates run before any tool does. First: did the user hit Stop? Second: does this tool require confirmation?
The core loop in code
```typescript
for (const action of plan) {
  if (abortSignal.aborted) break; // Gate 1: user hit Stop

  const tool = tools[action.tool];
  if (tool.requiresConfirmation) { // Gate 2: human in the loop
    const approved = await requestUserApproval(action);
    if (!approved) continue;
  }

  addLog({ tool: action.tool, status: 'executing' });
  try {
    const result = await tool.execute(action.args);
    addLog({ tool: action.tool, status: 'complete', result });
  } catch (error) {
    addLog({ tool: action.tool, status: 'error', result: String(error) });
  }
}
```

Tools should update state, not the DOM
This is the single most important implementation detail.
```typescript
const createTools = (dispatch: React.Dispatch<Action>) => ({
  updateField: {
    schema: { fieldName: 'string', value: 'string' },
    requiresConfirmation: false,
    execute: ({ fieldName, value }) => {
      dispatch({ type: 'SET_FIELD', fieldName, value });
    }
  },
  submitSearch: {
    schema: {},
    requiresConfirmation: true,
    execute: () => dispatch({ type: 'SUBMIT' })
  }
});
```

SINGLE SOURCE OF TRUTH
Tools dispatch actions to a reducer. React consumes the state and renders. No DOM manipulation. No injection vectors.
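A minimal reducer on the other side of that dispatch might look like the sketch below. The state shape (`FormState`) and action names mirror the tools above; assume a plain search form:

```typescript
// Hypothetical reducer that React consumes via useReducer.
// SET_FIELD and SUBMIT match the actions dispatched by the tools.
interface FormState {
  fields: Record<string, string>;
  submitted: boolean;
}

type Action =
  | { type: 'SET_FIELD'; fieldName: string; value: string }
  | { type: 'SUBMIT' };

function formReducer(state: FormState, action: Action): FormState {
  switch (action.type) {
    case 'SET_FIELD':
      // Tools only describe changes; this is the one place state actually changes.
      return {
        ...state,
        fields: { ...state.fields, [action.fieldName]: action.value }
      };
    case 'SUBMIT':
      return { ...state, submitted: true };
  }
}
```

In a component, `const [state, dispatch] = useReducer(formReducer, initialState)` hands the same `dispatch` to `createTools`, so the agent and the human share one state path and one render path.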
The summary
| Principle | In practice |
|---|---|
| Separate layers | LLM reasons, tools execute, UI renders |
| Tools are the API | Typed schemas, validation, safety flags |
| Scrutability | Show every action in a structured log |
| Human in the loop | Gate writes behind confirmation |
| Graceful failure | Validate preconditions, fall back to manual |
| Security | Whitelist inputs, semantic identifiers only |
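The human-in-the-loop gate from the agent loop can be implemented as a promise the UI resolves. The sketch below is a hypothetical implementation, assuming the UI renders an Approve/Reject prompt wired to `resolveApproval`:

```typescript
// Hypothetical confirmation gate: the agent loop awaits, the UI resolves.
let pendingResolver: ((approved: boolean) => void) | null = null;

// Called by the agent loop before a gated tool runs.
function requestUserApproval(action: { tool: string }): Promise<boolean> {
  return new Promise((resolve) => {
    // In a real app, also dispatch an action here so the UI shows the prompt.
    pendingResolver = resolve;
  });
}

// Wired to the Approve/Reject buttons in the UI.
function resolveApproval(approved: boolean): void {
  pendingResolver?.(approved);
  pendingResolver = null;
}
```

The agent simply does `const approved = await requestUserApproval(action)`; whether that takes ten milliseconds or ten minutes is up to the human.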
Where to start
Pick one multi-step workflow in your current application. Prototype it as an agentic interaction using the three-layer architecture. Start with scrutability (the activity log) and human-in-the-loop confirmation.
The era of AI that talks is ending. The era of AI that acts is beginning. The question isn't whether your interface will become agentic. It's whether you'll design for it — or bolt it on later.