
Agentic UI. When AI Stops Talking and Starts Doing

The shift from conversational to agentic interfaces — and what it means for frontend architecture.

PALLAV // 12 MIN READ

For the last decade, every AI interface has looked the same. You type. It replies. A tennis match — back and forth, one volley at a time. The user is always serving.

Something is shifting. We're watching AI move from talking about doing things to actually doing them. Filling forms. Clicking buttons. Navigating views. Calling APIs. Acting on your behalf.

This is the move from conversational to agentic — and it changes everything about how we design frontends.


The old world and the new

A conversational interface is a canvas: the user paints every stroke. An agentic interface is a stage: the user sets the scene, and the AI performs.

[Diagram] The chatbot era: the user drives every step. User types → AI thinks → text comes back. One turn, no side effects; the page never changes. The agent era: the AI pursues a goal. User sets intent → AI observes and plans → reads page state → acts via tools → fills fields, clicks → loops. Multiple steps, real side effects; the page transforms.

On the left, a user asks a question. The AI answers. Nothing on the page moves. On the right, the user expresses an intent — "Book me a flight to Tokyo" — and the AI goes to work.

That loop — observe, plan, act, report — is the heartbeat of an agentic interface.
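As a sketch, that loop's vocabulary can be captured in a few types. All names here are illustrative assumptions for the sketch, not any specific library's API.

```typescript
// One turn of the observe-plan-act-report loop, as data.
type ToolCall = { tool: string; args: Record<string, unknown> };

interface AgentStep {
  observation: string; // serialized page state the LLM observed
  plan: ToolCall[];    // the tool calls the LLM proposes to run
  report: string;      // summary surfaced back to the user
}

// Tiny helper: summarize a step for the activity log.
function describeStep(step: AgentStep): string {
  return `${step.plan.length} action(s) planned`;
}
```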


The three layers

Every agentic interface needs three clean layers. Mix them and you get bugs. Separate them and you get something testable, safe, and extensible.

[Diagram] Reasoning layer (LLM) → JSON tool calls → tool layer (updateField, submitForm, fetchData, navigateTo) → state → UI layer (React / DOM).

The reasoning layer is the LLM itself. It receives context and produces a plan: a sequence of tool calls. It should never touch the DOM directly.

The tool layer is the API boundary. Each tool is a function with a typed schema and an implementation. This is where you enforce safety: validation, permissions, rate limiting.

The UI layer is React, Vue, or vanilla DOM. It consumes state changes from tool execution and surfaces the agent's activity to the user.
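The contract between those layers can be pinned down as a type. The shape mirrors the tools shown later in this article (schema, requiresConfirmation, execute); the generic parameter and the example tool are assumptions added to keep the sketch self-contained.

```typescript
// A possible Tool contract for the tool layer.
interface Tool<Args> {
  schema: Record<string, string>; // arg names/types advertised to the LLM
  requiresConfirmation: boolean;  // safety flag: gate this tool behind approval
  execute: (args: Args) => void | Promise<void>;
}

// Example: a read-only tool that needs no confirmation.
const fetchData: Tool<{ url: string }> = {
  schema: { url: 'string' },
  requiresConfirmation: false,
  execute: async ({ url }) => {
    console.log(`would fetch ${url}`);
  },
};
```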

WHY THE SEPARATION MATTERS

If the LLM directly manipulates the DOM, you bypass React's state management, break hydration, introduce XSS vectors, and make the system impossible to test.
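One payoff of the separation: the tool layer can be unit tested with no DOM at all, using a plain function in place of React's dispatch. A minimal sketch, with assumed names:

```typescript
// The action shape the tool dispatches.
type SetFieldAction = { type: 'SET_FIELD'; fieldName: string; value: string };

// A tool factory that takes any dispatch-like function.
const makeUpdateField = (dispatch: (a: SetFieldAction) => void) =>
  ({ fieldName, value }: { fieldName: string; value: string }) =>
    dispatch({ type: 'SET_FIELD', fieldName, value });

// Test double: collect dispatched actions in an array instead of rendering.
const dispatched: SetFieldAction[] = [];
makeUpdateField(a => dispatched.push(a))({ fieldName: 'destination', value: 'Tokyo' });
```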


The agent loop

Two safety gates before any tool runs. First: did the user hit Stop? Second: does this tool need confirmation?

[Diagram] Next action → user hit Stop? If yes, halt. If no → needs confirmation? If yes, ask the user and proceed only if approved. Then execute the tool, log and render, and loop.

The core loop in code

agent-loop.ts
for (const action of plan) {
  // Gate 1: the user can halt the run at any time.
  if (abortSignal.aborted) break;

  const tool = tools[action.tool];
  if (!tool) {
    // The LLM named a tool we don't expose; log it and move on.
    addLog({ tool: action.tool, status: 'error', result: 'unknown tool' });
    continue;
  }

  // Gate 2: destructive tools wait for explicit approval.
  if (tool.requiresConfirmation) {
    const approved = await requestUserApproval(action);
    if (!approved) continue; // skip this action, keep going
  }

  addLog({ tool: action.tool, status: 'executing' });

  try {
    const result = await tool.execute(action.args);
    addLog({ tool: action.tool, status: 'complete', result });
  } catch (error) {
    // Failures are logged, not rethrown; the loop degrades gracefully.
    addLog({ tool: action.tool, status: 'error', result: String(error) });
  }
}
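The loop awaits requestUserApproval. One way it could be wired (an assumption, not this article's implementation) is a promise that the confirmation dialog resolves when the user clicks Approve or Reject:

```typescript
// Shape of the action awaiting approval (illustrative).
type PlannedAction = { tool: string; args: unknown };

// The resolver of the currently pending approval, if any.
let pendingResolve: ((approved: boolean) => void) | null = null;

// Called by the agent loop: returns a promise the UI settles later.
function requestUserApproval(action: PlannedAction): Promise<boolean> {
  // In a real app, the UI layer would render a dialog for `action` here.
  return new Promise(resolve => { pendingResolve = resolve; });
}

// Called by the dialog's Approve/Reject buttons.
function resolveApproval(approved: boolean): void {
  pendingResolve?.(approved);
  pendingResolve = null;
}
```

The design keeps the agent loop free of any UI knowledge: it only awaits a boolean, and the UI decides how that boolean is produced.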

Tools should update state, not the DOM

This is the single most important implementation detail.

tools.ts
const createTools = (dispatch: React.Dispatch<Action>) => ({
  updateField: {
    schema: { fieldName: 'string', value: 'string' },
    requiresConfirmation: false,
    execute: ({ fieldName, value }) => {
      dispatch({ type: 'SET_FIELD', fieldName, value });
    }
  },
  submitSearch: {
    schema: {},
    requiresConfirmation: true,
    execute: () => dispatch({ type: 'SUBMIT' })
  }
});

SINGLE SOURCE OF TRUTH

Tools dispatch actions to a reducer. React consumes the state and renders. No DOM manipulation. No injection vectors.
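The SET_FIELD and SUBMIT actions the tools dispatch land in an ordinary reducer. A minimal sketch; the state shape is an assumption for illustration:

```typescript
// The actions the tools above can dispatch.
type Action =
  | { type: 'SET_FIELD'; fieldName: string; value: string }
  | { type: 'SUBMIT' };

// Assumed state shape: a bag of named fields plus a submitted flag.
interface FormState {
  fields: Record<string, string>;
  submitted: boolean;
}

// Pure reducer: the single source of truth React renders from.
function formReducer(state: FormState, action: Action): FormState {
  switch (action.type) {
    case 'SET_FIELD':
      return { ...state, fields: { ...state.fields, [action.fieldName]: action.value } };
    case 'SUBMIT':
      return { ...state, submitted: true };
  }
}
```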


The summary

Separate layers: LLM reasons, tools execute, UI renders
Tools are the API: typed schemas, validation, safety flags
Scrutability: show every action in a structured log
Human in the loop: gate writes behind confirmation
Graceful failure: validate preconditions, fall back to manual
Security: whitelist inputs, semantic identifiers only
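The security principle (whitelist inputs, semantic identifiers only) can be as small as a guard the tool runs before dispatching. A sketch, with an illustrative field list:

```typescript
// Semantic identifiers the agent is allowed to touch (illustrative set).
const ALLOWED_FIELDS = new Set(['origin', 'destination', 'departureDate']);

// Reject anything outside the whitelist before it reaches the reducer.
function validateFieldName(fieldName: string): void {
  if (!ALLOWED_FIELDS.has(fieldName)) {
    throw new Error(`Unknown field: ${fieldName}`);
  }
}
```

Because identifiers are semantic names rather than CSS selectors or raw DOM references, the whitelist doubles as documentation of what the agent can reach.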

Where to start

Pick one multi-step workflow in your current application. Prototype it as an agentic interaction using the three-layer architecture. Start with scrutability (the activity log) and human-in-the-loop confirmation.
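The activity log itself can start as a typed array. A possible shape for the entries addLog appends; the timestamp field is an assumption beyond what the earlier snippets show:

```typescript
// Statuses used by the agent loop's addLog calls.
type ToolStatus = 'executing' | 'complete' | 'error';

interface LogEntry {
  tool: string;
  status: ToolStatus;
  result?: unknown;
  timestamp: number; // assumed addition, useful for ordering the UI log
}

const log: LogEntry[] = [];

function addLog(entry: Omit<LogEntry, 'timestamp'>): void {
  log.push({ ...entry, timestamp: Date.now() });
}
```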

The era of AI that talks is ending. The era of AI that acts is beginning. The question isn't whether your interface will become agentic. It's whether you'll design for it — or bolt it on later.