
🤖 Building an AI Agent with MCP: Multi-Step Tool Orchestration in TypeScript

Plan, reason, act — your first real AI agent that calls MCP tools in sequence, handles errors gracefully, and thinks before it acts

Published · 9 min read

Hi 👋, I'm Tushar Patil. I currently work as a Frontend Developer (Angular) and also have experience with .NET Core and .NET Framework.


This is Part 3 of the AI Engineering with TypeScript series.

Prerequisites: Part 1 — What is MCP? · Part 2 — MCP Fundamentals

Stack: Node.js 20+ · @anthropic-ai/sdk · @modelcontextprotocol/sdk v1.x · TypeScript 5.x


πŸ—ΊοΈ What we'll cover

In Parts 1 and 2 we built the server side — registering tools, resources, and prompts. In Part 3 we flip the camera and build the client side: an AI agent that connects to your MCP server, discovers its tools, and uses them to complete multi-step tasks.

By the end you'll have:

  • 🔌 An MCP client that connects to any stdio MCP server
  • 🧠 An AI agent loop powered by Claude that plans and calls tools
  • 🔄 Multi-step reasoning — the agent calls tools in sequence and reasons about results
  • ❌ Error recovery — what happens when a tool fails mid-task
  • 🏗️ A clean project structure you can build on

Let's build a Weather Analysis Agent that calls our weather server from Part 2 to answer questions like "Should I plan an outdoor event in Pune this weekend?" 🌀️


🔌 Part 1: Setting up the MCP Client

The MCP SDK ships a Client class that handles the entire connection lifecycle. Here is how to connect to a stdio server:

// src/client.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

export async function createMcpClient(serverCommand: string, args: string[] = []) {
  const transport = new StdioClientTransport({
    command: serverCommand,
    args,
  });

  const client = new Client(
    { name: "weather-agent", version: "1.0.0" },
    { capabilities: { sampling: {} } }
  );

  await client.connect(transport);
  console.log("✅ Connected to MCP server");

  return client;
}

Three things are worth noting here. The transport spawns the server process — in this case node dist/weather-server/index.js. Capability negotiation happens automatically inside client.connect(). The client object is now ready to call listTools(), callTool(), readResource(), and more.


πŸ” Part 2: Tool Discovery

Before the AI can use tools, the agent needs to discover what is available. This is where listTools() comes in:

// src/agent.ts
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import Anthropic from "@anthropic-ai/sdk";

export async function discoverTools(client: Client) {
  const { tools } = await client.listTools();

  console.log(`\n🔧 Discovered ${tools.length} tools:`);
  tools.forEach((t) => console.log(`  · ${t.name}: ${t.description}`));

  return tools;
}

The tools array from listTools() has nearly the exact shape Anthropic's API expects for the tools parameter — each entry has a name, description, and inputSchema. The only transformation needed is a field rename: MCP's camelCase inputSchema becomes Anthropic's snake_case input_schema. This is intentional: MCP's tool schema format closely mirrors the Anthropic tools API format, so the plumbing is nearly zero-friction.
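To make the shape concrete, here is a sketch of one discovered tool entry and the single rename it needs before it can go into Anthropic's tools parameter (the tool name and schema are illustrative, not from a real server):

```typescript
// A hypothetical entry as returned by listTools()
const mcpTool = {
  name: "get_current_weather",
  description: "Get current weather for a city",
  inputSchema: {
    type: "object",
    properties: { city: { type: "string" } },
    required: ["city"],
  },
};

// MCP's camelCase `inputSchema` becomes Anthropic's snake_case `input_schema`
const anthropicTool = {
  name: mcpTool.name,
  description: mcpTool.description ?? "",
  input_schema: mcpTool.inputSchema,
};
```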


🧠 Part 3: The Agent Loop

Here is the core of any AI agent — the think → act → observe → repeat cycle:

// src/agent.ts (continued)

const anthropic = new Anthropic();

export async function runAgent(client: Client, userQuery: string) {
  const tools = await discoverTools(client);

  // Convert MCP tool schemas to Anthropic tool format
  const anthropicTools: Anthropic.Tool[] = tools.map((t) => ({
    name: t.name,
    description: t.description ?? "",
    input_schema: t.inputSchema as Anthropic.Tool["input_schema"],
  }));

  const messages: Anthropic.MessageParam[] = [
    { role: "user", content: userQuery },
  ];

  console.log("\n🤖 Agent starting...\n");

  // The agent loop — runs until the model stops requesting tools
  while (true) {
    const response = await anthropic.messages.create({
      model: "claude-sonnet-4-20250514",
      max_tokens: 4096,
      tools: anthropicTools,
      messages,
    });

    console.log(`📊 Stop reason: ${response.stop_reason}`);

    // If the model is done, print the final answer and exit
    if (response.stop_reason === "end_turn") {
      const finalText = response.content
        .filter((b) => b.type === "text")
        .map((b) => (b as Anthropic.TextBlock).text)
        .join("\n");

      console.log("\n✅ Final answer:\n", finalText);
      return finalText;
    }

    // Otherwise, process tool calls
    if (response.stop_reason === "tool_use") {
      // Add the model's response to the message history
      messages.push({ role: "assistant", content: response.content });

      // Collect all tool results to send back in one user turn
      const toolResults: Anthropic.ToolResultBlockParam[] = [];

      for (const block of response.content) {
        if (block.type !== "tool_use") continue;

        console.log(`\n🔧 Calling tool: ${block.name}`);
        console.log("   Input:", JSON.stringify(block.input, null, 2));

        const toolResult = await callMcpTool(client, block.name, block.input);

        toolResults.push({
          type: "tool_result",
          tool_use_id: block.id,
          content: toolResult,
        });
      }

      // Add all tool results in a single user turn
      messages.push({ role: "user", content: toolResults });
      continue;
    }

    // Any other stop reason (e.g. "max_tokens") would leave this loop spinning
    // with the same messages, so bail out explicitly
    throw new Error(`Unexpected stop reason: ${response.stop_reason}`);
  }
}

The pattern to understand here: every tool call from the model gets a corresponding tool_result block with the same tool_use_id. The Anthropic API requires you to send all tool results from a single response in one user turn — not as separate messages. Getting this wrong is the most common agent bug.
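To make the contract concrete, here is a sketch of what the message history looks like after one assistant turn with two tool calls (the ids and weather strings are made up for illustration):

```typescript
// Illustrative message history after one round of the agent loop
const history = [
  { role: "user", content: "Weather in Pune this weekend?" },
  {
    role: "assistant",
    content: [
      { type: "tool_use", id: "toolu_01", name: "get_current_weather", input: { city: "Pune" } },
      { type: "tool_use", id: "toolu_02", name: "get_forecast", input: { city: "Pune" } },
    ],
  },
  {
    // ONE user turn carrying BOTH results, matched by tool_use_id
    role: "user",
    content: [
      { type: "tool_result", tool_use_id: "toolu_01", content: "31°C, Partly Cloudy" },
      { type: "tool_result", tool_use_id: "toolu_02", content: "Sat: 29°C, Light Rain" },
    ],
  },
];
```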


πŸ› οΈ Part 4: Calling MCP Tools from the Client

// src/agent.ts (continued)

async function callMcpTool(
  client: Client,
  toolName: string,
  toolInput: Record<string, unknown>
): Promise<string> {
  try {
    const result = await client.callTool({
      name: toolName,
      arguments: toolInput,
    });

    // MCP tool results are an array of content blocks; the SDK types this
    // loosely, so narrow it before filtering
    const content = result.content as Array<{ type: string; text?: string }>;
    const textContent = content
      .filter((c) => c.type === "text")
      .map((c) => c.text ?? "")
      .join("\n");

    if (result.isError) {
      console.warn(`  ⚠️ Tool returned an error: ${textContent}`);
      return `Error from ${toolName}: ${textContent}`;
    }

    console.log(`  ✅ Result: ${textContent.slice(0, 100)}...`);
    return textContent;
  } catch (err) {
    // Unexpected exception — log it and return a message the model can reason about
    const message = err instanceof Error ? err.message : String(err);
    console.error(`  ❌ Tool call failed: ${message}`);
    return `Tool ${toolName} failed unexpectedly: ${message}`;
  }
}

Notice there are two levels of error handling. result.isError === true is an expected error — the tool ran, but the business logic failed (city not found, API rate limited). You return a descriptive string so the AI can adapt its plan. A catch block is for unexpected errors — network crash, JSON parse failure, process died. In both cases you return a string rather than throwing, because throwing would break the agent loop.
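For reference, an expected failure arrives at the client in the shape below — the structure follows the MCP tool-result format, while the error text itself is illustrative:

```typescript
// Shape of an expected, handled failure as the client receives it from
// callTool(): the tool executed, but its business logic failed
const failedResult = {
  content: [{ type: "text" as const, text: 'City "Poona" not found in database.' }],
  isError: true,
};
```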


πŸ—οΈ Part 5: Wiring It All Together

// src/main.ts
import { createMcpClient } from "./client.js";
import { runAgent } from "./agent.js";
import path from "path";
import { fileURLToPath } from "url";

const __dirname = path.dirname(fileURLToPath(import.meta.url));

async function main() {
  const serverPath = path.resolve(__dirname, "../../weather-server/dist/index.js");

  const client = await createMcpClient("node", [serverPath]);

  const query = `
    I'm planning an outdoor cricket match in Pune this Saturday.
    Can you check the current weather and 5-day forecast, then tell me
    if Saturday looks good and what I should be aware of?
  `;

  await runAgent(client, query);

  await client.close();
}

main().catch(console.error);

🔄 Part 6: Tracing a Multi-Step Run

Here is what a real agent run looks like in the terminal — the model plans, calls tools in order, and synthesizes results:

🤖 Agent starting...

📊 Stop reason: tool_use
🔧 Calling tool: get_current_weather
   Input: { "city": "Pune", "country": "IN", "units": "metric" }
   ✅ Result: 🌀️ Current weather in Pune, IN:
              Temperature: 31°C
              Condition: Partly Cloudy...

📊 Stop reason: tool_use
🔧 Calling tool: get_forecast
   Input: { "city": "Pune" }
   ✅ Result: 📅 5-day forecast for Pune:
              Thu: 33°C, Sunny
              Fri: 32°C, Partly Cloudy
              Sat: 29°C, Light Rain Likely...

📊 Stop reason: end_turn

✅ Final answer:
Saturday's forecast for Pune shows 29°C with light rain likely —
not ideal for cricket! Friday looks much better at 32°C and partly
cloudy. I'd recommend moving the match to Friday if possible. If
Saturday is fixed, have a backup indoor plan ready. 🏏

The model made two sequential tool calls, reasoned about the combined results, and gave a concrete recommendation. That is the agent loop in action.


❌ Part 7: Error Recovery in Practice

What happens when a tool fails halfway through a multi-step task? Let's trace it:

🔧 Calling tool: get_current_weather
   ✅ Result: 🌀️ 31°C, Partly Cloudy

🔧 Calling tool: get_forecast
   ⚠️ Tool returned an error: City "Poona" not found in database.

📊 Stop reason: tool_use
🔧 Calling tool: get_forecast
   Input: { "city": "Pune" }    ← the model corrected the city name!
   ✅ Result: 📅 5-day forecast...

The model received the error message as a tool_result, reasoned about it ("Poona must be an alternative name — let me try Pune"), and retried with the correct input. This is emergent error recovery — you didn't write any retry logic. The model handled it because you returned a useful error string instead of throwing.


πŸ“ Part 8: Project Structure

Here is the full layout for a clean agent + server project:

mcp-weather-agent/
├── packages/
│   ├── weather-server/        ← your MCP server from Part 2
│   │   ├── src/
│   │   │   └── index.ts
│   │   ├── package.json
│   │   └── tsconfig.json
│   └── weather-agent/         ← the client agent we built today
│       ├── src/
│       │   ├── main.ts        ← entry point
│       │   ├── client.ts      ← MCP client setup
│       │   └── agent.ts       ← agent loop + tool calling
│       ├── package.json
│       └── tsconfig.json
├── package.json               ← workspace root
└── tsconfig.base.json

Using a monorepo workspace lets both packages share TypeScript config, and the agent can reference the server by path for local dev, then switch to an npx invocation for production.
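A minimal workspace root package.json for this layout might look like the following (a sketch assuming npm workspaces; adjust for pnpm or yarn):

```json
{
  "name": "mcp-weather-agent",
  "private": true,
  "workspaces": ["packages/*"]
}
```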


πŸ’‘ Part 9: Key Patterns to Remember

1. Always collect all tool results in one user turn.

Do not send each tool result as a separate message. The Anthropic API requires all tool_result blocks from a single assistant response to arrive in one user turn.

2. Return strings from tool calls, don't throw.

Throwing breaks the agent loop. Return a descriptive error string so the model can reason about what went wrong and adapt.

3. Tool discovery is dynamic — not hardcoded.

Always call listTools() at runtime. If you hardcode tool schemas, they go stale whenever the server updates. Dynamic discovery means your agent auto-upgrades.

4. The agent loop is just a while loop.

There is no magic framework here. The entire loop is: call the model, check stop_reason, execute tools, append results, repeat. Keep it simple until you need something more.

5. Keep message history compact.

For long-running agents, the context window fills up. A common technique is to summarize earlier tool results before appending them — or prune messages older than N turns.
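The pruning variant can be sketched as a small helper (a hypothetical function, not part of the article's code; real agents usually summarize instead):

```typescript
type ChatMessage = { role: "user" | "assistant"; content: unknown };

// Keep the original task (first user message) plus the most recent N messages.
// Caution: never slice between an assistant tool_use turn and the user
// tool_result turn that answers it — the API rejects orphaned tool_result
// blocks, so in practice prune only at plain-text user-turn boundaries.
function pruneHistory(messages: ChatMessage[], keepRecent = 10): ChatMessage[] {
  if (messages.length <= keepRecent + 1) return messages;
  return [messages[0], ...messages.slice(-keepRecent)];
}
```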


🎯 Summary

In Part 3 you built a complete AI agent that:

  • 🔌 Connects to an MCP server using the SDK Client
  • 🔍 Discovers tools dynamically with listTools()
  • 🧠 Runs an agent loop powered by Claude
  • 🔄 Handles multi-step tool calls correctly
  • ❌ Recovers from tool errors gracefully

In Part 4 we'll add streaming to the agent — so the user sees the AI's reasoning token-by-token as it thinks, not just the final answer. We'll also add an interactive CLI so you can chat with the agent in real time. 💬


📚 Further Reading