Building AI Agents with TypeScript

A comprehensive technical guide to designing and implementing AI agents in TypeScript, covering agent architecture, tool integration, state management, and production deployment patterns.

Technical · 10 min read · By Klivvr Engineering

The shift from prompt-response AI to agentic AI represents a fundamental change in how software systems interact with language models. A chatbot responds to a single question. An agent pursues a goal — breaking it into steps, using tools, maintaining state, and adapting its approach based on intermediate results. At Klivvr, we build these systems on Klivvr Agent, our TypeScript framework for production-grade AI agents that automate customer support workflows and operational processes.

This article covers the foundational architecture of an AI agent in TypeScript: how to structure the agent loop, integrate tools, manage state, and handle the inevitable edge cases that separate a demo from a production system.

The Agent Loop

Every agent, regardless of complexity, follows the same core loop: observe the current state, decide what to do next, execute the action, and update the state with the result. This loop repeats until the agent reaches its goal or exhausts its budget (token limit, time limit, or step limit).

interface AgentState {
  messages: Message[];
  toolResults: ToolResult[];
  stepCount: number;
  status: "running" | "completed" | "failed" | "paused";
  metadata: Record<string, unknown>;
}
 
interface Message {
  role: "system" | "user" | "assistant" | "tool";
  content: string;
  toolCallId?: string;
  toolCalls?: ToolCall[];
}
 
interface ToolCall {
  id: string;
  name: string;
  arguments: Record<string, unknown>;
}
 
interface ToolResult {
  toolCallId: string;
  toolName: string;
  result: unknown;
  error?: string;
  durationMs: number;
}
 
interface AgentConfig {
  model: string;
  systemPrompt: string;
  tools: Tool[];
  maxSteps: number;
  maxTokens: number;
  temperature: number;
}
 
class Agent {
  // protected so subclasses (ContextualAgent, ResilientAgent below) can
  // read the configuration and state
  protected config: AgentConfig;
  protected state: AgentState;
  private llmClient: LLMClient;
  private toolExecutor: ToolExecutor;
 
  constructor(
    config: AgentConfig,
    llmClient: LLMClient,
    toolExecutor: ToolExecutor
  ) {
    this.config = config;
    this.llmClient = llmClient;
    this.toolExecutor = toolExecutor;
    this.state = {
      messages: [{ role: "system", content: config.systemPrompt }],
      toolResults: [],
      stepCount: 0,
      status: "running",
      metadata: {},
    };
  }
 
  async run(userMessage: string): Promise<AgentState> {
    this.state.messages.push({ role: "user", content: userMessage });
 
    while (
      this.state.status === "running" &&
      this.state.stepCount < this.config.maxSteps
    ) {
      const response = await this.step();
 
      if (!response.toolCalls || response.toolCalls.length === 0) {
        // No tool calls means the agent is done
        this.state.status = "completed";
        break;
      }
 
      // Execute tool calls
      for (const toolCall of response.toolCalls) {
        const result = await this.executeTool(toolCall);
        this.state.toolResults.push(result);
        this.state.messages.push({
          role: "tool",
          // Surface tool errors to the model so it can recover or report them
          content: JSON.stringify(
            result.error ? { error: result.error } : result.result
          ),
          toolCallId: toolCall.id,
        });
      }
 
      this.state.stepCount++;
    }
 
    if (this.state.stepCount >= this.config.maxSteps) {
      this.state.status = "failed";
      this.state.metadata.failureReason = "max_steps_exceeded";
    }
 
    return this.state;
  }
 
  private async step(): Promise<Message> {
    const response = await this.llmClient.chat({
      model: this.config.model,
      messages: this.state.messages,
      tools: this.config.tools.map((t) => t.definition),
      temperature: this.config.temperature,
      maxTokens: this.config.maxTokens,
    });
 
    this.state.messages.push(response);
    return response;
  }
 
  private async executeTool(toolCall: ToolCall): Promise<ToolResult> {
    const startTime = Date.now();
    const tool = this.config.tools.find((t) => t.name === toolCall.name);
 
    if (!tool) {
      return {
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        result: null,
        error: `Unknown tool: ${toolCall.name}`,
        durationMs: Date.now() - startTime,
      };
    }
 
    try {
      const result = await tool.execute(toolCall.arguments);
      return {
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        result,
        durationMs: Date.now() - startTime,
      };
    } catch (error) {
      return {
        toolCallId: toolCall.id,
        toolName: toolCall.name,
        result: null,
        error: (error as Error).message,
        durationMs: Date.now() - startTime,
      };
    }
  }
}
 
interface LLMClient {
  chat(params: {
    model: string;
    messages: Message[];
    tools: ToolDefinition[];
    temperature: number;
    maxTokens: number;
  }): Promise<Message>;
}
 
// Pluggable execution layer for tools (sandboxing, logging, rate limiting).
// The simple Agent above calls tool.execute directly; injecting an executor
// keeps that concern swappable.
interface ToolExecutor {
  execute(tool: Tool, args: Record<string, unknown>): Promise<unknown>;
}

The agent loop is deliberately simple. Complexity lives in the tools, the system prompt, and the state management — not in the loop itself. This separation makes agents easy to test: you can mock the LLM client to return predetermined responses and verify that the agent executes tools in the expected order.

Tool Design

Tools are the agent's hands. They give the language model the ability to interact with external systems — query databases, call APIs, send messages, update records. The quality of tool design directly determines the agent's effectiveness.

interface Tool {
  name: string;
  definition: ToolDefinition;
  execute: (args: Record<string, unknown>) => Promise<unknown>;
}
 
interface ToolDefinition {
  name: string;
  description: string;
  parameters: {
    type: "object";
    properties: Record<string, ParameterSchema>;
    required: string[];
  };
}
 
interface ParameterSchema {
  type: "string" | "number" | "boolean" | "array" | "object";
  description: string;
  enum?: string[];
  items?: ParameterSchema;
}
 
// Example: Customer lookup tool
const customerLookupTool: Tool = {
  name: "lookup_customer",
  definition: {
    name: "lookup_customer",
    description:
      "Look up a customer by their ID, email, or phone number. Returns the customer's profile including name, account status, recent transactions, and open support tickets.",
    parameters: {
      type: "object",
      properties: {
        identifier: {
          type: "string",
          description: "The customer ID, email address, or phone number to search for",
        },
        identifierType: {
          type: "string",
          description: "The type of identifier provided",
          enum: ["id", "email", "phone"],
        },
      },
      required: ["identifier", "identifierType"],
    },
  },
  execute: async (args) => {
    const { identifier, identifierType } = args as {
      identifier: string;
      identifierType: "id" | "email" | "phone";
    };
    // Query customer database
    const customer = await customerService.findBy(identifierType, identifier);
    if (!customer) {
      return { found: false, message: `No customer found with ${identifierType}: ${identifier}` };
    }
    return {
      found: true,
      customer: {
        id: customer.id,
        name: `${customer.firstName} ${customer.lastName}`,
        email: customer.email,
        status: customer.status,
        accountAge: customer.accountAgeDays,
        recentTransactions: customer.recentTransactions.slice(0, 5),
        openTickets: customer.openTickets,
      },
    };
  },
};
 
// Example: Transaction refund tool
const refundTool: Tool = {
  name: "issue_refund",
  definition: {
    name: "issue_refund",
    description:
      "Issue a refund for a specific transaction. Requires the transaction ID and a reason. Refunds over $500 require manual approval and will be queued.",
    parameters: {
      type: "object",
      properties: {
        transactionId: {
          type: "string",
          description: "The ID of the transaction to refund",
        },
        reason: {
          type: "string",
          description: "The reason for the refund",
          enum: [
            "duplicate_charge",
            "service_not_received",
            "customer_request",
            "billing_error",
            "fraud",
          ],
        },
        amount: {
          type: "number",
          description:
            "The amount to refund. If omitted, the full transaction amount is refunded.",
        },
      },
      required: ["transactionId", "reason"],
    },
  },
  execute: async (args) => {
    const { transactionId, reason, amount } = args as {
      transactionId: string;
      reason: string;
      amount?: number;
    };
    const result = await refundService.process(transactionId, reason, amount);
    return result;
  },
};

Good tool design follows three principles. First, descriptions must be precise and operational. The LLM reads the tool description to decide when and how to use it. Vague descriptions lead to incorrect tool selection. Notice how the refund tool's description mentions the $500 approval threshold — this helps the agent set correct expectations with the user.

Second, parameter types should be constrained. Enum values, required fields, and clear type annotations reduce the chance of the LLM generating malformed arguments. An unconstrained string parameter for "reason" would produce inconsistent values; an enum produces predictable, machine-readable values.

Third, tool outputs should be informative. A tool that returns true or false forces the agent to guess what happened. A tool that returns a structured response with context — the customer's name, the refund status, the expected timeline — gives the agent the information it needs to construct a helpful response.

State Management and Conversation Context

An agent's effectiveness depends on maintaining coherent state across multiple turns. In a customer support scenario, the agent might look up a customer, review their recent transactions, determine the issue, and process a resolution — all within a single conversation. The state must be structured to support this multi-step reasoning.

interface ConversationContext {
  conversationId: string;
  customerId?: string;
  customerName?: string;
  intent?: string;
  resolvedEntities: Map<string, unknown>;
  actionsTaken: ActionRecord[];
  startedAt: Date;
  lastActivityAt: Date;
}
 
interface ActionRecord {
  toolName: string;
  arguments: Record<string, unknown>;
  result: unknown;
  timestamp: Date;
  success: boolean;
}
 
class ContextualAgent extends Agent {
  private context: ConversationContext;
 
  constructor(
    config: AgentConfig,
    llmClient: LLMClient,
    toolExecutor: ToolExecutor,
    conversationId: string
  ) {
    super(config, llmClient, toolExecutor);
    this.context = {
      conversationId,
      resolvedEntities: new Map(),
      actionsTaken: [],
      startedAt: new Date(),
      lastActivityAt: new Date(),
    };
  }
 
  enrichSystemPrompt(): string {
    let prompt = this.config.systemPrompt;
 
    if (this.context.customerId) {
      prompt += `\n\nCurrent customer context:\n`;
      prompt += `- Customer ID: ${this.context.customerId}\n`;
      prompt += `- Customer Name: ${this.context.customerName}\n`;
    }
 
    if (this.context.actionsTaken.length > 0) {
      prompt += `\n\nActions already taken in this conversation:\n`;
      for (const action of this.context.actionsTaken) {
        prompt += `- ${action.toolName}: ${action.success ? "succeeded" : "failed"}\n`;
      }
    }
 
    return prompt;
  }
 
  updateContext(
    toolName: string,
    args: Record<string, unknown>,
    result: unknown,
    success = true
  ): void {
    this.context.lastActivityAt = new Date();
    this.context.actionsTaken.push({
      toolName,
      arguments: args,
      result,
      timestamp: new Date(),
      success, // pass false when the tool call failed
    });
 
    // Extract entities from tool results
    if (toolName === "lookup_customer" && result && typeof result === "object") {
      const customerResult = result as { found: boolean; customer?: { id: string; name: string } };
      if (customerResult.found && customerResult.customer) {
        this.context.customerId = customerResult.customer.id;
        this.context.customerName = customerResult.customer.name;
      }
    }
  }
}

The context object accumulates knowledge across turns. When the customer lookup tool returns a customer ID and name, those values are stored in the context and injected into subsequent system prompts. This prevents the agent from re-querying information it already has and ensures that tool calls reference the correct customer.

Error Recovery and Graceful Degradation

Production agents must handle failures gracefully. LLM API calls can time out. Tool executions can throw exceptions. The model can generate invalid tool arguments. Each failure mode needs a recovery strategy.

class ResilientAgent extends Agent {
  private retryConfig = {
    maxRetries: 3,
    baseDelayMs: 1000,
    maxDelayMs: 10000,
  };
 
  async executeWithRetry<T>(
    fn: () => Promise<T>,
    context: string
  ): Promise<T> {
    let lastError: Error | null = null;

    for (let attempt = 0; attempt < this.retryConfig.maxRetries; attempt++) {
      try {
        return await fn();
      } catch (error) {
        lastError = error as Error;

        // Non-retryable errors (validation, auth) fail immediately
        if (!this.isRetryable(lastError)) {
          throw error;
        }

        // Back off exponentially, but skip the sleep after the final attempt
        if (attempt < this.retryConfig.maxRetries - 1) {
          const delay = Math.min(
            this.retryConfig.baseDelayMs * Math.pow(2, attempt),
            this.retryConfig.maxDelayMs
          );
          await this.sleep(delay);
        }
      }
    }

    throw lastError ?? new Error(`${context}: retries exhausted`);
  }
 
  private isRetryable(error: Error): boolean {
    const message = error.message.toLowerCase();
    return (
      message.includes("timeout") ||
      message.includes("rate limit") ||
      message.includes("503") ||
      message.includes("429")
    );
  }
 
  private sleep(ms: number): Promise<void> {
    return new Promise((resolve) => setTimeout(resolve, ms));
  }
}

The retry logic uses exponential backoff with a maximum delay cap. Rate limit errors and transient server errors are retryable; validation errors and authentication failures are not. In production, Klivvr Agent also implements circuit breaker patterns for external tool calls — if a downstream service is consistently failing, the agent gracefully informs the user rather than retrying indefinitely.

Conclusion

Building AI agents in TypeScript is fundamentally about composing three capabilities: language model reasoning, tool execution, and state management. The agent loop orchestrates these capabilities in a cycle of observe-decide-act that continues until the goal is reached.

The patterns described here — a clean agent loop, well-designed tools, structured context management, and resilient error handling — form the foundation of Klivvr Agent. They are intentionally simple because agent complexity should live in the tools and the domain logic, not in the framework. A framework that is easy to understand is a framework that is easy to debug, and debugging is where you will spend most of your time when building production agents.

Start with a single tool and a simple prompt. Get the agent loop working end to end. Then add tools one at a time, testing each addition thoroughly. The temptation to build a sophisticated multi-agent system on day one is strong — resist it. A single agent with well-designed tools will outperform a complex multi-agent system with poorly designed ones every time.
