SmythOS SDK

    The core Agent class for creating and managing AI agents.

    An Agent combines models, skills, and behaviors to create intelligent assistants that can process prompts, maintain conversations, and execute tasks.

    // Create a simple agent
    const agent = new Agent({
    name: "Assistant",
    model: "gpt-4",
    behavior: "You are a helpful assistant."
    });

    // Use the agent
    const response = await agent.prompt("Hello, how can you help me?");
    console.log(response);

    // Add skills to the agent
    agent.addSkill({
    name: "calculator",
    description: "Perform mathematical calculations",
    process: (a, b) => a + b
    });
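    Once a skill is registered, the agent can use it while answering prompts. A minimal sketch of that flow, assuming the agent routes the arithmetic question to the calculator skill defined above:

    // Ask something the calculator skill can handle
    const sum = await agent.prompt("What is 2 + 3?");
    console.log(sum);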

    Hierarchy


    Constructors

    Properties

    structure: { components: any[]; connections: any[] } = ...

    The agent's internal structure, used by internal operations to generate the agent data.

    Accessors

    • Access to LLM instances for direct model interactions.

      Supported providers and calling patterns:

      • agent.llm.openai(modelId, params) - OpenAI models
      • agent.llm.anthropic(modelId, params) - Anthropic models

      Returns TLLMProviderInstances

      // Direct model access
      const gpt4 = agent.llm.openai('gpt-4', { temperature: 0.7 });
      const response = await gpt4.prompt("Explain quantum computing");

      // Using configuration object
      const claude = agent.llm.anthropic({
          model: 'claude-3-sonnet',
          maxTokens: 1000
      });

      // Streaming response
      const stream = await claude.prompt("Write a poem").stream();
      stream.on('data', chunk => console.log(chunk));

    Methods

    • Add a skill to the agent, enabling it to perform specific tasks or operations.

      Skills extend the agent's capabilities by providing custom functions that can be called during conversations or prompt processing.

      A skill can be implemented in two ways:

      1. With a process function that defines the skill's core logic
      2. As a workflow entry point that can be connected to other components to build complex logic

      Parameters

      Returns {
          in: (inputs: TSkillInputs) => void;
          out: { body: any; headers: any; query: any; [key: string]: any };
      }


      // Add a data fetching skill
      agent.addSkill({
      name: "fetch_weather",
      description: "Get current weather for a location",
      process: async (location) => {
      const response = await fetch(`/api/weather?location=${location}`);
      return response.json();
      }
      });

      // Add a skill that will be used as an entry point in a workflow
      const weatherSkill = agent.addSkill({
          name: "fetch_weather",
          description: "Get current weather for a location",
      });

      // Attach the skill to a workflow
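      // (Hypothetical sketch: 'reportComponent' stands in for whatever component
      // the workflow connects to; only the in/out handles documented in the
      // return type above are assumed here.)
      reportComponent.in({ weather: weatherSkill.out.body });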
    • Parameters

      • skillName: string
      • ...args: any[]

      Returns Promise<any>
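      A usage sketch for this method. The method name is not rendered above; it is assumed here to be call, the direct skill-invocation form, taking the skill name followed by the skill's arguments:

      // Assumed method name 'call': invoke the calculator skill added earlier
      const total = await agent.call("calculator", 2, 3);
      console.log(total); // 5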

    • Create a new chat session with the agent.

      Chat sessions maintain conversation context and allow for back-and-forth interactions with the agent, preserving message history.

      Parameters

      • Optional options: string | ChatOptions

        The options for the chat session. If you provide a string, it will be used as the chat ID and persistence will be enabled by default.

        • string
        • ChatOptions
          • Optional id?: string

            The ID of the chat. If not provided, a random ID will be generated.

            If provided, it will be used to identify the chat in the storage provider and to load the previous messages from it; if no chat with that ID is found, a new chat will be created.

          • Optional persist?: boolean | ILLMContextStore

            If true, the chat will be persisted in the default SRE storage provider: the next time you create a chat with the same chat ID and agent ID, the previous messages will be loaded from the storage provider.

            If false, the chat will not be persisted.

            If an ILLMContextStore is provided, the chat will be persisted in the provided store.

      Returns Chat

      Chat instance for interactive conversations

      const chat = agent.chat();

      // Send messages in sequence
      await chat.send("Hello, I need help with my project");
      await chat.send("Can you explain the benefits?");
      await chat.send("What are the next steps?");

      // Get conversation history
      const history = chat.getHistory();
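      The ChatOptions described above enable persistent sessions; a short sketch, assuming a default SRE storage provider is available:

      // Reuse the same chat ID later to resume the persisted conversation
      const persistentChat = agent.chat({ id: 'support-session-42', persist: true });
      await persistentChat.send("Where did we leave off?");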
    • Expose the agent as an MCP (Model Context Protocol) server.

      The MCP server can be started in two ways:

      • STDIO: The MCP server will be started in STDIO mode
      • SSE: The MCP server will be started in SSE mode; in this case the listening URL will be http://localhost:<port>/mcp

      Parameters

      • transport: MCPTransport

        The transport for the MCP server

      • port: number = 3388

        The port for the MCP server (when using SSE transport)

      Returns Promise<string>

      MCP instance

      const agent = new Agent({ /* ... agent settings ... */ });

      const stdioMcp = await agent.mcp(MCPTransport.STDIO);
      const sseMcp = await agent.mcp(MCPTransport.SSE, 3389);

    • Send a prompt to the agent and get a response.

      The returned command can be executed in multiple ways:

      • Promise mode: await agent.prompt("question") - returns final result
      • Explicit execution: await agent.prompt("question").run() - same as above
      • Streaming mode: await agent.prompt("question").stream() - returns event emitter

      Parameters

      • prompt: string

        The message or question to send to the agent

      • Optional options: any

      Returns AgentCommand

      AgentCommand that can be executed or streamed

      // Simple prompt (promise mode)
      const answer = await agent.prompt("What is the capital of France?");


      // Streaming for long responses
      const stream = await agent.prompt("Write a detailed report").stream();
      stream.on('data', chunk => console.log(chunk));
      stream.on('end', () => console.log('Complete!'));
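      The explicit-execution form listed above behaves the same as promise mode:

      // Explicit execution returns the same final result as awaiting directly
      const sameAnswer = await agent.prompt("What is the capital of France?").run();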
    • Import an agent from a file or configuration object.

      Supported import patterns:

      • Import from .smyth file: Agent.import('/path/to/agent.smyth')
      • Import from configuration: Agent.import(settingsObject)
      • Import with overrides: Agent.import('/path/to/agent.smyth', overrides)

      Parameters

      • data: string

        File path or agent settings object

      • Optionaloverrides: any

        Optional settings to override imported configuration

      Returns Agent

      New Agent instance

      // Import from file
      const agent1 = Agent.import('./my-agent.smyth');

      // Import from configuration object
      const agent2 = Agent.import({
      name: "Imported Agent",
      model: "gpt-4"
      });

      // Import with overrides
      const agent3 = Agent.import('./base-agent.smyth', {
      name: "Customized Agent",
      behavior: "Custom behavior override"
      });