SmythOS SDK

    Namespace LLM

    Create standalone LLM provider instances; these can be used without agents.

    const openai = LLM.OpenAI('gpt-4o');
    const anthropic = LLM.Anthropic('claude-3-5-sonnet-20240620');
    const google = LLM.Google('gemini-2.0-flash-001');
    ...

    See below for the full list of available providers.

    //Prompt and get response
    const llm = LLM.OpenAI('gpt-4o');
    const response = await llm.prompt('Write a short story about a cat');

    //or use streaming
    const llm = LLM.OpenAI('gpt-4o');
    const streamEvents = await llm.prompt('Write a short story about a cat').stream();
    streamEvents.on(TLLMEvent.Content, (event) => {
        console.log(event);
    });
    streamEvents.on(TLLMEvent.End, () => {
        console.log('Stream ended');
    });
    streamEvents.on(TLLMEvent.Error, (error) => {
        console.error(error);
    });
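    The stream is consumed through event listeners. If you also want the complete text, you can accumulate the content chunks as they arrive. Below is a minimal, self-contained sketch of that accumulation pattern using Node's built-in EventEmitter as a stand-in for the SDK stream (the event names and payload shapes here are simplified assumptions, not the SDK's real ones):

    ```javascript
    const { EventEmitter } = require('events');

    // Stand-in for the SDK's stream object; the real one is returned by
    // llm.prompt(...).stream() and emits TLLMEvent.Content / End / Error.
    const streamEvents = new EventEmitter();

    let fullText = '';
    streamEvents.on('content', (chunk) => {
        fullText += chunk;       // accumulate incremental chunks
    });
    streamEvents.on('end', () => {
        console.log(fullText);   // complete response once streaming ends
    });

    // Simulate the provider emitting chunks
    streamEvents.emit('content', 'Once upon a time, ');
    streamEvents.emit('content', 'a cat napped in the sun.');
    streamEvents.emit('end');
    ```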

    The difference between direct prompting and chatting is that a chat session persists the conversation history across prompts.

    const llm = LLM.OpenAI('gpt-4o');
    const chat = await llm.chat();

    //Prompt and get response
    const response = await chat.prompt('Write a short story about a cat');

    //or use streaming
    const streamEvents = await chat.prompt('Write a short story about a cat').stream();
    streamEvents.on(TLLMEvent.Content, (event) => {
        console.log(event);
    });
    streamEvents.on(TLLMEvent.End, () => {
        console.log('Stream ended');
    });
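    Conceptually, a chat session persists state by retaining the message history and supplying it alongside each new prompt. The following self-contained sketch models that behavior; it illustrates the pattern only, and the ChatSession class and its fields are hypothetical, not the SDK's actual implementation:

    ```javascript
    // Minimal model of how a chat session persists conversation state:
    // each prompt appends to a history that is sent to the provider in full.
    class ChatSession {
        constructor() {
            this.history = [];
        }
        prompt(text) {
            this.history.push({ role: 'user', content: text });
            // A real session would call the provider with this.history here;
            // we fabricate a reply to show the context growing.
            const reply = `reply #${this.history.filter((m) => m.role === 'user').length}`;
            this.history.push({ role: 'assistant', content: reply });
            return reply;
        }
    }

    const chat = new ChatSession();
    chat.prompt('Write a short story about a cat');
    chat.prompt('Now make it rhyme');    // sees the first exchange too
    console.log(chat.history.length);    // 4 messages retained
    ```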

    By default, the SDK reads API keys from a vault file; the vault is configured when you initialize your project with the "sre" command-line tool.

    //Below are different ways to invoke an LLM without passing an API key

    //Using the model ID
    const llm = LLM.OpenAI('gpt-4o');
    const response = await llm.prompt('Write a short story about a cat');

    //Using the model params
    const llm = LLM.OpenAI({ model: 'gpt-4o' });
    const response = await llm.prompt('Write a short story about a cat');

    //Using the model ID with custom settings
    const llm = LLM.OpenAI('gpt-4o', { temperature: 0.5, maxTokens: 50 });
    const response = await llm.prompt('Write a short story about a cat');

    //Using the model params with custom settings
    const llm = LLM.OpenAI({ model: 'gpt-4o', temperature: 0.5, maxTokens: 50 });
    const response = await llm.prompt('Write a short story about a cat');

    If you don't want to use the vault file, or want to use a specific API key, you can pass the API key explicitly.


    //Using the model params with an API key
    const llm = LLM.OpenAI({
        model: 'gpt-4o',
        apiKey: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        temperature: 0.5,
        maxTokens: 50
    });
    const response = await llm.prompt('Write a short story about a cat');


    //Using the model ID with an API key
    const llm = LLM.OpenAI('gpt-4o', {
        apiKey: 'sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx',
        temperature: 0.5,
        maxTokens: 50
    });
    const response = await llm.prompt('Write a short story about a cat');

    Variables

    Anthropic
    Bedrock
    DeepSeek
    Echo
    GoogleAI
    Groq
    OpenAI
    Perplexity
    TogetherAI
    VertexAI
    xAI