SmythOS SDK

    Type Alias TLLMInstanceFactory

    TLLMInstanceFactory: {
        (modelId: string, modelParams?: TLLMInstanceParams): LLMInstance;
        (
            modelParams: TLLMInstanceParams & {
                model: string | TLLMModel | TCustomLLMModel;
            },
        ): LLMInstance;
    }

    LLM instance factory functions for each LLM provider.

    Supported calling patterns:

    • LLM.provider(modelId, modelParams) - specify the model ID and optional parameters
    • LLM.provider(modelParams) - specify the model ID within the modelParams object

    Type declaration

      • (modelId: string, modelParams?: TLLMInstanceParams): LLMInstance
      • Create an LLM instance with an explicit model ID and optional parameters.

        Parameters

        • modelId: string

          The model identifier (e.g., 'gpt-4', 'claude-3-sonnet')

        • Optional modelParams: TLLMInstanceParams

          Optional model parameters (temperature, maxTokens, etc.)

        Returns LLMInstance

        LLM instance ready for use

      • (
            modelParams: TLLMInstanceParams & {
                model: string | TLLMModel | TCustomLLMModel;
            },
        ): LLMInstance
      • Create an LLM instance from a parameters object that includes the model ID.

        Parameters

        • modelParams: TLLMInstanceParams & { model: string | TLLMModel | TCustomLLMModel }

          Model parameters including the required model field

        Returns LLMInstance

        LLM instance ready for use

    Example

    // Pattern 1: Explicit model ID
    const llm1 = LLM.openai('gpt-4', { temperature: 0.7 });
    const response1 = await llm1.prompt("Hello!");

    // Pattern 2: Model ID in params
    const llm2 = LLM.openai({ model: 'gpt-4', temperature: 0.7 });
    const response2 = await llm2.prompt("Hello!");
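
    For a standalone script, the factory also needs to be imported. The following is a minimal sketch: the import path '@smythos/sdk' and the async wrapper are assumptions, while the calls themselves (LLM.openai, prompt, temperature, maxTokens) are the ones documented above.

    // Assumption: LLM is exported from the SDK's main entry point.
    import { LLM } from '@smythos/sdk';

    async function main() {
        // Pattern 2, with additional instance parameters (temperature, maxTokens)
        const llm = LLM.openai({ model: 'gpt-4', temperature: 0.7, maxTokens: 512 });
        const answer = await llm.prompt('Summarize the two calling patterns in one sentence.');
        console.log(answer);
    }

    main().catch(console.error);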