Chat Provider - Custom

Thanks, Wappler Team, for listening to your users.
We now have a Custom Chat Provider, but it is not working yet: we need to select a model, and the models list is empty.
I think this is because the model selection has to be populated from the endpoint:
BaseURL/api/v1/models
So the conclusion is that you either need to add an input field for a Models Endpoint variable, or handle BaseURL/models directly.
Or, alternatively:

Add an input field for the Models selector where the model IDs can be entered manually, e.g. anthropic/claude-3.5-sonnet, gryphe/mythomax-l2-13b,
so that they can be used as
models: ['anthropic/claude-3.5-sonnet', 'gryphe/mythomax-l2-13b']
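Something along these lines is the behaviour I have in mind, as a hypothetical sketch only (the function and parameter names are made up, this is not Wappler's actual code): use the manually entered list whenever the provider's models endpoint cannot be queried, otherwise fetch it from ${baseURL}/models.

import requests

# Hypothetical sketch of the requested behaviour, not Wappler's implementation:
# prefer a manually entered list of model ids, otherwise ask ${baseURL}/models.
def list_models(base_url, manual_models=None):
    if manual_models:
        return manual_models
    resp = requests.get(f"{base_url}/models")
    resp.raise_for_status()
    return [m["id"] for m in resp.json()["data"]]

# Example with a manual list (placeholder base URL):
# list_models("https://example.com/api/v1",
#             manual_models=["anthropic/claude-3.5-sonnet", "gryphe/mythomax-l2-13b"])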

The custom chat provider should point to an endpoint that is compatible with the OpenAI API. The OpenAI API documentation also describes the models endpoint.

The endpoints that are used are ${baseURL}/chat/completions for the chat and ${baseURL}/models to get the models list.
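If you want to check an endpoint outside of Wappler first, a minimal test of both calls could look like this (the base URL and key below are placeholders, not Wappler settings):

import requests

BASE_URL = "https://example.com/v1"  # placeholder: your OpenAI-compatible base URL
API_KEY = "sk-..."                   # placeholder: drop the header if no key is needed
headers = {"Authorization": f"Bearer {API_KEY}"}

# 1. ${baseURL}/models is what fills the model dropdown
models = requests.get(f"{BASE_URL}/models", headers=headers).json()
model_ids = [m["id"] for m in models["data"]]
print("Available models:", model_ids)

# 2. ${baseURL}/chat/completions is the actual chat call
payload = {
    "model": model_ids[0],
    "messages": [{"role": "user", "content": "Hello!"}],
}
reply = requests.post(f"{BASE_URL}/chat/completions", headers=headers, json=payload).json()
print(reply["choices"][0]["message"]["content"])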

1 Like

Thank you.
My ${baseURL} endpoint is compatible with the OpenAI API, and the following is correct for it:

The endpoints that are used are ${baseURL}/chat/completions for the chat and ${baseURL}/models to get the models list.

An example of a response from ${baseURL}/models:

{
  "data": [
    {
      "id": "openai/gpt-3.5-turbo",
      "name": "OpenAI: GPT-3.5 Turbo",
      "pricing": {
        "prompt": "0.0",
        "completion": "0.0"
      },
      "context_length": "4095",
      "chat_na_msg": ""
    },
    {
      "id": "openai/gpt-3.5-turbo-1106",
      "name": "OpenAI: GPT-3.5 Turbo 16k (11-06)",
      "pricing": {
        "prompt": "0.0",
        "completion": "0.0"
      },
      "context_length": "16385",
      "chat_na_msg": ""
    },
    {
      "id": "openai/gpt-3.5-turbo-16k",
      "name": "OpenAI: GPT-3.5 Turbo 16k",
      "pricing": {
        "prompt": "0.0",
        "completion": "0.0"
      },
      "context_length": "16383",
      "chat_na_msg": ""
    },
    {
      "id": "openai/gpt-3.5-turbo-0125",
      "name": "OpenAI: GPT-3.5 Turbo 16k (01-25)",
      "pricing": {
        "prompt": "0.0",
        "completion": "0.0"
      },
      "context_length": "16385",
      "chat_na_msg": ""
    },
    {
      "id": "openai/gpt-4",
      "name": "OpenAI: GPT-4",
      "pricing": {
        "prompt": "0.0",
        "completion": "0"
      },
      "context_length": "8191",
      "chat_na_msg": ""
    }
  ]
}

Works correctly with:
Cline / Open WebUI / AnythingLLM / Luna Translator / Visual chatGPT Studio / Cherry Studio / OpenHands / Cursor IDE / Cody (VSCode, JetBrains) / Obsidian Copilot / Make.com / n8n / Browser Brave and LeoAI / and others
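For comparison, the canonical OpenAI GET /v1/models response (shown here only as a reference to the OpenAI docs, not as a statement about what Wappler requires) wraps the list in the same data array but with different per-model fields:

{
  "object": "list",
  "data": [
    {
      "id": "gpt-3.5-turbo",
      "object": "model",
      "created": 1686935002,
      "owned_by": "openai"
    }
  ]
}

Both shapes expose the model id inside a data array, which is the part a client needs to build a model list.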

Hi everyone,

I’m trying to use the new AI integration in Wappler (version 7b29) and I’m having trouble connecting to both a local LM Studio instance and the Groq API. In both cases, no models are shown in the dropdown selector, even though the /models endpoint works perfectly when tested outside of Wappler.

Here’s what I’ve tried so far:

With LM Studio on my local network:

  • Base URL used: http://192.168.0.xx:xxxx/v1
  • I tested the /models endpoint using curl from bash: curl http://192.168.0.xx:xxxx/v1/models
    and it returned a valid JSON with a full list of available models (e.g. meta-llama-3-8b-instruct, phi-3-mini-4k-instruct, etc.).
  • No API Key is required.
  • Despite this, Wappler does not populate the model list.

With the Groq API:

I also tried changing the base URL to include /models (e.g. .../v1/models) just in case.
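As a reference for reproducing the check outside Wappler, listing Groq's models can be done like this (assuming Groq's documented OpenAI-compatible base URL; the environment variable name is just an example):

import os
import requests

# Assumption: Groq's OpenAI-compatible base URL as documented by Groq.
BASE_URL = "https://api.groq.com/openai/v1"
headers = {"Authorization": f"Bearer {os.environ['GROQ_API_KEY']}"}

# The same ${baseURL}/models request that should populate the dropdown
resp = requests.get(f"{BASE_URL}/models", headers=headers)
resp.raise_for_status()
print([m["id"] for m in resp.json()["data"]])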

So it seems that both endpoints respond correctly outside Wappler, but Wappler does not display or detect any models from either provider.

Has anyone experienced this? Is there any workaround or fix for Wappler not recognizing valid models from /models?

Thanks in advance.

1 Like

There is a bug in the custom provider not passing the options correctly. I've tested it with Groq, and it will be fixed in the next update.

3 Likes

Thank you @patrick, all works well!