Allow 'streaming' functionality for Wappler (specific to AI chat use-case)

Hi,

I've implemented something custom here to make the conversation feel like a natural chat to the user, but Wappler Server Connect doesn't support this (to the best of my knowledge and attempts, anyway), even with custom extensions, as it always outputs a single response.

This isn't about Wappler building AI itself, as I know that's not a priority, but tools to help us build AI applications would be advantageous. This is a good open-source Node package:

With NodeJS, we implemented response streaming using sockets in Wappler. It works quite well.

Interesting timing. I am just about to add DeepSeek to the Wappler all-in-one AI node extension and have been thinking about the same issue and how to add that functionality.

Can't share the custom extension, but here's an extract. It uses LangChain to support multiple models.

```js
// Start the LLM stream (LangChain model interface)
const stream = await llmModel.stream(sysPrompt);

let res = ""; // Accumulates the full response

// Iterate over the stream and relay each token to the client's room
for await (const event of stream) {
  if (event.content != undefined) {
    res += event.content;
    global.io.to(roomName).emit("send_answer", {
      query_id: queryID,
      stream_id: streamID,
      message_token: event.content,
    });
  }
}

// Stream ended, emit end answer event
global.io.to(roomName).emit("end_answer", { stream_id: streamID });
```

On the client side, the returned HTML/Markdown data is appended to the div that shows the response.
You can use AI to put all this together. :sweat_smile:
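To illustrate the client side, here is a minimal sketch of how the streamed tokens could be accumulated per stream id and rendered. The `socket.on` wiring and the `answer` div are assumptions about the page setup (the original post doesn't show them), so they are left as comments; the accumulator itself is plain JavaScript.

```javascript
// Pure accumulator for streamed tokens, keyed by stream id.
function makeAccumulator() {
  const buffers = new Map();
  return {
    // Append a token and return the text accumulated so far.
    append(streamId, token) {
      const text = (buffers.get(streamId) || "") + token;
      buffers.set(streamId, text);
      return text;
    },
    // Return the full text and forget the stream.
    finish(streamId) {
      const text = buffers.get(streamId) || "";
      buffers.delete(streamId);
      return text;
    },
  };
}

const answers = makeAccumulator();

// Hypothetical wiring with socket.io-client and an "answer" div
// (shown as comments so the snippet runs outside the browser):
// socket.on("send_answer", ({ stream_id, message_token }) => {
//   document.getElementById("answer").innerHTML =
//     answers.append(stream_id, message_token);
// });
// socket.on("end_answer", ({ stream_id }) => {
//   console.log("full answer:", answers.finish(stream_id));
// });
```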


Another solution is to use res.write() on the server to send the data in chunks to the client, and the fetch API on the client to read the chunks as they arrive.

Here's a tutorial:
