AI Full Document Only?

This really limits the use of AI if you cannot add elements and can only create full pages. So you can no longer add elements to a section or an already built-out page? It has to be a full page?

When I try to add let's say a row of cards to my design, it wipes out my entire design. Not sure about this change. I liked it better before.


You can add new sections on a page perfectly fine. It's just that the AI now analyzes the whole page instead of just the selected part of the page.

Well, then there seems to be a bug. It currently wipes out the entire page and only displays what you had in the prompt. Maybe I'm just not using it right. I will do some more playing.

The full page is given to the AI for context so it knows what is included.

Then it suggests one or more changes or additions to implement your request.


Just tell the AI Assistant: "add a row of cards after this or before that". You just need to describe what you need added and where - AI likes clear prompts.


Seems I have lots to learn about AI before using this in production. :beers:

Great job guys!

You simply need to use natural language and explain the details of what you need on the page. Example:


Worth reading/viewing some documentation/videos on prompting an LLM for best results. A good way to start learning about prompting, @brad.

You could even ask a web based LLM to generate prompts for you to begin with.

Couldn't Wappler send the selected section's line numbers in the prompt?

Would both ways be an option, send only a section of the page or the whole page?

I haven't used Copilot or Claude, but I know I constantly fight with ChatGPT to ensure it gives me back the entire code I gave it. A lot of the time it will leave comments like

onUpdate: ({ editor }) => {
  // ... your update logic ...
},

where there should be 100 lines of code.

It tends to do this more with larger code bases.

Dear @George, the problem is with larger pages. I'm working at the moment with a 2,790-line page, and as soon as I started working on it I began having problems. A few minutes ago I received an email from GitHub saying they had deactivated my Copilot for suspected bulk activity or rate-limit violations. I didn't use Copilot that much; I was just asking it to make a part of the page more appealing. But it seems that sending the full page every time is causing problems. And not only that: in Beta 18 we can select the exact part we need to work on. It could be interesting to think about a solution that keeps the full context but also sends just the small piece of the page.


We will be adding more options to limit what is sent to Copilot.


Seems like Copilot has undocumented limits of a maximum of 8k tokens to enforce fair usage, as they don't charge per token.

So we should enforce those limits and also optimize the way we send HTML as context, because in normal text a token is roughly a word, but in HTML all the special characters like < and > also count as tokens.

So we should be sending the HTML code in a more Emmet-like way to preserve tokens.

You can check how many tokens your page is on:
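If you just want a ballpark figure without an external tool, here is a rough heuristic sketch. The assumptions are mine: roughly four characters per token for plain English prose, and one token per markup character, since GPT-style tokenizers tend to split on characters like `<`, `>`, and `"`. For exact counts, use a real tokenizer such as OpenAI's tiktoken.

```javascript
// Rough, assumption-laden token estimate for an HTML page.
// Not an exact count -- real tokenizers behave differently.
function estimateTokens(text) {
  // Markup punctuation usually tokenizes one character at a time.
  const markupChars = (text.match(/[<>"'=\/]/g) || []).length;
  // Everything else averages out to roughly 4 characters per token.
  const rest = text.length - markupChars;
  return markupChars + Math.ceil(rest / 4);
}

console.log(estimateTokens('hello world'));              // -> 3
console.log(estimateTokens('<div class="row"></div>'));  // -> 12
```

Note how the HTML string, despite being only about twice as long as the prose, estimates at four times the tokens; that is exactly why sending raw markup as context is so expensive.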

As we also add support for OpenRouter, you will have more options for using cheaper but just as good models, without such limits and with larger context sizes.

Wow, sending the whole page to AI sure racks up the tokens unnecessarily. What was the reason for this change?

My simple login page is almost 10k tokens not including a prompt. It's no wonder I ran out of tokens in about 4-5 test prompts.


Well, to make the AI smarter we needed to provide more context about what is already on your page.

But indeed depending on what you need to do that might not be necessary. So we will be optimizing that.

And GitHub Copilot never publicly mentioned any limits ...


When you add OpenRouter support, also give the option of sending the full page for usage with smaller models like the one @Cheese is fond of, Qwen 2.5 32B Instruct.


I was just going to suggest something similar, @Apple. Would be great to see OpenRouter being used, as it offers so much more control and no limits. And models like Qwen are far more economical, so you don't have to worry about the cost so much. It is fast, very good, and for simple generation of Bootstrap and layouts it smashes it out of the park.

However, and I'm not knocking the recent updates, I would like to see some concentrated effort on clearing up some recent bug reports before going all in on AI features. It feels like AI integration is overshadowing those for the moment, whereas the priority should really lie with fixing what is already broken or causing issues. Like I said, I'm not being ungrateful, as I truly appreciate the hard work and effort involved by the Team! I know myself that when I get my teeth into something new it is easy to put other 'more' important things on the back burner, so to speak...