Is there a list of tests and results the Wappler team can share that shows what the AI Assistant in v7 is capable of? As v7 approaches "release candidate", I'm more inclined to start using it IF the AI Assistant is more accurate than other available AIs. For example, can it successfully (1) create nested repeats, (2) build page flows, and (3) pass data between dmx.app and JavaScript? I'm sure there are plenty of more complex Wappler actions to be considered and tested.
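On point (3), just to make the question concrete, the kind of bridging I have in mind is something like the sketch below. It's hand-written, not AI output; the component id and values are made up, and the only API I'm relying on is App Connect's dmx.parse, which lets plain JavaScript evaluate a binding expression or call a component action such as setValue.

```html
<!-- App Connect variable (id is a made-up example) -->
<dmx-value id="var_total" dmx-bind:value="0"></dmx-value>

<script>
  // Read the variable's current value from plain JavaScript
  // by evaluating an App Connect binding expression.
  var total = dmx.parse('var_total.value');
  console.log('current total:', total);

  // Push a value back into App Connect by calling the
  // variable's setValue action through the same helper.
  dmx.parse('var_total.setValue(' + JSON.stringify(total + 1) + ')');
</script>
```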
For Wappler's AI Assistant to be truly beneficial, shouldn't it rise above the rest when it comes to Wappler-specific actions?
I tried it at various points through the betas, and it's honestly pretty bad.
On the Server Connect side, I've had it do things I can't even begin to explain. A couple of times, when I asked it to do really basic things, it ended up just putting code inside a Set Value step. Other times, it handles things that should be simple in completely bizarre ways.
The front end isn't much better either. I'd say Agent mode actually works about 30% of the time when applying HTML edits. It nearly always gives a diff error, and after retrying multiple times it will do one of the following:
- Generate a broken HTML structure
- Put the new HTML at the very top of the page instead of where it should be
- Completely wipe the page and leave only the edited HTML
I've never got the AI Manager to work, after trying it about six times; it always seems to get stuck. When it comes to editing HTML, I get far better results just pasting the code into ChatGPT and copying the result back in.
Even if it did work 100% of the time, I don't think I'd ever actually trust it on the Server Connect side. Letting AI create Server Connect actions seems like a recipe for disaster.
I honestly wish v7 had the features and updates people have been asking for. More than half of the features on the roadmap from a few years ago still haven't been implemented. It seems like this was just done so AI can be plastered all over the homepage when v7 releases, like every other app is doing.
It seems that some individuals are more fortunate than others. While there have been missteps, the majority of my instructions have yielded highly satisfactory results.
Let's not overlook that AI is learning from me just as I am learning from it.
Maybe provide some examples of what you tried, so that we can check and improve it. AI is something we will keep improving in future updates.
Which model are you using?
Note that the prompt is also very important. If your prompt for the AI Manager is something like "build the next Instagram" or "create a Trello clone", the results won't be great.
Does this mean the Wappler team has higher expectations for its AI Assistant than other available models? Is there a series of tests and results the team can share with us?
TBH, I've yet to download v7 and see what the AI Assistant is actually capable of. Are you hosting a model specifically trained on Wappler components and syntax?
We’re not hosting any model; you can select any model of your choice. We provide very detailed instructions about Wappler, its components, structure, etc., which the AI models follow.
My experience so far is fairly limited - mainly using the AI Assistant client-side - but the results are excellent and typically exceed expectations. I think the implementation is great.
For example, I've created several forms, listing the input names and required fields. The distribution of inputs within rows was excellent (for different viewport sizes); I didn't specify this. I added a regex pattern for the first input and then asked the AI Assistant to apply similar validations to the remaining inputs. It did this, modifying the regex for number and textarea inputs. It also customised the validation messages.
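For anyone curious what that looks like in practice, the markup was roughly along these lines. This is my own simplified reconstruction rather than the assistant's literal output, using standard Bootstrap classes and an HTML5 pattern attribute; the field name and regex are just examples.

```html
<div class="row">
  <div class="col-md-6">
    <div class="form-floating mb-3">
      <!-- Required text input with a regex pattern; the same idea was
           then applied, with adjusted rules, to the other inputs. -->
      <input type="text" class="form-control" id="inp_postcode" name="postcode"
             placeholder="Postcode" required
             pattern="^[A-Za-z]{1,2}\d[A-Za-z\d]? ?\d[A-Za-z]{2}$"
             title="Please enter a valid postcode">
      <label for="inp_postcode">Postcode</label>
    </div>
  </div>
</div>
```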
One of the mysterious things about the AI chatbots is how responses vary. I created another form, again with excellent results. However, whereas the first form was created with floating labels, the second wasn't (I hadn't asked for them in either case). This wasn't a problem - I just asked the assistant to use floating labels.
What I often find most time-consuming is CSS, animations, etc. The AI Assistant works extremely well for this; it's like having @ben on speed dial.
The AI Assistant has never put any code in the wrong place, but I do specify the target element precisely, assigning an ID if there isn't one, although that's probably not necessary.
As I say, I've only tried fairly basic things so far, but it's already a great time saver and I'm confident it will be increasingly so.