Using Agent Mode in Wappler's AI Assistant

It is 20 dollars a month now, George. The reason I suggest OpenRouter is that if you don't use AI, you won't be paying a fixed monthly fee and your credit will remain as you left it. Always nice to have the choice though, which I'm sure we all very much appreciate.


EDIT: I was wrong! That price is for the Windows integration, not the coding companion, which, as George rightly states below, is 10 dollars, not 20.

I’m talking about GitHub Copilot for coding, not Microsoft Windows Copilot to run Office macros for me :slight_smile:

See:


Errggghhhh Windoooze...

:joy:

Sorry for the confusion @baub!! My bad @George. I do apologise.


Does AI agent mode in Server Actions have access to current DB(s) structure?

Yes, in Server Connect the AI agent can fetch your database connections, table lists and specific table schemas from the Database Manager, fully on demand, depending on what it needs when you tell it to use them.
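
For readers curious how such on-demand fetching is typically wired up, here is a minimal sketch of an LLM tool (function-calling) definition in TypeScript. The tool name, parameter shape and handler are hypothetical illustrations of the general pattern, not Wappler's actual internals:

```typescript
// Hypothetical tool definition in the common function-calling style.
// Names and shapes are illustrative, not Wappler's implementation.
const getTableSchemaTool = {
  type: "function" as const,
  function: {
    name: "get_table_schema", // assumed tool name
    description: "Fetch column definitions for one database table",
    parameters: {
      type: "object",
      properties: {
        connection: { type: "string", description: "Database connection name" },
        table: { type: "string", description: "Table to describe" },
      },
      required: ["connection", "table"],
    },
  },
};

// Hypothetical stand-in for a Database Manager lookup.
async function fetchSchemaFromDatabaseManager(connection: string, table: string) {
  return { connection, table, columns: [{ name: "id", type: "integer" }] }; // stub data
}

// The agent loop invokes the handler only when the model requests the tool,
// then feeds the result back into the conversation as extra context.
async function handleToolCall(name: string, args: { connection: string; table: string }) {
  if (name === "get_table_schema") {
    return fetchSchemaFromDatabaseManager(args.connection, args.table);
  }
  throw new Error(`Unknown tool: ${name}`);
}
```

Because the model asks for the tool only when it decides it needs the schema, context is spent on database details only when they are actually relevant.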


@George Can we have the models listed alphabetically in the next update please, and if possible a search input? Asking for a friend...

Also, what is the 'base' model for Copilot? It's one of those searches that doesn't seem to get a definitive answer. Some say Codex (GPT-3), others GPT-4 Turbo, and GPT-4 comes up too?

:nerd_face:


OK, after lots of reading through the documentation on GitHub, I found the answer hidden away. Why it is so difficult to find the answer to my earlier question is absolutely beyond me. If anyone else is interested, the 'base' model (unlimited requests) for Pro users is GPT-4o (previously GPT-3.5 Turbo).

This information can be found on the below documentation page:


The placement of the AI button in Server Actions is not ideal; I always have that panel hidden.


I'm the same; that panel takes up way too much space, so it's hidden 100% of the time. It'd be great to see the AI button available elsewhere, visible all the time without being inside a collapsible menu item.

So which of the models is best trained on Wappler? And why would I choose different ones? Wouldn't I just want to use the best Wappler-trained one, or are they all trained the same?

Going by Teodor's screenshots above, he appears to have Claude 3.7 selected.


I can only assume all models follow the same 'training' data, but that is for the team to answer...

Each has its own strengths and weaknesses. There are many websites that track these, such as the LLM Leaderboard:

Scroll down to LLM Rankings to see a comparison of all current LLMs and their respective skill sets.

We provide the same Wappler knowledge instructions about App Connect and Server Connect to all models, so it really depends on how smart the model is and how good its coding and logical reasoning are.

Currently the models that perform best in Wappler, especially in Agent mode, are Claude 3.7/3.5 Sonnet and the new GPT-4.1.

So we advise you to use those. Claude can be a little slower to respond but gives great explanations and code, while the new GPT-4.1 is very fast and also gives great solutions, it just explains less.


ChatGPT-4o (the updated model) seems to know Wappler and the App Connect framework quite well.


So this assistant isn't aware of the entire project yet, only the one page opened in the editor?

This is actually really useful information. Thanks


Yes, the AI assistant and the models used have a limited context length they can act on. We supply instructions for App Connect or Server Connect depending on the editor, and also on the type of the project. You can include the current page and selection, but it doesn't know about the other files, indeed.

It would also be way too much context to handle and far too many tokens.
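
To make the token constraint concrete, here is a minimal sketch of the kind of budget check an assistant might run before including files in a prompt. The ~4 characters-per-token ratio is a rough heuristic and all names are illustrative, not Wappler's actual logic:

```typescript
// Rough token estimate: ~4 characters per token is a common heuristic for
// English text and code (the exact count depends on the model's tokenizer).
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4);
}

// Greedily include files until the remaining context budget is exhausted.
function selectFilesForPrompt(
  files: { path: string; content: string }[],
  budgetTokens: number
): { path: string; content: string }[] {
  const selected: { path: string; content: string }[] = [];
  let used = 0;
  for (const file of files) {
    const cost = estimateTokens(file.content);
    if (used + cost > budgetTokens) break; // stop before overflowing the budget
    selected.push(file);
    used += cost;
  }
  return selected;
}
```

A single page plus instructions fits comfortably in such a budget, while every file of a project would quickly exhaust it, which is exactly the limitation described above.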

Soon we will also introduce a project-wide AI manager that will be more aware of your whole project organization and will delegate work to the dedicated editor-based AI assistants.

It will keep a summary description (memory bank) of the files it processes, so it will be more aware of your project files and can act accordingly on your project-wide requests.

It will also load and use your own project description, goals and organization rules so it can act accordingly.
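
Until that ships, the "memory bank" idea can be pictured as a store of short per-file summaries that stands in for full file contents in the prompt. This is only a sketch of the general concept; the names and structure are assumptions, not Wappler's design:

```typescript
// Minimal sketch of a memory bank: per-file summaries kept between requests
// so a manager can reason about the project without re-reading every file.
interface FileMemory {
  path: string;
  summary: string;     // short, model-generated description of the file
  lastUpdated: number; // timestamp to detect stale entries
}

class MemoryBank {
  private entries = new Map<string, FileMemory>();

  remember(path: string, summary: string): void {
    this.entries.set(path, { path, summary, lastUpdated: Date.now() });
  }

  // Build a compact project overview for the prompt instead of raw file contents.
  toPromptContext(): string {
    return [...this.entries.values()]
      .map((e) => `- ${e.path}: ${e.summary}`)
      .join("\n");
  }
}
```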


Any chance of using concepts that other applications like Cursor use?

Cursor (the AI-powered code editor based on VS Code) provides context about your project files to an LLM using a few different mechanisms:

  1. File System Awareness:

Cursor can scan your workspace and index project files. When you prompt the AI, it uses a context window to include relevant files—often the one you're working on plus others it deems necessary based on dependencies or imports.

  2. Selective Context Injection:
    Cursor intelligently selects which files or snippets to load into the prompt. It typically includes:

The current file you're editing.

Related files (e.g., imported modules, configuration files).

File summaries or embeddings if full files are too large.

Some versions allow you to manually specify which files to include for more control.

  3. Embeddings & Retrieval:

For larger projects, Cursor might use vector embeddings. It creates compressed representations of files and retrieves the most relevant parts based on your query, then includes those in the LLM's prompt (see the retrieval sketch after this list).

  4. Session Memory:

During a session, Cursor keeps track of your activity, including files opened, functions edited, and previous interactions with the AI. This builds a more cohesive context without needing to re-parse the entire project.

  5. LLM Prompt Engineering:

Cursor uses advanced prompt engineering to construct the prompt sent to the LLM. It might summarize parts of files, include file names and line numbers, or highlight recent changes.

If you're using a local model or API, Cursor can sometimes let you configure how much context to pass, or even inspect the raw prompts for transparency.
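
Here is a minimal sketch of the embeddings-and-retrieval idea from point 3: rank pre-computed file-chunk vectors by cosine similarity to a query vector and keep the top matches for the prompt. How the vectors are produced (an embeddings model or API) is assumed and out of scope, and this is the general technique rather than Cursor's actual code:

```typescript
// One chunk of a source file plus its pre-computed embedding vector.
interface EmbeddedChunk {
  path: string;
  text: string;
  vector: number[];
}

// Cosine similarity between two vectors of equal length.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Return the k chunks most similar to the query embedding; these are the
// snippets that would be injected into the LLM prompt.
function retrieveTopChunks(query: number[], chunks: EmbeddedChunk[], k: number): EmbeddedChunk[] {
  return [...chunks]
    .sort((x, y) => cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector))
    .slice(0, k);
}
```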


You can explicitly instruct Cursor’s LLM to search the codebase, and here’s how that typically works under the hood:

  1. Explicit Codebase Search Command:

Cursor allows commands like "search the codebase for all instances of X" or "find where function Y is used."

When you do this, Cursor uses a code-aware search (like ripgrep or similar) to locate relevant files/snippets.

The results of that search are then pulled into the LLM’s context window on demand (see the search sketch after this list).

  2. Dynamic Context Expansion:

After the initial search, Cursor dynamically expands the LLM’s prompt with:

Relevant file paths.

Code snippets from those files (possibly trimmed to fit the token limit).

Short summaries or headings around the found code for better grounding.

  3. Interactive Refinement:

You can then refine your instruction. For example, after a search, you can say:

"Refactor all of these functions."

"Generate tests for all methods in these files."

Cursor fetches additional context only as needed and keeps prior results in short-term memory.

  4. Behind the Scenes:

File Embeddings (Optional): Some setups support vector-based search across the codebase, allowing for semantic code search.

File Previews: You can view which files/snippets are being fed to the LLM.

Partial Loading: If files are large, Cursor may chunk them and only load the most relevant chunks based on your query.
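
As a rough illustration of the search command in point 1, here is a minimal Node.js/TypeScript sketch that shells out to ripgrep (`rg`) and formats the matches for injection into a prompt. It assumes the `rg` binary is installed and on the PATH; the details are illustrative, not Cursor's implementation:

```typescript
import { execFile } from "node:child_process";

// Run ripgrep over the project and return matching lines as "path:line: text".
function searchCodebase(pattern: string, projectDir: string): Promise<string[]> {
  return new Promise((resolve, reject) => {
    execFile(
      "rg",
      ["--line-number", "--no-heading", pattern, projectDir],
      (error, stdout) => {
        // rg exits with code 1 when there are simply no matches.
        if (error && error.code !== 1) {
          return reject(error);
        }
        resolve(stdout.split("\n").filter((line) => line.length > 0));
      }
    );
  });
}

// Example: pull usages of a symbol into the prompt, capped to stay within token limits.
async function buildSearchContext(pattern: string): Promise<string> {
  const matches = await searchCodebase(pattern, ".");
  return matches.slice(0, 50).join("\n");
}
```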

Example Flow:

  1. You: "Where is validateUserInput used in the codebase?"

  2. Cursor: Searches all project files → finds 3 instances → loads those snippets into the LLM.

  3. You: "Update all of them to also log the user ID."

  4. Cursor: Applies the LLM's edits to each usage, with the full context of each usage loaded.

Why It Matters:

This hybrid approach (LLM + local/project search) lets you leverage AI over large codebases without blowing past context/token limits.

You’re effectively steering the LLM, telling it what to include, search, or ignore.


A post was split to a new topic: Issues with AI Agent in Server Connect

Can you add xAI Grok Studio, which launched April 15th, to your list of models? It launched as part of xAI's product line, which shows a rapid maturation pace, with connected context and several exciting projects that include potential support for electric vehicles and space exploration on the near-term horizon. There is an outcome-based focus at its core that promises a robustness other models don't have to the extent xAI is developing it.

xAI's key data center (Colossus) now runs on more than 200,000 NVIDIA processors, with another 1,000,000 processors projected within the next several months to a year and funding apparently already secured for its next iteration.

There is information at xAI.com, including API access.

That link sends us to Steve Vai's website...