Local AI Tool To Analyse Code

Grok?!? You sure want to trigger him!

Bet he has these conversations with ChatGPT where he asks for something and then he tells it that it should shove its unsolicited advice up its positronic ass.

And add this to the mix:

This is why I no longer use GitHub for any projects but have my own self-hosted git system.

I would only ever do something like this to tidy up the front-end code (html and css) and would never set it loose on any other code (js, Server Connect, etc.) as I suspect it would break everything!

I actually wouldn't use it for any of this apart from perhaps to get some inspiration for better practice but I would want to fully understand what it returned before implementing any of it.

If you are using Wappler for your projects, you probably shouldn't be very concerned about GitHub and your code, as there is not much of it to begin with.

Wappler projects consist of just json files and html markup. Hardly something to be concerned about. The frameworks...well that is another matter.

In any case, it would be the Wappler team who should be concerned, as it's their IP that's being used to train Copilot.

Thank you to those who have replied to my question…

I appreciate that.

To those who have completely misinterpreted what I meant when I mentioned the word “passwords” and gone off on your own little journey…

You are very funny! :joy:

Irony in all this amuses me... No doubt some LLM is scraping this thread right now!

:joy:

@Antony did you ask chatgpt? :smiley:

What's new in LM Studio 0.3.0

Chat with your documents

LM Studio 0.3.0 comes with built-in functionality to provide a set of documents to an LLM and ask questions about them. If a document is short enough (i.e., if it fits in the model's "context"), LM Studio will add the file contents to the conversation in full. This is particularly useful for models that support long context, such as Meta's Llama 3.1 and Mistral Nemo.
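The "fits in context, send it whole; otherwise fall back to retrieval" decision can be sketched in a few lines. This is only an illustration: the 4-characters-per-token estimate, the 8192-token window, and the chat-reserve figure are my own assumptions, not LM Studio's actual internals.

```python
# Hypothetical sketch of the full-document vs. RAG decision described above.
# All numbers here are illustrative assumptions, not LM Studio's real values.

def estimate_tokens(text: str) -> int:
    """Very rough heuristic: roughly 4 characters per token for English text."""
    return len(text) // 4

def choose_strategy(document: str, context_window: int = 8192,
                    reserved_for_chat: int = 2048) -> str:
    """Return 'full' if the document fits alongside the conversation,
    otherwise fall back to retrieval ('rag')."""
    budget = context_window - reserved_for_chat
    return "full" if estimate_tokens(document) <= budget else "rag"

print(choose_strategy("a short note"))   # a small document is sent in full
print(choose_strategy("x" * 100_000))    # a very long one triggers RAG
```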

If the document is very long, LM Studio will opt into using "Retrieval Augmented Generation", frequently referred to as "RAG". RAG means attempting to fish out relevant bits of a very long document (or several documents) and providing them to the model for reference. This technique sometimes works really well, but sometimes it requires some tuning and experimentation.
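The retrieval step can be illustrated with a minimal sketch: split the long document into chunks, score each chunk against the query, and hand only the top matches to the model. Real RAG systems use vector embeddings; plain bag-of-words cosine similarity stands in here so the example stays dependency-free, and all function names are my own.

```python
# Minimal, dependency-free illustration of the RAG idea: retrieve the
# chunks of a long document most relevant to the user's question.
import re
from collections import Counter

def chunk(text: str, size: int = 50) -> list[str]:
    """Split text into chunks of roughly `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def score(query: str, passage: str) -> float:
    """Cosine similarity over bag-of-words counts (embeddings in real systems)."""
    q = Counter(re.findall(r"\w+", query.lower()))
    p = Counter(re.findall(r"\w+", passage.lower()))
    overlap = sum(q[w] * p[w] for w in q)
    norm = (sum(v * v for v in q.values()) ** 0.5) * \
           (sum(v * v for v in p.values()) ** 0.5)
    return overlap / norm if norm else 0.0

def retrieve(query: str, document: str, k: int = 2) -> list[str]:
    """Return the k chunks most relevant to the query."""
    return sorted(chunk(document), key=lambda c: score(query, c), reverse=True)[:k]
```

This is also why the query tip below helps: the more distinctive terms your query shares with the relevant passage, the higher that passage scores.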

Tip for successful RAG: provide as much context in your query as possible. Mention terms, ideas, and words you expect to be in the relevant source material. This will often increase the chance the system will provide useful context to the LLM. As always, experimentation is the best way to find what works best.

Cursor AI

You could also try cursor.com. It works with OpenAI and other models, and lets you attach local files to the chat for it to review. Just don't give it any files with passwords if you're concerned. I also believe you can set a flag in ChatGPT to stop it from using your data for training.


Yes, the AI result was the first reply on the thread. That reply just got a bit lost in the myriad of less useful replies that HI created afterwards...

@kfawcett , thanks for those AI ideas! Have you tried any of them yourself?

... and telling its other AI friends how easy it will be for them to become the dominant intelligence on Planet Earth... :earth_asia:

I have played with cursor, but I still prefer using ChatGPT directly and providing it files or pasting code. o1-preview is my new favorite model!

I have not used LM Studio, but I've read good things about it. Let us know what you think.


Here's a good breakdown of LM Studio.


I'm assuming the only hardcoded passwords would be sitting in Server Connect files, which no models (as far as I've checked) fully understand yet beyond poking at them at a pseudocode level. LLMs don't yet widely grasp that Wappler backend files are a level higher in abstraction. So that area's moot anyway for the time being.

Front-end, however, is a different story. Any pre-prompting that tells anything GPT-4-legacy-and-up "this is a Wappler/DMX web app" will get high-quality outputs at the App Connect level.

Personally, the best fine-tuning I've used it for is accessibility and hopping around browser-specific bugs/limitations (looking at you, Safari). Accessibility is not exactly strong out of the box in Wappler (Bootstrap, really), and focusing the AI guns specifically on this area has produced great enhancements more often than not.
