Copilot has been terrible for the past couple of weeks using Claude 3.7, repeatedly hitting rate limits (falsely), which is documented on their GitHub page (as recently as two hours ago, another complaint). So if, like me, this really piss*s you off, then switch to Claude 3.5 (which seems to do a better job than 3.7 anyway). Actually it did my head in so much I went back to OpenRouter, but thought I'd share this nonetheless. Copilot seems very hit and miss with its results and its process, whereas OpenRouter is always well defined and consistently structured... Seems to me there's a lot of open experimentation going on with Copilot, using its users as guinea pigs. Suppose it is what it is, eh.
Hey Cheese, have you tried using Groq with Wappler? I’ve been experimenting with it lately and it’s been way more stable than Copilot, especially in terms of response time and consistency.
Maverick (LLaMA 3) runs incredibly fast, no weird rate limits, and you get 300 requests per minute and 10,000 per day on the free plan via Groq. That’s been more than enough for me when building and testing stuff inside Wappler.
It’s been a lot more predictable and coding-friendly than Claude 3.7 in my experience. Might be worth checking out if Copilot’s recent issues keep getting in the way.
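If rate limits ever do bite on the free tier, a simple retry wrapper keeps things moving. This is just a minimal sketch, assuming your client raises some 429-style exception (the `RateLimitError` class here is a stand-in, e.g. for `openai.RateLimitError` when pointed at Groq's OpenAI-compatible endpoint):

```python
import time

class RateLimitError(Exception):
    """Stand-in for the 429 error your client raises when you hit the
    per-minute request cap."""

def with_backoff(call, max_retries=5, base=1.0):
    """Retry `call` with exponential backoff on rate-limit errors.
    A per-minute cap (like Groq's ~300 req/min) usually clears in seconds."""
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            time.sleep(min(base * 2 ** attempt, 30))  # 1s, 2s, 4s, ... capped at 30s
    raise RuntimeError("still rate-limited after %d retries" % max_retries)
```

Wrap whatever makes the actual API request in `with_backoff` and transient 429s stop being fatal.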
Firstly, thank you, and evening Max. Truly appreciate that information. I've not actually used the Wappler implementation at all aside from a brief test a couple of betas back, when I hit a bug (which was promptly squashed and fixed by the team). Another admission: I've not used Groq whatsoever yet. I've only recently used Claude again because of the Copilot subscription the company is paying for; otherwise I gave up on it a few months back after its persistent, irritating obsession with React. Only over the past couple of weeks have I returned to using anything from Anthropic. I'm a huge fan of DeepSeek and Qwen, though, and am willing to give Groq a go based on your recommendation above. I'll also give the Wappler AI integration a good going-over, but record it live as I make my way through it as a newcomer to Wappler AI, which should give an amusing and honest first look through the eyes of a user who hasn't yet investigated the features or functionality. Should be fun and eye-opening. So when I do this I may well use Groq for the video.
Any other tips or tricks are really welcome Max! Thank you.
Totally get you about Copilot, and yeah, makes complete sense to use it if your company’s already invested in it. I do think Claude is a solid tool, but like you said, it tends to burn through tokens pretty fast and the context window still feels a bit limited.
Personally, I love experimenting with LLMs that you can self-host or run locally. Right now I’ve got LM Studio running on a PC with an i7, 32GB of RAM and an old 4070 GPU. I’ve tested quite a few models, and Mistral and LLaMA have been the most capable when it comes to more complex tasks.
Haven’t tried Qwen yet, but I definitely will now after your recommendation. As for DeepSeek, I’ve found it a bit basic in some situations, but overall it’s very solid.
I think it’s a great idea to record a video while testing out alternatives. I’m actually holding off on doing a proper post until Wappler’s AI integration gets a bit more stable. Then I’d like to share some prompts and post results with screenshots of what each model can generate.
Btw, the other day one of the AIs completely rewrote my style.css file. I had to roll it back through Git. So yeah, AI can be a huge help both on frontend and backend, but it can also trigger new headaches and unexpected bugs. Feels like knocking over a domino and not knowing what else is going to fall.
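For anyone else caught out the same way: as long as the file was committed before the AI touched it, you can restore just that one file from the last commit without rolling back anything else in the working tree:

```shell
# Roll back only the file the AI mangled, leaving all other changes alone.
# Requires Git 2.23+; on older versions use `git checkout -- style.css`.
git restore style.css
```

Much less drastic than a full `git reset`, which would throw away every other uncommitted change alongside the bad one.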
Thanks again for sharing your experience, feels like we’re all in the same boat. Really appreciate it.