What are you guys using as your personal junior developer?
Claude 3.5, GPT-4o, Codestral, DeepSeek, Phind, ...?
Next stop: senior
This
GPT-4o almost always. We also implement it on some of our front ends (all Wappler created) to assist customers with creating readable text instead of their usual ridiculous ramblings.
Copilot on Windows is particularly handy too, as it's right there on the desktop.
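For anyone curious, the server-side call is roughly along these lines. Just a sketch with the OpenAI Python SDK; the system prompt and function name are made up for illustration, and you'd trigger it from your Wappler API action however you normally do:

```python
# Rough sketch only: OpenAI Python SDK; model and prompt are assumptions.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def tidy_customer_text(raw: str) -> str:
    """Ask GPT-4o to rewrite a customer's message as readable text."""
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's message as clear, polite, readable text. "
                        "Keep the original meaning; fix grammar and rambling."},
            {"role": "user", "content": raw},
        ],
    )
    return resp.choices[0].message.content

print(tidy_customer_text("hi so basically the thing dont work when i click the thing???"))
```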
Using their UI? Plus or Teams plan? Do you use custom GPTs? Or do you just use the API?
I'm using Claude 3.5 and just yesterday using their new projects feature I uploaded the whole codebase as a text file and asked for suggestions. And man do I have work in front of me
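If anyone wants to try the same, here's a rough sketch of flattening a src folder into a single text file for a Claude Project upload. The folder name and file extensions are assumptions; adjust them to your project:

```python
# Sketch: concatenate a source tree into one text file to upload to a Claude Project.
from pathlib import Path

SRC = Path("src")                   # assumption: your project's source folder
OUT = Path("codebase.txt")          # the single file you upload
EXTS = {".js", ".hjson", ".html", ".css", ".ejs"}  # adjust to what you actually use

with OUT.open("w", encoding="utf-8") as out:
    for path in sorted(SRC.rglob("*")):
        if path.is_file() and path.suffix.lower() in EXTS:
            out.write(f"\n\n===== {path} =====\n")  # header so the model knows which file is which
            out.write(path.read_text(encoding="utf-8", errors="ignore"))

print(f"Wrote {OUT} ({OUT.stat().st_size // 1024} KB)")
```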
LMAO! Good use of the API indeed!
On Mac unfortunately and it seems we are screwed in Europe for integrated AI. Maybe a third party...
ChatGPT 4, but it loses context after a while, so I don't trust it after a bit of back and forth. I subscribed to Cody AI two days ago; it lets me talk to ChatGPT and Claude inside VSCode. I've been using mostly Claude, but honestly I can't tell the difference aside from the wording: they made the same faulty interpretation when I showed them a tricky Bash script, which basically means they're juniors indeed.
P.S.: Good to see you're still there!
Edit: Apparently they're good at helping to make Wappler extensions, by looking at existing ones.
I uploaded 100K tokens yesterday of my src folder to Claude 3.5 and man does it 0-shot everything I ask with perfect retrieval of context. I'm quite impressed with 3.5 coding and retrieval ability.
I was using Claude 3 Opus until recently and I already thought it was ahead of ChatGPT. But 3.5... that's another thing, at least for me. The knowledge cutoff is actually quite important for my project: Claude's is April 2024 while ChatGPT's is still October 2023. I'm using some very new libraries, so having the most up-to-date documentation in its training data saves me some time.
But I must admit I'm looking forward to the new features OpenAI demoed for 4o.
I come by quite often. Lurking mostly.
I tried to 0-shot an extension's hjson some days ago and it was impossible. Then I provided huge context and a very good prompt, to no avail either.
I really don't think it will take long until Apple implements it - it's too much of an advantage to have it in the OS.
I hope so. But I fear we might get a crippled version to comply with GDPR and DMA.
Lol, looking at mine = taught bad coding
Not sure if that is a blessing or a curse.
My .js skills are way below my other languages, but I'm using .js daily now, so they will improve.
Did you manage to get them to get hjson right? Or did you have to make manual edits?
The problem with ChatGPT and Claude is that even if they have broad knowledge of what Wappler, App Connect, and Server Connect are, they have not been specifically trained on them. So, more likely than not, they hallucinate big time. It's very, very difficult to 0-shot solutions for Wappler.
I had to fix the actionName property, but aside from that it worked well. For context I supplied the JS and HJSON of two Wappler extensions. I don't remember the prompt, but it wasn't more than 2 or 3 lines.
I used Claude 3.5 Sonnet on Cody (VSCode). I decided to try after this Wappler did:
I made it do an Assert step, so very similar to the throw error
Personally I'm loving Supermaven at the moment, it's crazy good for inline suggestions, and I recently started using the new Supermaven chat, where I pick GPT-4o most of the time. But I try out others just for the hell of it.
Thus I've dropped GitHub Copilot.
I also run some open source models locally, and Stable Diffusion, but that's more just me tinkering around.
I don't understand what they are doing to be honest. They are falling behind.
I'll take it for a ride. Thanks for the suggestion.
Yeah, that's what I mean with 0-shot. 0-shot prompting is asking an LLM to answer something correctly without providing context or examples. I find that when I need to provide examples and/or context it starts losing value for me for coding stuff, as it delays me too much trying to get it right, and I also have to double-check the output.
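To make that concrete, here's a rough sketch with the Anthropic Python SDK; the prompts are invented just to show the difference between asking cold and pasting an existing extension as context:

```python
# Sketch: the same question asked 0-shot vs. with example context.
import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

# 0-shot: just the ask, no examples, no context.
zero_shot = [{
    "role": "user",
    "content": "Write the hjson file for a custom Wappler Server Connect 'Assert' step.",
}]

# With context: the same ask, preceded by the files of an existing extension.
with_context = [{
    "role": "user",
    "content": (
        "Here are the JS and hjson files of an existing Wappler extension:\n"
        "<paste the example files here>\n\n"
        "Now write the hjson file for a custom 'Assert' step in the same style."
    ),
}]

resp = client.messages.create(
    model="claude-3-5-sonnet-20240620",
    max_tokens=2048,
    messages=zero_shot,   # swap in with_context to compare the two
)
print(resp.content[0].text)
```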
By the way @JonL, do you know of any tool that can use Claude and run Python code? Kind of like how ChatGPT 4 does
The tool ChatGPT uses for data analysis? I'm afraid not. It's something that is lacking.
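The closest thing I can think of is wiring Claude's tool-use API to a local Python runner yourself. A very rough, untested sketch (and exec()-ing model-generated code on your own machine is obviously at your own risk):

```python
# Sketch: let Claude request Python execution via a tool, run it locally, feed back the output.
import contextlib
import io

import anthropic

client = anthropic.Anthropic()  # expects ANTHROPIC_API_KEY in the environment

run_python_tool = {
    "name": "run_python",
    "description": "Execute Python code and return whatever it prints to stdout.",
    "input_schema": {
        "type": "object",
        "properties": {"code": {"type": "string"}},
        "required": ["code"],
    },
}

def run_locally(code: str) -> str:
    """Run the code and capture stdout. No sandboxing here: use at your own risk."""
    buf = io.StringIO()
    try:
        with contextlib.redirect_stdout(buf):
            exec(code, {})
    except Exception as e:
        return f"Error: {e!r}"
    return buf.getvalue() or "(no output)"

messages = [{"role": "user", "content": "Compute the 20th Fibonacci number with Python and tell me the result."}]

while True:
    resp = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=1024,
        tools=[run_python_tool],
        messages=messages,
    )
    if resp.stop_reason != "tool_use":
        print("".join(b.text for b in resp.content if b.type == "text"))
        break
    # Run each requested tool call and send the results back to the model.
    messages.append({"role": "assistant", "content": resp.content})
    results = [
        {"type": "tool_result", "tool_use_id": b.id, "content": run_locally(b.input["code"])}
        for b in resp.content if b.type == "tool_use"
    ]
    messages.append({"role": "user", "content": results})
```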
Although I could rarely rely on ChatGPT's implementation when it came out. Has it improved?
My experience was asking something and then ChatGPT entering a loop of generating and running code that didn't work: generate code, throw error, sorry that didn't work, let me do it again, throw error, sorry that didn't work, ad infinitum.
I believe so, but only marginally, not to the level you might expect. It made me a script the other day, which was actually impressive, but it couldn't run on their servers; they probably have some RAM limit or something. After a bit of back and forth with the errors it fixed the script (I merely copy-pasted the run-time errors).
But then it loses context after a bit of back and forth, so...
I find Claude 3.5 a bit better on this topic. Competition is always good for consumers.