After watching Cole’s video “Claude MCP has Changed AI Forever” I was struck by his choice of Claude Desktop (I assumed this was simply because Anthropic defined the MCP standard, so Claude Desktop is the natural choice). Most devs seem to be using Cursor. What is everyone’s favorite coding tool and why? There are so many options.
I don’t actually use Claude Desktop to code! I just use it when I want to bounce general ideas off of an LLM or plan a project. But for the actual coding I use Windsurf. Cursor is great as well
@ColeMedin I am curious to know your exact development workflow with a tool like windsurf.
When I am developing, I tend to first have a high-level discussion with a thinking model (o1, Sonnet 3.7, etc.). We agree on the requirements of the application, then the high-level architecture, followed by the folder structure for the project.
Then I work file by file: I get the LLM to write the unit tests for each file and pass them before moving on to the next one. This method has been working for a while. Sometimes I don't even write a single line of code. I have built trading bots, Telegram bots, and so on.
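As a toy example of one iteration of that loop (the file and function names here are made up, just to illustrate the pattern):

```python
# utils.py -- the file the LLM just wrote (hypothetical example)
def parse_price(text: str) -> float:
    """Turn a price string like '$1,234.56' into a float."""
    return float(text.replace("$", "").replace(",", ""))


# test_utils.py -- the unit test it writes next; only once this
# passes do we move on to the next file
def test_parse_price():
    assert parse_price("$1,234.56") == 1234.56
    assert parse_price("99") == 99.0


test_parse_price()
```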
Still curious to know what your workflow looks like.
I have been using DeepSeek R1, R1 Distill Qwen 32B, QwQ 32B, and Sonnet 3.x. Cline is probably my favorite right now; it's more polished than Roo Code, and its MCP marketplace is the best implementation yet IMO. I've also been trying it with Ollama/vLLM locally with decent potential, but it's early days for "useful local AI".
And I’ve used Aider, Back4App, Bolt.new, Bolt.diy, Cline, Copilot, Roo Code, Trae, v0, Windsurf (aka. Codeium), and Zed (most recent, promising). Haven’t tried TabbyML or Cursor yet. Also, I regularly jump between AI Chats.
It wasn’t asked, but I’m really liking Grok 3 for research, Qwen Chat as a nice fallback, and DeepSeek Chat (good but has service issues); OpenAI not used much anymore tbh. Notable mention is Mercury Coder, which is great at one-shotting a code prompt but not at refining it.
@diehardofdeath honestly my workflow is pretty similar! Though sometimes I’ll have it write some of the code before creating the README for documentation, just so it can speak to the folder structure of the project as well. But yeah, I always have it reference the README to understand the project when I start a new conversation, which I do pretty often because long conversations make it hallucinate more.
Yeah, local LLMs are a decent bit away from being useful. I have tried asking Llama 3.3 7B to respond to me in JSON for accessing tools I have written. It sometimes doesn't produce valid JSON, so I need validation on the LLM's output that triggers the same command again until I get a valid response. In my experience its responses are wrong less than ~10% of the time. But hey, it's running locally, so it doesn't cost $$$ to keep re-trying.
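A minimal version of that validate-and-retry loop, with the actual model call left as a parameter so any local backend (Ollama, vLLM, whatever) can plug in:

```python
import json


def get_valid_json(call_llm, prompt, max_retries=5):
    """Re-run the model until its reply parses as JSON (up to max_retries).

    `call_llm(prompt) -> str` is whatever function hits your local model --
    left as a parameter here since the backend doesn't matter.
    """
    for attempt in range(max_retries):
        raw = call_llm(prompt)
        try:
            return json.loads(raw)
        except json.JSONDecodeError:
            continue  # invalid JSON: retrying is free when running locally
    raise ValueError(f"no valid JSON after {max_retries} attempts")
```

In practice you'd also want to tighten the prompt ("respond with JSON only, no prose") before leaning on retries, but the retry loop catches the stragglers.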
Sure, I would agree somewhat, but I’d argue a lot has changed since the release of DeepSeek. Before then I would have agreed that running models locally was pointless.
For one, DeepSeek R1 Distill Qwen is way better than Llama at the same size: the 32B outperforms the Llama 70B model, and both the 7B and 14B variants are very good. So is Mistral Small 3.1 24B, and of course Qwen QwQ 32B.
Not to mention the API for any of those is very cheap (Groq and OpenRouter are great options). Running locally only really becomes a consideration if you, say, run them 24/7 with millions of tokens. That’s where it makes sense to me.
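Rough back-of-the-envelope with made-up numbers (check your provider's actual rates, these are just to show the break-even logic):

```python
# Hypothetical figures, NOT real pricing -- illustrative only.
price_per_m_tokens = 0.50   # USD per million tokens via an API, assumed
tokens_per_day = 5_000_000  # heavy "24/7" usage, assumed

monthly_api_cost = tokens_per_day / 1_000_000 * price_per_m_tokens * 30
print(f"${monthly_api_cost:.2f}/month via API")  # $75.00/month at these assumptions
```

Compare that against the hardware and electricity cost of a GPU box that can run a 32B model, and local only starts winning at sustained high volume.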
Just my own two cents, but I go a little further myself. I personally think you should use a git repo to keep track of changes (first thing), follow the SDLC (Software Development Life Cycle)/DevOps pipeline, and maintain a README.md, PLANNING.md, TASKS.md, and CHANGELOG.md for EVERY project. Maybe just make your own starting template…
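That starting template could be as small as a script that drops the doc set into a new repo (the file contents here are just placeholder stubs; fill in your own):

```python
from pathlib import Path

# Placeholder stubs -- replace with your own template text.
TEMPLATE_DOCS = {
    "README.md": "# Project\n\nOutline of the project, with links to the other docs.\n",
    "PLANNING.md": "# Planning\n\n## Architecture\n\n## Milestones\n",
    "TASKS.md": "# Tasks\n\n- [ ] First task\n",
    "CHANGELOG.md": "# Changelog\n",
}


def scaffold(root):
    """Create the standard doc set in a (possibly new) project folder."""
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    for name, body in TEMPLATE_DOCS.items():
        path = base / name
        if not path.exists():  # never clobber docs that already exist
            path.write_text(body)
```

Run it once per new project, then `git init` and commit the skeleton as your first restore point.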
A README to me is just an outline of the project, with navigation to the other docs. If USAGE, FAQ, etc. are more than a few lines, they should be their own docs. And to each their own, but I like my “docs” to be in their own folder, with the only ones in the root being README.md and LICENSE (only because GitHub expects it here, with no extension). Also many systems like Azure DevOps, etc. support linking markdown files directly to a project Wiki (nice navigation frontend for the user). Don’t remember if GitHub does this.
I personally plan out everything first and break things down into tasks, steps (sub-tasks), and milestones (groups of tasks) before writing a line of code. I usually have an “Improvements” section as well, where I can jot down ideas as I have them for future scope.
Once a task is completed, it is moved to CHANGELOG.md, which keeps track of completed tasks with versioning and a summary (and cleans them out of TASKS.md)… stashing, committing, and/or pushing successful changes along the way.
Now if the AI screws things up, you have a restore point and history, and you can even automate versioning, etc.
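That TASKS.md → CHANGELOG.md rotation is easy to script, too. A rough sketch (pure string handling; you supply the version number, and committing is left to you):

```python
def rotate_completed(tasks_text, version):
    """Split completed '- [x]' lines out of TASKS.md content.

    Returns (remaining_tasks, changelog_entry): write the first back to
    TASKS.md, prepend the second to CHANGELOG.md, then commit.
    """
    done, remaining = [], []
    for line in tasks_text.splitlines():
        (done if line.lstrip().startswith("- [x]") else remaining).append(line)
    entry = f"## [{version}]\n" + "\n".join(done) + "\n"
    return "\n".join(remaining), entry
```

Wire that into a pre-commit hook or a small CLI and the AI never has to touch the changelog itself.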
I am relatively new to this AI IDE, but I looked over quite a bit of documentation before I began and set quite a few things up first. I really delved into the rules part after reading a blurb I can no longer find, but it was super helpful. I spent the first 5 or 6 hours just refining Cursor before I proceeded.

Regarding the comment above about JSON formatting: I have a formatting document that outlines all the proper conventions, as well as a CODE_STANDARDS.md with extensive but clear and concise examples.

I have played around with several locally hosted models, though for full disclosure I have 96GB of VRAM, so it’s not a standard setup and I can run some pretty hefty models. I have to say it all depends on your task. For crawling I have found llama2 to be smoking fast, as simple as it is. I parallel deep-crawl, chunk, review, and extract with it, then query with DeepSeek R1 70B, and it’s pretty amazing. I was having issues with Supabase, so I crawled all 2,700 or so pages of their site and fixed my issues in a flash.
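Stripped of my backend specifics, the parallel crawl-and-chunk step is roughly this (`fetch` stands in for whatever crawler/HTTP client you use, and the chunk sizes are arbitrary here, not my tuned values):

```python
from concurrent.futures import ThreadPoolExecutor


def chunk(text, size=1000, overlap=100):
    """Split one page into overlapping chunks for later review/extraction.

    size/overlap are arbitrary illustrative values, not tuned numbers.
    """
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]


def crawl_and_chunk(fetch, urls, workers=8):
    """Fetch pages in parallel, then chunk each one.

    `fetch(url) -> str` is whatever crawler/HTTP client you actually use.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        pages = list(pool.map(fetch, urls))
    return {url: chunk(page) for url, page in zip(urls, pages)}
```

The chunks then go to the small fast model for review/extraction, and only the distilled results get queried against the big model.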
My crawler has a lot of backend stuff, so my GitHub repo for it might not be helpful (I just use it as backup), but my docs folder might be helpful to some. I reference them as first-step reads in my prompt every time I reset the chat, which is as often as possible. I call it Deepcrawl and the link is to the repo.