Full Stack Developer & AI Engineer with 15+ YoE. Laravel/Vue.js specialist who also builds with React, Next.js, Python (Django/FastAPI), and Node.js.
I develop SaaS applications, integrate AI (OpenAI/Anthropic/Google APIs, agent orchestration), and rescue struggling projects. Background as a founder (SaaS, ecommerce, high-traffic blogs) means I understand business, not just code.
Available for long-term contracts (20+ hours/week), project-based work, fractional CTO roles, and technical consulting.
15+ years building web applications. Former founder of SaaS startup, ecommerce marketplace, and high-traffic blog network. I ship working software and understand what makes a product viable.
I think this might work for smaller codebases, but the main point of my article isn't really about vibe coding. Vibe coded apps are typically smaller anyway, so refactoring isn't that big of an issue there.
When we're talking about actual software that has been around for a while and has accumulated serious tech debt, it's not so easy. I've definitely worked on apps where the approach you describe doesn't lead to anything viable. It's just too much for an AI to grasp when you have years of accumulated complexity, dependencies, and business logic spread across a large codebase.
Regarding vibe coders specifically: I think people who can't code themselves often don't really know what "cleaner design" or "more reuse" actually means in practice. They can certainly learn, but once they do, they're probably not vibe coders anymore.
Technologies: Laravel, Vue.js, PHP, Statamic, AI integration, database optimization (200M+ records), SaaS architecture, SEO expertise. Also handle WordPress, general web development, and business strategy.
Full Stack Laravel & Vue.js Engineer with 10+ YoE. I build SaaS applications, AI-powered tools, and integrate OpenAI/Anthropic APIs into business applications.
But I really liked the few responses it gave me: highly technical language. Not the flowery stuff you find in ChatGPT or Gemini, but much more verbose and thorough than Claude.
You’re spending $1500 in additional costs? How?!!? I can’t even conceive of how I would spend that much with cursor. What am I missing? Are you ultra productive or just inefficient with tokens?
Being inefficient with tokens actually makes you super productive. It's too expensive in the long run though.
The last few weeks have been quite frustrating with Cursor. I dove deep into the issue and figured out that the most annoying problem - the one that leads to all those frustratingly poor replies from the LLM - is how Cursor cuts down the context. You can test this yourself: just add a long file to the chat and ask if it can see the file.
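If you want to run that test yourself, a quick way is to generate a probe file where every line carries its own number, then ask the model for the last line it can see. This is just a sketch for the experiment; the file name and line count are arbitrary choices.

```python
# Generate a long file with self-numbering lines. After adding it to the
# chat, ask the model: "What is the highest marker line you can see?"
# The answer reveals where the context was truncated.

def write_probe_file(path: str, n_lines: int = 2000) -> None:
    """Write n_lines lines, each stating its own position in the file."""
    with open(path, "w") as f:
        for i in range(1, n_lines + 1):
            f.write(f"# marker line {i} of {n_lines}\n")

write_probe_file("context_probe.py", 2000)
```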
Recently I discovered that all these problems disappear with the "max" models. This is exactly what I wanted. The price of 5¢ per request is manageable; the real issue is the cost of tool use in agent mode (see my other comment).
No write-up yet - Cursor iterates so fast that any guide would be outdated in a few weeks.
My tips:
- Check out the Cursor docs. They're concise - read through them to understand the features and model/context behavior
- It's basically all chat now. Chat has manual mode (previously edit mode), ask mode, and agent mode
- For one-off file changes, use manual mode. Just tell it what to do, and it shows changes as diffs you can accept/reject
- Agent mode is similar, but the model can use tools to read files not in context (plus some other stuff like running commands and searching through files). It works in a loop until the task is complete - reading files, editing them, fixing lint errors, reading more files, etc.
- For agent mode, Claude Sonnet works best. Other models sometimes fail to use tools correctly or just stop mid-task
- Context is critical. Works best when you provide all necessary context, or at least file names/directory trees in agent mode
- Biggest issue is context cutting. Cursor truncates files and doesn't give the LLM all the code you think it does. Even in max mode, the read-file tool only ingests up to 750 lines per file (though I think actively adding files to context lets it read more in max mode). Sometimes copying & pasting the file contents into the chat prevents the truncation.
- This is why I use max mode for almost anything beyond simple small file edits
I'm available for long-term contracts (20 hours/week, can increase) and also take on strategic technical consulting.
Skills: Laravel, Vue.js, PHP, AI integration, database optimization (200M+ records), SaaS architecture, SEO expertise. Also handle WordPress, general web development, and business strategy.
We have, and it works great! We currently do this in production, though we use it to help us optimize for consistency between task executions (vs the linked post, which is about improving the capabilities of a model).
Phrased differently: when a task has many valid and correct conclusions, this technique lets the LLM see "How did I do similar tasks before?", so it tends to solve new tasks the same way it solved similar ones in the past.
Two things to note:
- You'll typically still want to have some small epsilon where you choose to run the task without few-shots. This will help prevent mistakes from propagating forward indefinitely.
- You can have humans correct historical examples, and use their feedback to improve the large model dynamically in real-time. This is basically FSKD where the human is the "large model" and the large foundation model is the "small model".
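A minimal sketch of the idea: retrieve the most similar historical executions as few-shot examples, with a small epsilon chance of running zero-shot so early mistakes don't get locked in forever. The similarity function, storage format, and epsilon value here are all illustrative assumptions, not any specific library's API - in practice you'd use embeddings rather than word overlap.

```python
import random

EPSILON = 0.05  # small chance to skip few-shots, so mistakes in the
                # history don't propagate forward indefinitely

def similarity(a: str, b: str) -> float:
    """Toy lexical similarity (Jaccard over words); a real system
    would use embedding distance instead."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

def build_prompt(task: str, history: list[dict], k: int = 3) -> str:
    """Prepend the k most similar past (task, answer) pairs so the
    model tends to repeat decisions it made for similar tasks."""
    if random.random() < EPSILON or not history:
        return f"Task: {task}\nAnswer:"
    shots = sorted(history, key=lambda h: similarity(task, h["task"]),
                   reverse=True)[:k]
    parts = [f"Task: {h['task']}\nAnswer: {h['answer']}" for h in shots]
    parts.append(f"Task: {task}\nAnswer:")
    return "\n\n".join(parts)

history = [
    {"task": "classify ticket: refund request", "answer": "billing"},
    {"task": "classify ticket: login broken", "answer": "auth"},
]
print(build_prompt("classify ticket: refund not received", history))
```

Human corrections then slot in naturally: whenever a reviewer fixes an answer, overwrite that entry in `history`, and future retrievals pick up the corrected behavior without any retraining.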
Core Stack: Laravel, Vue.js, PHP, Inertia.js, Tailwind | Also: React, Next.js, Python, Node.js, TypeScript | DevOps
AI: OpenAI, Anthropic, Google AI APIs, agent orchestration, web scraping
Website: https://t1p.de/rqox2
Email: see profile