vincent_s's comments

SEEKING WORK | Munich, Germany | REMOTE

Full Stack Developer & AI Engineer with 15+ YoE. Laravel/Vue.js specialist who also builds with React, Next.js, Python (Django/FastAPI), and Node.js.

I develop SaaS applications, integrate AI (OpenAI/Anthropic/Google APIs, agent orchestration), and rescue struggling projects. Background as a founder (SaaS, ecommerce, high-traffic blogs) means I understand business, not just code.

Available for long-term contracts (20+ hours/week), project-based work, fractional CTO roles, and technical consulting.

Core Stack: Laravel, Vue.js, PHP, Inertia.js, Tailwind | Also: React, Next.js, Python, Node.js, TypeScript | DevOps

AI: OpenAI, Anthropic, Google AI APIs, agent orchestration, web scraping

Website: https://t1p.de/rqox2

Email: see profile


Location: Munich, Germany

Remote: Yes

Willing to relocate: No

Technologies: Laravel, Vue.js, PHP, Inertia.js, Livewire, Alpine.js | React, Next.js, Python (Django, FastAPI), Node.js (Express, NestJS) | MySQL, PostgreSQL, Redis | TypeScript, Tailwind CSS | DevOps

Specialty: Laravel & Vue.js full-stack development. This is my bread and butter, but I take on projects across the modern web stack.

AI Expertise: OpenAI, Anthropic, Google AI API integration, agent orchestration, AI-powered tools, web scraping & anti-bot techniques

Services: Custom development, SaaS architecture, performance optimization, API development, legacy app modernization, project rescue, fractional CTO, technical team leadership

Résumé/CV: https://t1p.de/aofx0

Email: see profile

15+ years building web applications. Former founder of SaaS startup, ecommerce marketplace, and high-traffic blog network. I ship working software and understand what makes a product viable.


I think this might work for smaller codebases, but the main point of my article isn't really about vibe coding. Vibe-coded apps are typically smaller anyway, so refactoring isn't that big of an issue there.

When we're talking about actual software that has been around for a while and has accumulated serious tech debt, it's not so easy. I've definitely worked on apps where the approach you describe doesn't lead to anything viable. It's just too much for an AI to grasp when you have years of accumulated complexity, dependencies, and business logic spread across a large codebase.

Regarding vibe coders specifically: I think people who can't code themselves often don't really know what "cleaner design" or "more reuse" actually means in practice. They can certainly learn, but once they do, they're probably not vibe coders anymore.


Location: Munich, Germany

Remote: Yes

Technologies: Laravel, Vue.js, PHP, Statamic, AI integration, database optimization (200M+ records), SaaS architecture, SEO expertise. Also handle WordPress, general web development, and business strategy.

Résumé/CV: https://t1p.de/99jv2

Email: see profile

Full Stack Laravel & Vue.js Engineer with 10+ YoE. I build SaaS applications, AI-powered tools, and integrate OpenAI/Anthropic APIs into business applications.


Grok 4 is now available in Cursor.


I just tried it; it was very slow, like Gemini.

But I really liked the few responses it gave me: highly technical language. Not the flowery stuff you find in ChatGPT or Gemini, yet much more verbose and thorough than Claude.


I like that Grok doesn't kiss my ass like Gemini and ChatGPT keep doing with their "excellent idea!" crap.


Interesting, I have the latest update and I don't see it in the models list.


I had to go to "add more models", and then it was available. So far, it is able to do some things that other models were not previously able to do.


You have to go to the settings, view more models, and select it from the drop-down list.


Location: Munich, Germany

Remote: Yes

Technologies: Laravel, Vue.js, PHP, AI integration, database optimization (200M+ records), SaaS architecture, SEO expertise. Also handle WordPress, general web development, and business strategy.

Résumé/CV: https://t1p.de/99jv2

Email: see profile

Full Stack Laravel & Vue.js Engineer with 10+ YoE. I build SaaS applications, AI-powered tools, and integrate OpenAI/Anthropic APIs into business applications.



Absolutely. I'm currently spending about $50 per workday in additional costs. But it's so much better: an entirely different experience.


Max requests are only an additional 5 cents each. The real cost is in tool calls, which also cost 5 cents each and add up fast in agent mode.

From one day of coding with MAX models:

174 gemini-2.5-pro-exp-max requests × 5¢ = $8.70

1269 premium tool calls × 5¢ = $63.45

143 claude-3.7-sonnet-thinking-max requests × 5¢ = $7.15
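
Tallied up in a quick Python sketch (counts from above; the 5¢ rate is what Cursor charged at the time):

    # Rough daily tally of Cursor MAX costs (counts from above, 5 cents per unit).
    RATE = 0.05

    usage = {
        "gemini-2.5-pro-exp-max requests": 174,
        "premium tool calls": 1269,
        "claude-3.7-sonnet-thinking-max requests": 143,
    }

    for item, count in usage.items():
        print(f"{count:>5} x {item}: ${count * RATE:.2f}")
    print(f"Total: ${sum(usage.values()) * RATE:.2f}")  # $79.30 for the day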


You’re spending $1500 in additional costs? How?!!? I can’t even conceive of how I would spend that much with cursor. What am I missing? Are you ultra productive or just inefficient with tokens?


Being inefficient with tokens actually makes you super productive. It's too expensive in the long run though.

The last few weeks have been quite frustrating with Cursor. I dug deep into the issue and found that the most annoying problem, the one behind all those frustratingly poor replies from the LLM, is how Cursor cuts down the context. You can test this yourself: just add a long file to the chat and ask if it can see the whole file.
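
Here's a quick way to generate such a test file (a Python sketch; any long file with a recognizable marker near the end works just as well):

    # Write a long dummy file with a sentinel at the end. Add it to the Cursor
    # chat and ask the model to quote the sentinel; if it can't, the file was
    # truncated before it reached the LLM.
    with open("context_probe.py", "w") as f:
        for i in range(1, 2001):  # comfortably above typical truncation limits
            f.write(f"x_{i} = {i}  # filler line {i}\n")
        f.write('SENTINEL = "pineapple-7342"  # ask the model to quote this\n')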

Recently I discovered that all these problems disappear with the "max" models. This is exactly what I wanted. The price of 5¢ per request is manageable; the real issue is the cost of tool use in agent mode (see my other comment).


Thanks for the reply. Do you have a write-up on how you use Cursor?


No write-up yet; Cursor iterates so fast that any guide would be outdated in a few weeks.

My tips:

- Check out the Cursor docs. They're concise; read through them to understand the features and model/context behavior

- It's basically all chat now. Chat has manual mode (previously edit mode), ask mode, and agent mode

- For one-off file changes, use manual mode. Just tell it what to do, and it shows changes as diffs you can accept/reject

- Agent mode is similar, but the model can use tools to read files not in context (plus other things, like running commands and searching through files). It works in a loop until the task is complete: reading files, editing them, fixing lint errors, reading more files, etc.

- For agent mode, Claude Sonnet works best. Other models sometimes fail to use tools correctly or just stop mid-task

- Context is critical. It works best when you provide all the necessary context, or at least file names/directory trees in agent mode

- Biggest issue is context cutting. Cursor truncates files and doesn't give the LLM all the code you think it does. Even in max mode, the read file tool only ingests up to 750 lines per file (though I think actively adding files to context lets it read more in max mode). Sometimes copying & pasting the file contents into the chat prevents truncation (see the sketch after this list)

- This is why I use max mode for almost anything beyond simple small file edits
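
Since the read tool caps out around 750 lines, something like this can flag files at risk of truncation (a hypothetical helper script, not a Cursor feature):

    # List source files longer than Cursor's ~750-line read limit, so you know
    # which ones to paste into the chat manually instead of trusting the agent.
    import os

    LIMIT = 750

    for root, _dirs, files in os.walk("."):
        if ".git" in root or "node_modules" in root or "vendor" in root:
            continue  # skip VCS and dependency directories
        for name in files:
            if name.endswith((".php", ".vue", ".js", ".ts", ".py")):
                path = os.path.join(root, name)
                with open(path, errors="ignore") as f:
                    n = sum(1 for _ in f)
                if n > LIMIT:
                    print(f"{n:>6} lines  {path}")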


SEEKING WORK | Munich, Germany | REMOTE

Full Stack Laravel & Vue.js Engineer with 10+ YoE. I build SaaS applications, AI-powered tools, and integrate OpenAI/Anthropic APIs into business applications.

I'm available for long-term contracts (20 hours/week, can increase) and also take on strategic technical consulting.

Skills: Laravel, Vue.js, PHP, AI integration, database optimization (200M+ records), SaaS architecture, SEO expertise. Also handle WordPress, general web development, and business strategy.

Website: https://t1p.de/61k3e

Email: see profile


Have you also tried using the large model as the FSKD model?


We have, and it works great! We currently do this in production, though we use it to help us optimize for consistency between task executions (vs the linked post, which is about improving the capabilities of a model).

Phrased differently: when a task has many valid and correct conclusions, this technique lets the LLM ask "How did I do similar tasks before?" and it will tend to solve new tasks by making decisions consistent with the ones it made for previous similar tasks.

Two things to note:

- You'll typically still want some small epsilon where you choose to run the task without few-shots. This helps prevent mistakes from propagating forward indefinitely (sketched below).

- You can have humans correct historical examples, and use their feedback to improve the large model dynamically in real time. This is basically FSKD where the human is the "large model" and the large foundation model is the "small model".
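
A minimal sketch of that loop (Python; retrieve_similar, run_llm, and history are hypothetical stand-ins for your retrieval, inference, and storage layers):

    import random

    EPSILON = 0.1  # fraction of tasks run without few-shots

    def solve_task(task, history):
        # With probability epsilon, skip few-shots entirely so mistakes in the
        # history don't propagate forward indefinitely.
        if random.random() < EPSILON:
            examples = []
        else:
            # Pull the most similar previously solved tasks and use them as
            # few-shot examples, nudging the model toward consistent decisions.
            examples = retrieve_similar(history, task, k=3)

        prompt = "\n\n".join(
            f"Task: {ex.task}\nSolution: {ex.solution}" for ex in examples
        )
        prompt += f"\n\nTask: {task}\nSolution:"

        solution = run_llm(prompt)
        # Store the result; human-corrected entries here act as the "large model".
        history.append(task, solution)
        return solution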

