Great question, @ravroid. Custom GPTs are fantastic for general workflows, but IdeaForge is built for a more focused 'one-shot' success rate. Here’s why I think it’s worth a try over a generic prompt/GPT:
Guided Extraction vs. Open Chat: Custom GPTs can sometimes drift or get 'lazy.' IdeaForge uses a specific Socratic interview logic designed to pull out the 'unknown unknowns' (like edge cases and data relationships) that users often forget to prompt.
Optimized for Codex Context: The output isn't just a summary; it’s a structured specification specifically formatted to minimize hallucinations when pasted into Cursor or Codex.
Zero Context Switching: Instead of jumping between an 'interview GPT' and a 'dev plan GPT,' IdeaForge handles the entire pipeline in one specialized UI, ensuring the logic remains consistent from idea to spec.
Lower Barrier: No ChatGPT Plus subscription required for your end-users to get high-quality technical specs.
I'd love for you to run one of your existing ideas through IdeaForge and let me know if the resulting MD is more 'executable' than your current workflow.
I was getting tired of summarizing long articles & threads on HN/Reddit with ChatGPT so I made a simple little Chrome/Firefox extension to do it for me:
One strategy I've been experimenting with is maintaining a 'spec' document, outlining all features and relevant technical notes about a project. I include the spec with all relevant source files in my prompt before asking the LLM to implement a new change or feature. This way it doesn't have to do as much guessing as to what my code is doing, and I can avoid relying on long-running conversations to maintain context. Instead, for each big change I include an up-to-date spec and all of the relevant source files. I update the spec to reflect the current state of the project as changes are made to give the LLM context about my project (this also doubles as documentation).
I use an NPM script to automate concatenating the spec + source files + prompt, which I then copy/paste to o1. So far this has been working somewhat reliably for the early stages of a project but has diminishing returns.
You're describing functionality that's built into Aider. You might want to try it out.
Aider also has a copy/paste mode that lets you use web UI interfaces/subscriptions instead of APIs.
I definitely use and update my CONVENTIONS.md files and started adding a second specification file for new projects. This + architect + "can your suggestion be improved, or is there a better way?" has gotten me pretty far.
It’s all that matters in the presidential race. But barely over half of the population wanted him as president. The other side doesn’t disappear just because they lost an election.
In my experience so far, GPT-4o seems to sit somewhere between the capability of GPT-3.5 and GPT-4.
I'm working on an app that relies more on GPT-4's reasoning abilities than on inference speed. For my use case, GPT-4o seems to do worse than GPT-4 Turbo on reasoning tasks: a step up from GPT-3.5, but not from GPT-4 Turbo.
At half the cost and significantly faster inference speed, I'm sure this is a good tradeoff for other use cases though.
You're equating this to censorship, but I think it's more like Google adding security measures so you can't break their search engine than like removing unfavorable results.
Cool concept. Not relevant to me in its current state since it's limited to skin care products, but I'd love to use something like this for supplements or other products where I otherwise have to sift through Amazon reviews & reddit threads.
Thanks, that makes sense. Supplements plus general health and beauty will probably be among the first categories added outside of skincare. Would be interested in seeing the reviews for those as well, considering how supplements are sold and regulated.
I too have been noticing a deterioration in recent months. Things that generally worked smoothly before have random bugs now. For example, turning off shuffle and resuming a song will sometimes just stop playing Liked Songs after the song ends, shuffle will turn off/on seemingly at random, etc. Also there seem to be odd inconsistencies in the UX between mobile and desktop such as liking songs and seeing Smart Shuffle songs in playlists.
I've been generally very happy with the Spotify app for years and it's disappointing to see the quality slipping.
I use another GPT to turn that spec into a development plan for Codex that I include in AGENTS.md. (https://chatgpt.com/g/g-698a6ee58aec8191ba1e3b520b13b5e7-dev...)
I'm curious what advantages this product offers vs. using a prompt?