All of the recent LLM advancements have essentially been training the model to self-talk, forcing it to walk through the problem until it sees it clearly.
I hate to be a “proompter,” but I used this prompt and got the right answer without thinking (a sketch of wiring it up as a system prompt follows the list):
Before answering, do the following:
- Clearly restate the user’s actual objective.
- Identify what must physically or logically change for the objective to be achieved.
- Check for hidden assumptions or trick framing.
- Ask: “Does my answer actually accomplish the stated goal?”
- If multiple interpretations exist, briefly list them and choose the most logically consistent one.
- Do not optimize for surface efficiency if it conflicts with the core objective.
- Use strict common sense before answering.
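For anyone who wants to reuse it, here's a minimal sketch that sends the checklist as a system message. It assumes the OpenAI Python client; the model name and the sample question are placeholders, not anything from the original post:

    # Minimal sketch: ride the checklist along as a system prompt.
    # Assumes the OpenAI Python client (pip install openai) with an
    # OPENAI_API_KEY in the environment; "gpt-4o" is a placeholder model.
    from openai import OpenAI

    CHECKLIST = "Before answering, do the following:\n" + "\n".join([
        "- Clearly restate the user's actual objective.",
        "- Identify what must physically or logically change for the objective to be achieved.",
        "- Check for hidden assumptions or trick framing.",
        '- Ask: "Does my answer actually accomplish the stated goal?"',
        "- If multiple interpretations exist, briefly list them and choose the most logically consistent one.",
        "- Do not optimize for surface efficiency if it conflicts with the core objective.",
        "- Use strict common sense before answering.",
    ])

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(question: str) -> str:
        # The checklist goes in as the system message so it frames every answer.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; substitute whatever model you use
            messages=[
                {"role": "system", "content": CHECKLIST},
                {"role": "user", "content": question},
            ],
        )
        return response.choices[0].message.content

    print(ask("A bat and a ball cost $1.10 total, and the bat costs "
              "$1.00 more than the ball. How much does the ball cost?"))

The bat-and-ball question is exactly the kind of trick framing the checklist targets: the reflexive answer is $0.10, but the correct one is $0.05.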
As a junior, I feel most complexity in software is manufactured. LLMs simplify that mess for me, making it easier to get things done. But I’m constantly hit with imposter syndrome, like I’m less skilled because I rely on AI to handle the tricky stuff. And Gemini is better than me!