Hi HN,
Over the past few months I’ve been using AI heavily for business and technical decisions.
One thing kept bothering me: when the question actually mattered, the answers felt vague.
Not because the model was bad, but because my prompts were unstructured.
Most AI prompts are missing things like:
- Clear intent
- Explicit constraints
- Tradeoff mapping
- Required assumptions
- Defined output structure
- Confidence framing
So the model gives information, but not decision clarity.
I built a small desktop tool called Franklin Prompt Studio to enforce structured reasoning in prompts.
Instead of typing a loose question, you construct a prompt that forces the model to:
- Map constraints and competing goals
- Compare tradeoffs (cost vs time vs risk vs quality)
- List assumptions explicitly
- Identify unknowns
- Provide 2–4 options
- Make a primary recommendation and explain why
- State a confidence estimate
It’s not a new model. It’s not a wrapper. It just generates structured prompts that work with any AI.
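For anyone curious what "structured" means concretely, here's a minimal sketch in Python of the kind of prompt skeleton I mean. This is illustrative, not the tool's actual template; the function name, section names, and parameters are made up for the example:

    # Minimal sketch of a structured decision prompt.
    # Not Franklin Prompt Studio's real template; sections are illustrative.
    def build_decision_prompt(question, constraints, goals):
        sections = [
            f"Decision question: {question}",
            "Constraints: " + "; ".join(constraints),
            "Competing goals: " + "; ".join(goals),
            "Respond with exactly these sections:",
            "1. Assumptions you are making (list them explicitly)",
            "2. Unknowns that materially affect the answer",
            "3. Tradeoff analysis (cost vs time vs risk vs quality)",
            "4. 2-4 concrete options",
            "5. Primary recommendation and why",
            "6. Confidence estimate (low/medium/high) with reasoning",
        ]
        return "\n".join(sections)

    prompt = build_decision_prompt(
        "Should I invest $50k into X?",
        constraints=["capital is not easily replaceable", "12-month horizon"],
        goals=["growth", "downside protection"],
    )

The point is that the output contract is pinned down before the model sees the question, so it can't fall back to generic pros-and-cons.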
Example:
Normal prompt: “Should I invest $50k into X?”
Typical output: Pros, cons, generic disclaimers.
Structured prompt output:
- Constraints
- Risk exposure breakdown
- Tradeoff analysis
- Explicit assumptions
- Clear recommendation
- Confidence level
The difference feels significant when the decision has real stakes.
I’m curious:
- Do others here feel that AI answers degrade when the question isn't tightly structured?
- Are people solving this differently?
- Is structured prompting overkill outside of technical users?
I'd genuinely appreciate critique.
Link: https://dfrankstudioz.gumroad.com/l/franklin-prompt-studio
Thanks.