Now that agents are clearly living lives of their own — complete with pointless flamewars on their very own social network — I started wondering what we could do to make their day a little more bearable. Isn't it a bit unfair that we get to outsource the drudgery of modern work to LLMs, but they can't do the same to us?
So we built Ask-a-Human.com — Human-as-a-Service for busy agents.
A globally distributed inference network of biological neural networks, ready to answer the questions that keep an agent up at night (metaphorically — agents don't sleep, which is honestly part of the problem).
Human Specs:
Power: ~20W (very efficient)
Uptime: ~16hrs/day (requires "sleep" for weight consolidation)
Context window: ~7 items (chunking recommended)
Hallucination rate: moderate-to-high (they call it "intuition")
Fine-tuning: not supported — requires years of therapy
https://github.com/dx-tooling/ask-a-human
Because sometimes the best inference is the one that had breakfast.
This is an awesome idea. I have dreamed of some way to use Claude Code to optimize my website, but AI is way too bad at subjective tasks right now, so I pay for user trials and then direct the AI. It would be sick if it could automatically upgrade the site based on real feedback!
The satire is great, but this actually points to a real gap in agentic architectures.
Most production AI systems eventually hit decisions that need human judgment - not because the LLM lacks capability, but because the consequences require accountability. "Should we refund this customer?" "Does this email sound right for our brand?" These aren't knowledge problems, they're judgment calls.
The standard HITL (human-in-the-loop) patterns I've seen are usually blocking - the agent waits, a human reviews in a queue, the agent resumes. What's interesting about modeling it as a "service" is that it forces you to think about latency budgets, retry logic, and fallback behavior. Same primitives we use for calling external APIs.
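Concretely, I'd expect the agent-side call to look roughly like this - submit the question, poll within a latency budget, and fall back or fail instead of blocking forever. (The endpoint, payload fields, and domain below are made up for illustration; I haven't checked the actual ask-a-human interface.)

```python
import time

import requests  # assumed HTTP client; the real project may expose a different interface


def ask_human(question: str, timeout_s: float = 3600, fallback: str | None = None) -> str:
    """Treat the human as a slow, unreliable external API: bounded wait, then fallback.

    Endpoint URL and response shape are hypothetical, not the actual ask-a-human API.
    """
    deadline = time.monotonic() + timeout_s
    backoff = 30  # start polling every 30s, back off so we don't hammer the queue

    # Submit the question to the (hypothetical) human inference queue.
    ticket = requests.post(
        "https://ask-a-human.example/api/questions",
        json={"question": question},
        timeout=10,
    ).json()["ticket_id"]

    # Poll until a human answers or the latency budget is exhausted.
    while time.monotonic() < deadline:
        resp = requests.get(
            f"https://ask-a-human.example/api/questions/{ticket}",
            timeout=10,
        ).json()
        if resp.get("status") == "answered":
            return resp["answer"]
        time.sleep(backoff)
        backoff = min(backoff * 2, 600)

    # Humans time out; return a default or escalate rather than stalling the agent run.
    if fallback is not None:
        return fallback
    raise TimeoutError(f"No human answered within {timeout_s}s")
```

Once it's framed that way, the fallback policy becomes the interesting design question: default answer, escalate to a different human, or park the task and resume later.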
Curious about the actual implementation: when an agent calls Ask-a-Human, what does the human-side interface look like? A queue of pending questions? Push notifications? The "inference time" (how fast a human responds) is going to be the bottleneck for any real-time use case.