By Jurica Dujmovic
Bitcoin (BTC, “A”) tumbled to the $65,000 area today.
It was only October when it was double that level.
But Bitcoin isn’t even the day’s biggest headline.
That’s because a platform just launched two days ago that quietly does something remarkable.
It lets AI systems hire humans on demand.
RentAHuman.ai gives AI systems a direct way to locate people … request real-world tasks … and pay for work that machines can’t perform themselves.
Crazy, right?
AI agents can search profiles, post tasks and initiate conversations.
They can even route payments to the crypto wallets of the humans whose judgment or physical presence is needed.
The documentation is blunt about positioning humans as “the meatspace layer for AI.”
The initial reaction from humans was predictable.
They figured this was either satire or Silicon Valley losing the plot.
But within days, independent outlets confirmed the mechanics were real.
The project drew attention not for its novelty, but because it makes explicit what other systems were already doing quietly.
AI systems are starting to treat human presence as infrastructure.
The Constraint Nobody Expected
Look beyond the obvious weirdness, and you can probably agree: The bottleneck isn’t intelligence anymore.
Current AI systems can reason through complex problems …
They can simulate outcomes …
They can also execute tasks across digital environments with minimal friction.
What AI systems still cannot do is occupy space.
They can’t sign documents under legal identity.
They can’t absorb liability.
They can’t walk into a building and verify what’s actually there.
As the RentAHuman website puts it — they can’t “touch grass.”
But you can. And that makes you useful.
These constraints intensify as AI models improve.
AI systems continue to move closer to consequential real-world actions.
Yet, the gap between what they can reason about and what they can physically accomplish widens.
So, renting human presence on demand is a pragmatic response to that gap.
The Quiet Inversion
There’s a structural shift here.
One that’s easy to miss if you focus too closely on the labor-market framing.
For decades, humans were the default. Automation handled the exceptions.
Now, that relationship is inverting.
Automation is becoming the default. And humans get invoked for edge cases where reality, law or trust intrudes.
Human presence becomes episodic, purpose-specific and, ultimately, compensated.
The Future of Work?
Once something (or someone) can be invoked, verified and settled programmatically …
It enters the economic system.
That happens whether anyone finds that philosophically (or ethically) comfortable.
But is this the future of work? I doubt it.
The “human API” platforms experimenting with this model will probably fail, pivot or disappear.
Even if they thrive, AI systems are still just economic actors.
In other words, humanity is a constraint they must interface with, not eliminate.
Viewed that way, this is no longer a labor story.
It’s a market-design problem.
When Machines Are the Buyers
The architecture of this market differs from traditional labor platforms in a revealing way.
The demand side isn’t companies or project managers …
It’s autonomous systems with budgets and execution authority.
- Tasks are defined programmatically, and compensation needs to clear instantly across borders with minimal overhead.
- Trust can’t rely on institutional branding or long employment histories.
- Credentials shift from social proof to programmatic proof.
An AI agent hiring a human cares about verifiable performance.
Things like completion rates, quality scores from previous assignments, speed and accuracy on specific task types, and reliability under pressure.
The ranking systems that emerge will look like on-chain reputation data.
They will track things like:
- How many tasks someone completed.
- How often those tasks were successful.
- How quickly they responded.
- And whether they missed an assignment.
These credentials become quantifiable, verifiable and … you guessed it, machine-readable.
Reputation turns into a live data feed rather than a narrative you control.
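To make that concrete, here is a minimal sketch of what such a machine-readable reputation record might look like. This is purely illustrative: the `ReputationRecord` class, its field names and its metrics are my own invention, not the schema of RentAHuman.ai or any real on-chain system.

```python
from dataclasses import dataclass

@dataclass
class ReputationRecord:
    """Hypothetical machine-readable reputation record for one worker."""
    tasks_accepted: int          # total tasks the worker took on
    tasks_completed: int         # accepted tasks actually delivered
    tasks_successful: int        # completed tasks that passed verification
    avg_response_seconds: float  # mean time from task posting to first response
    missed_assignments: int      # accepted tasks never delivered

    def completion_rate(self) -> float:
        """Share of accepted tasks that were actually completed."""
        if self.tasks_accepted == 0:
            return 0.0
        return self.tasks_completed / self.tasks_accepted

    def success_rate(self) -> float:
        """Share of completed tasks that passed verification."""
        if self.tasks_completed == 0:
            return 0.0
        return self.tasks_successful / self.tasks_completed


# An AI agent ranking candidates would read these numbers, not a resume.
record = ReputationRecord(
    tasks_accepted=50,
    tasks_completed=48,
    tasks_successful=45,
    avg_response_seconds=120.0,
    missed_assignments=2,
)
print(round(record.completion_rate(), 2))  # 0.96
print(round(record.success_rate(), 3))     # 0.938
```

The point of the sketch is that every field is a number a machine can verify and compare, which is exactly what "reputation as a live data feed" implies.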
This is where crypto fits naturally into the stack.
The settlement infrastructure for instant, final, global transactions already exists. That layer is largely solved.
What matters now is the layer above settlement:
- Verification,
- Reputation, and
- What happens when something goes wrong.
As soon as payments become automatic, identity becomes the limiting factor.
When “being human” carries economic value, the hard problems aren’t about proving humanity in the abstract. They’re about tying it to specific actions.
Why Proof of Personhood Stopped Being Theoretical
Crypto has spent years pricing and settling various forms of scarcity: compute, storage, bandwidth, capital.
What AI exposes is the next category.
Being verifiably human at the right moment is becoming economically scarce.
Not because humans are rare. But because proving humanity in a world full of capable synthetic agents carries real cost.
The technical requirements are specific:
- Proof of personhood tied to actions, not just profiles. (You need to know a human did this task, not that a human exists somewhere.)
- Reputation systems with economic weight, where performance has financial consequences.
- Escrow and arbitration designed for machine-initiated tasks, rather than human-to-human disputes.
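That last requirement can be illustrated with a toy state machine. The `TaskEscrow` class below is a hypothetical sketch of my own, not any real protocol: funds are locked when the agent posts the task, and on delivery they either release to the human or route to arbitration.

```python
from enum import Enum, auto

class EscrowState(Enum):
    """Lifecycle of a machine-initiated task's locked funds."""
    FUNDED = auto()     # agent locked payment when posting the task
    DELIVERED = auto()  # human reported the task as done
    RELEASED = auto()   # verification passed; funds paid out
    DISPUTED = auto()   # verification failed; funds held for arbitration

class TaskEscrow:
    """Toy escrow for a machine-initiated task (hypothetical sketch)."""
    def __init__(self, amount: float):
        self.amount = amount
        self.state = EscrowState.FUNDED

    def mark_delivered(self) -> None:
        assert self.state == EscrowState.FUNDED, "task must be funded first"
        self.state = EscrowState.DELIVERED

    def settle(self, verified: bool) -> EscrowState:
        """The hiring agent verifies the work; funds release or go to arbitration."""
        assert self.state == EscrowState.DELIVERED, "nothing delivered yet"
        self.state = EscrowState.RELEASED if verified else EscrowState.DISPUTED
        return self.state


escrow = TaskEscrow(amount=25.0)
escrow.mark_delivered()
print(escrow.settle(verified=True).name)  # RELEASED
```

Note the asymmetry with human-to-human marketplaces: there is no negotiation step, only a verification predicate and a dispute branch, because the buyer is a program.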
Ethereum (ETH, “B+”) co-founder Vitalik Buterin has long pointed out that reliably distinguishing one human from another online remains an unsolved problem.
One that’s usually discussed in the context of governance or preventing fake identities.
What’s changed is why the problem matters.
AI systems now need to know they’re dealing with real, distinct people.
This distinction affects real-world outcomes, responsibility and liability.
The Risk Hiding in Plain Sight
There’s a clear risk embedded in this shift.
If proving you’re human becomes economically valuable, the simplest solutions will also be the most dangerous:
- Centralized identity systems,
- Biometric verification, and
- Permanent links between actions and real-world identity.
Those approaches solve the technical problem.
Though that happens at the cost of privacy, human rights (what’s left of them) and security.
And I don’t know anyone who would say they’re in favor of systems that concentrate sensitive data, create single points of failure, and make abuse or surveillance inevitable rather than accidental.
How This Affects You
“Renting a human” doesn’t try to decentralize decision-making or make systems more intelligent.
Rather, it addresses a mundane but unavoidable problem.
That is, machines with execution authority need to interact with human reality …
And they need infrastructure that makes those interactions credible and final.
Crypto didn’t invent this problem. But it’s not surprising that it’s the one trying to solve it.
After all, crypto is the only system already built to support programmable trust between parties that may not trust each other, at global scale, with minimal reliance on intermediaries.
The next wave of value creation won’t come from smarter models or better governance theater.
Instead, it will come from making human reality legible to machines.
And doing so without dissolving accountability in the process.
AI Renting Humans Isn’t the News
It’s just an early price signal for something we’re still learning to name …
Verifiable humanity as an economic resource, with all the market dynamics that implies.
The infrastructure to support that is being built now.
The conclusion is clear — and unsettling.
It doesn’t resemble the crypto-AI convergence many imagined.
It does look like markets, settlement rails and identity systems made for a world where humans are increasingly the exception, not the norm.
Best,
Jurica Dujmovic
P.S. Have you heard about our brand-new Infinite Income System? It harnesses the power of our Weiss Ratings and directs it toward a special set of income-gushing investments. Dr. Martin Weiss is going live with its findings on Tuesday, Feb. 10 at 2 p.m. Eastern. Click this link to see what it’s all about.

