In this episode, I sit down with Olga Megorskaya, CEO of Toloka, to explore what true human-AI co-agency looks like in practice. We talk about how the role of humans in AI systems has evolved from simple labeling tasks to expert judgment and co-execution with agents – and why this shift changes everything, down to how humans are compensated.
We get into:
Why "humans as callable functions" is the wrong metaphor – and what to use instead
What co-agency really means in practice
Why some data tasks now take days, not seconds – and what that says about modern AI
The biggest bottleneck in human-AI teamwork (and it’s not tech)
The future of benchmarks, the limits of synthetic data, and why it’s important to teach humans to distrust AI
Why AI agents need humans to teach them when not to trust the plan
If you're building agentic systems or care about scalable human-AI workflows, this conversation is packed with hard-won perspective from someone who’s quietly powering some of the most advanced models in production. Olga brings a systems-level view that few others can – and we even nerd out about Foucault’s Pendulum, the power of text, and the underrated role of human judgment in the age of agents.
Olga is a deep thinker, and I truly enjoyed this conversation.
The transcript (edited for clarity, brevity, and sanity – though it’s always better to watch the full video) ⬇️