FOD#140: Something Medium is Happening?
and what you need to feel comfortable with AI
This Week in Turing Post:
Wednesday / AI 101 series: OpenClaw explained: The Boom and the Architecture of Local AI Agents
Friday / Open Source AI series
I want to start by thanking Matt Shumer for his “Something Big is Happening.” It ricocheted through the part of Twitter I follow until it felt like it had swallowed the whole platform. People were quoting it, reacting to it, forwarding it to their friends with the digital equivalent of grabbing someone by the shoulders.
It has 83 million views. Clearly, it hit a nerve.
I also want to argue with it, even if that puts me on the unpopular side of the timeline. Because his piece gave me real anxiety, the unproductive kind.
Let’s start with what I agree with.
Matt is right that the pace feels different now. For people who actually use frontier models daily, “AI as a helpful tool” has been sliding toward “AI as an independent worker” in a way that is hard to explain to someone who only played with a free-tier chatbot a year ago. He’s also right about the perception gap: public understanding lags behind capability, and the lag creates bad decisions. The most wrong thing you can do today is to dismiss AI.
He’s also right about the labor market direction. If your work happens on a screen and your core output is text, analysis, code, structured documents, and decisions expressed through a keyboard, you are exposed. The question is not whether AI touches your job. It already does. The question is how quickly tasks get unbundled, automated, and re-priced inside your role. Unbundled by you, ideally, though Matt doesn’t say that explicitly.
Now what I disagree with.
First, I reject the emotional framing. Comparing this moment to February 2020 is effective storytelling, but it also turns “learning how to work with a new general-purpose tool” into an emergency broadcast. That framing produces a very specific kind of reader: anxious, compulsively online, and primed to interpret every model release as a life-or-death update. If you already spend time in the Silicon Valley bubble, this is gasoline on the fire. If not, you will simply be left with this sticky anxiety.
That anxiety is not “AI will take my job tomorrow.” It’s “the discourse is training us to live in permanent cognitive overdrive.” That is simply inhuman. Twitter’s intensity can make you feel behind even when you are actively shipping work with these systems. There is always another tool, another meetup, another startup demo, another “you’re late” thread. That is a very effective recipe for burnout.
Second, I don’t buy the implied uniformity of impact. Capability is one curve. Adoption is another. Incentives, regulation, liability, procurement, internal politics, and institutional inertia are their own curves, and they do not politely synchronize. Some roles will compress rapidly. Others will change slowly, then suddenly. Matt’s directional forecast can be right while the timeline distribution across industries is far messier than “one to five years” suggests. It’s big, but it’s also medium: most of the action happens in the messy middle.
So where does it bring us?
The third thing I disagree with: how to learn to work with AI.
Instead of emotions, we should think about goal-setting. Taste. Knowing what matters. Stitching context into a decision that has consequences. Being accountable. And about the boring parts that turn capability into reality: integration, evaluation, reliability, compliance, human trust, organizational adoption, and all the messy edges where the real world refuses to behave like a clean benchmark. Again, it’s that medium part that matters, not the grandeur of a model or a tool.
Matt gives a piece of advice: “Spend one hour a day experimenting with AI.” And I just disagree with that so much.
It teaches a completely wrong muscle. Time is not the unit of learning. Feedback is.
Kids don’t learn by allocating 60 minutes to “walking practice.” They learn because they want something: open the jar, reach the table, climb the stairs, get the parent’s attention. Goal first. Attempts. Feedback. Repeat until the world changes.
So instead of “playing” with AI, choose a goal and achieve one real outcome per week, meaningfully better with AI than without it.
That forces a goal. And a goal forces evaluation. And it actually makes you feel better because you start achieving things.
There’s also a quieter (literal) point that gets missed in the alarm: if you’re reading this, you’re already inside the tiny internet class that can spend hours discussing AI on the internet. That’s not “everyone.” That’s a self-selected group with a particular set of incentives, and sometimes a suspicious amount of time. Maybe that’s what we need AI for – to let us spend more time on social networks… Anyway, 83 million views is very big. But not as big as the 8 billion people on the planet.
What I would like to leave you with: treat AI like a power tool with a marketing department. Respect the capability. Ignore the adrenaline. Pick a goal you genuinely care about, then use the tool to move faster toward it. Your intelligence now lies in steering AI toward the right outcome for you.
Happy building.
Follow us on 🎥 YouTube Twitter Hugging Face 🤗
We are watching/reading:
The tension and friction of AI in the real world →watch here
Twitter Library
10 Must-read books and surveys about AI and Machine Learning
News from the usual suspects
Everyone is still absolutely blown away by OpenClaw. Kimi introduced Kimi Claw with 5000 skills (read their guide here) and a few more examples we are collecting here →
The news digest is a bit shorter today due to Presidents’ Day, a holiday in the US.
🔦 Paper and Achievement Highlight
This week marked a shift from “LLMs solving puzzles” to “LLMs doing research chores.” DeepMind’s Aletheia (→read their amazing paper here) couples a strong reasoner with a generator–verifier–reviser loop plus heavy tool use to navigate literature, producing results from Olympiad proofs to PhD exercises and even fully AI-generated or co-authored math papers, alongside a proposed taxonomy for autonomy and novelty.
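The generator–verifier–reviser loop described above can be sketched in a few lines. This is a minimal, hypothetical illustration of the pattern, not DeepMind’s actual system: the `generate`, `verify`, and `revise` functions are stubs standing in for model and tool calls.

```python
# Minimal sketch of a generator-verifier-reviser loop.
# All functions here are illustrative stubs, not a real research API.

def generate(task: str) -> str:
    """Produce a candidate solution (in a real system, a model call)."""
    return f"candidate solution for: {task}"

def verify(candidate: str) -> tuple[bool, str]:
    """Check the candidate; return (ok, feedback). Stubbed check."""
    ok = "solution" in candidate
    return ok, "" if ok else "candidate does not contain a solution"

def revise(candidate: str, feedback: str) -> str:
    """Patch the candidate using the verifier's feedback."""
    return candidate + f" [revised after feedback: {feedback}]"

def solve(task: str, max_rounds: int = 3) -> str:
    """Iterate generate -> verify -> revise until the check passes."""
    candidate = generate(task)
    for _ in range(max_rounds):
        ok, feedback = verify(candidate)
        if ok:
            return candidate
        candidate = revise(candidate, feedback)
    return candidate  # best effort after max_rounds
```

The interesting design choice in such systems is that the verifier, not the generator, decides when the loop stops, which is what lets the output be checked rather than merely plausible.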
In parallel, OpenAI reports GPT-5.2 spotting a closed-form pattern for a “single-minus” gluon amplitude in a half-collinear regime after humans computed small-n cases (→read their blog here); an internal scaffolded system then proved and checked the formula against standard recursions and constraints. The trend is research-grade AI as a workflow: propose, simplify, verify, and document contributions like a responsible coauthor, not a flashy calculator.