Agency > Intelligence
Keynote Slides
Copyright © 2026 | humanskills.ai LLC. | All Rights Reserved
As AI makes intelligence abundant and accessible, human agency—the capacity to direct, evaluate, and take responsibility for outcomes—becomes the defining source of value.
The Four Revolutions
Every major economic transformation follows the same pattern: what was once scarce becomes abundant, and a new scarcity emerges as the source of value. Agriculture made food abundant; manufacturing did the same for mechanical power; computing for information. Now AI is making cognitive capability abundant. Developing human agency becomes what matters most.
Expert Predictions: Accelerating Faster Than Expected
McKinsey tracked AI researcher forecasts in 2017, then again in 2023. The pattern is striking: timelines for AI achieving top-quartile human performance collapsed by decades across nearly every cognitive domain. What experts once placed in 2060 now lands before 2040. Timelines continue to compress faster than the forecasts suggested.
Stanford's Fei-Fei Li, one of the architects of modern computer vision and world models, offers a crucial reframe. AI isn't some alien force arriving from outside. It's made by humans, shaped to serve humans, and ultimately answerable to human purposes. The technology is ours; the question is what we choose to do with it.
Situational Awareness: Plan for Tomorrow's AI
The biggest strategic mistake isn't misunderstanding today's AI—it's planning for it to stay this way. This section explores what's coming next, so institutions can build strategies resilient to capabilities that don't exist yet but soon will.
Access Has Expanded ~40x in Two Years
The cost curve is staggering. In 2023, providing PhD-level AI assistance to a 30-student class would have run around $15,000 per semester. By 2025, the same capability costs under $400. This isn't incremental change: it's nearly a 40x expansion in access in just two years. Strategies built for yesterday's constraints are already obsolete.
The Distribution of Abundant Intelligence
Mapped against the human IQ bell curve, today's leading AI models cluster at the far right, performing at levels only 0.1% to 2% of the population ever reach. More striking: in roughly 12 months, frontier models improved by nearly four standard deviations. Intelligence at this level used to be extraordinarily rare. Now it's available to anyone with an internet connection.
What Does It Mean When AI Scores >140 on a Mensa IQ Test?
An IQ of 140 sits around the top 0.4% of humanity on the standard scale (SD 15), well past Mensa's top-2% admission threshold and into genius-level territory. In 2024, leading AI models crossed that threshold. By 2025, they reached ~145. Projections for 2026 point toward ~175, a level virtually no human has ever tested at. The trajectory isn't flattening; it's still climbing off the chart.
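These rarity figures can be sanity-checked against the normal model behind IQ scoring (mean 100, standard deviation 15); a minimal sketch using only the Python standard library:

```python
from statistics import NormalDist

# Standard IQ scale: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

for score in (130, 140, 145):
    tail = 1 - iq.cdf(score)  # fraction of the population scoring above
    print(f"IQ {score}+: about {tail:.2%} of the population")
```

IQ 130 lands near the top 2.3%, 140 near 0.4%, and 145 near 0.13%. A projected 175 would sit five standard deviations above the mean, a level the normal model puts at roughly one person in 3.5 million.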
The Human Intelligence Landscape
Not all human capabilities face the same AI pressure. This framework maps the terrain: embodied intelligence, lived intuition, and ethical responsibility remain distinctly human moats. Creativity, wisdom, and contextual reasoning sit on an erosion watchlist—still human advantages, but narrowing. Learning efficiency and pattern recall? AI has already arrived. Knowing where your work falls in this landscape is the starting point for any credible plan.
Seven Capability Revolutions Reshaping Intelligence
AI isn't one thing—it's a cascade of expanding capabilities, each unlocking new forms of human partnership. The cascade runs from language models (2022) through reasoning, multimodal perception, and agentic execution, to emerging world models, embodied AI, and spatial intelligence. Each layer builds on the last. Understanding this evolution is essential for knowing where human agency fits in.
AIME 2025 Benchmark: Top 8 Models
The American Invitational Mathematics Examination is designed to challenge the top high school math students in the country. As of December 2025, the leading AI model scores 100%—perfect accuracy. Eight different models now exceed 91%. A benchmark designed to challenge elite human talent no longer differentiates between humans and AI.
Software Engineering: SWE Bench Verified
On the industry-standard benchmark for real-world coding tasks, the leading AI model now outperforms the top 2% of human software engineers. Six different models have crossed the 74% threshold, with Claude Opus 4.5 exceeding 80%.
GPQA Diamond Benchmark: PhD-Level Domain Expertise
The Graduate-Level Google-Proof Q&A Diamond benchmark tests questions so specialized that even domain experts with internet access score only 65–74%. Ten AI models now exceed that range, with top performers pushing past 90%. The "Google-proof" knowledge barrier (questions too complex to simply search) no longer stops machines.
Human × AI Innovation Flywheel
This is the virtuous cycle at the heart of the Agency Economy. Human agency directs abundant intelligence toward meaningful problems. AI capability amplifies what humans can achieve. Those augmented capabilities drive discoveries and innovations, which in turn produce more capable AI. The flywheel only spins when humans stay in the driver's seat.
AI-First Pioneers
These organizations are no longer just experimenting with AI. They've restructured around it. From fintech to enterprise software to language learning, companies across sectors are making AI foundational to how they operate, hire, and create value. They're not asking whether to adopt AI; they're asking what becomes possible when AI is the default.
The Algorithm's Shadow
The displacement narrative is real and deserves honest acknowledgment. Projections point to 92 million jobs displaced by 2030. MIT's research shows nearly 12% of the workforce already exposed. Entry-level white-collar roles face the steepest cliff. These aren't scare tactics. They're the stakes that make developing human agency urgent.
Hinton's Warning: The Coyote Over the Cliff
In 2016, Geoffrey Hinton—the "Godfather of AI" and later a Nobel laureate—delivered a stark warning to radiologists: you're already over the edge, you just haven't looked down yet. It became one of the most cited predictions in AI discourse.
Radiology's Last Exam: September 2025
Nine years after Hinton's prediction, here's the actual data. As of September 2025, board-certified radiologists still lead at 83% diagnostic accuracy; the best AI models sit around 30%, and even trainees outperform every machine.
Radiology's Last Exam: November 2025
Two months later, the picture shifts. Gemini 3.0 Pro jumps to 51%—now outperforming radiology trainees. The gap between human experts and AI narrowed meaningfully in just eight weeks. The ground is moving.
Jevons Paradox: Why "Stop Training Radiologists" Was Wrong
Hinton's technical prediction may eventually prove correct—but his policy conclusion missed something fundamental. Jevons Paradox: when technology makes a resource more efficient, demand often increases rather than decreases. AI won't eliminate the need for radiologists; it may multiply it.
Fei-Fei Li on Human-AI Collaboration
Stanford's Fei-Fei Li points to the real opportunity: not humans versus AI, but humans with AI. The most productive path forward isn't replacement—it's collaboration. Augmentation beats automation when humans stay engaged.
In the Intelligence Revolution: Agency > Intelligence
This is the core reframe. When intelligence becomes abundant and cheap, it stops being the bottleneck. What matters is the capacity to direct that intelligence, to know what's worth doing, to evaluate whether it was done well, and to take responsibility for the outcome. Agency becomes the scarce resource.
Karpathy: Agency Is More Powerful and More Scarce
Andrej Karpathy—founding member of OpenAI and former Tesla AI director—admits he had it backward for decades. Our culture venerates intelligence, obsesses over IQ. But agency, he now recognizes, is significantly more powerful and significantly more scarce. The insight comes from someone who helped build the intelligence.
The Quantified Skill Delta: AI Collaboration as a Distinct Capability
New research from Northeastern and Harvard confirms what practitioners suspected: working effectively with AI requires fundamentally different skills than working alone. The gap between solo performance and AI-collaborative performance can now be measured—and it varies dramatically across individuals. That makes AI collaboration a distinct capability, one worth measuring and developing in its own right.
The Human Agency Stack: A Framework for Flourishing
Agency isn't one skill—it's a system. This framework maps the full stack: from intentionality (choosing direction) through self-determination, human judgment, and collective coordination, to human-AI amplification and responsible governance. Each layer has its own constructs, skills, and mindsets. Together, they form the architecture for human flourishing in the age of abundant intelligence.
The Power of Inquiry to Spark Breakthrough Ideas
When answers become abundant, the question becomes the competitive advantage. Warren Berger's insight from A More Beautiful Question takes on new urgency: the capacity to ask the right question, to frame the problem worth solving, is now the scarce skill.
The Intelligence-Value Opportunity Matrix: Where Are the Ideas?
Most AI ideas cluster in the lower-left quadrant—core use cases that current models can already handle. But as model intelligence climbs, a vast opportunity zone opens up: high-value applications that require capabilities just now becoming available. The bottleneck isn't intelligence anymore. It's imagination.
Filling the Frontier: Ideas Follow Intelligence
As the intelligence curve advances, ideas follow. The biggest area of opportunity—high value, high intelligence requirements—starts to fill with possibilities that weren't conceivable before. The question shifts from "can AI do this?" to "what should we ask it to do?"
Intelligence Frontier: The Questions That Matter Now
Four questions to escape the displacement-and-efficiency trap: What's already working that we'd 10X with abundant intelligence? What problems feel impossible only because we can't hire enough talent? What new work becomes affordable? And crucially—what can only humans do, and how do we use AI to multiply it?
Generative AI Value-Creation Pyramid
A framework for building AI capabilities systematically. Start with individual improvements that build confidence. Scale to collective intelligence across teams. Transform core processes for human-AI collaboration. At the peak: visionary innovation that creates entirely new possibilities. Each level builds on the one below—skip steps at your peril.
Andrew Ng: Better Workflows Beat Bigger Models
Google Brain co-founder Andrew Ng cuts through the hype: the real breakthroughs won't come from bigger models alone. They'll come from better workflows. How humans and AI work together is where the leverage is. The competitive advantage isn't access to intelligence; it's the design of how that intelligence gets deployed.
Agency Economy OS: The Operating System
Three elements, multiplied together: Human Agency directs the work. Agentic Workflows structure how it gets done. Agentic AI provides the capability. Remove any one and the system stalls. This is the operating system for thriving in the Intelligence Revolution.
The Agency Economy Flywheel
Here's how sustained advantage compounds. Human agency directs intelligence and specifies workflows. Agentic workflows enable scale and expand capacity. Agentic AI frees capacity and increases leverage. That freed capacity grows human agency, which directs more intelligence. The cycle accelerates. At the center: human potential, systematically amplified. Remove human agency from the hub and the flywheel stops.
Human Agency Scale: H1 to H5
Not every task needs the same level of human involvement. This scale maps the spectrum: from H1 (AI operates autonomously, humans intervene only on alerts) through H3 (true partnership with iterative collaboration) to H5 (complete human control, AI passively available). The right level depends on the task. Choosing deliberately is the skill.
A Framework for Choosing Where and How to Use Generative AI
Two dimensions determine the right human-AI configuration: the type of knowledge required (tacit vs. explicit) and the cost of errors (low vs. high). Explicit data with low error costs? Let AI handle it autonomously. Tacit knowledge with high stakes? Humans lead. The matrix makes the choice systematic, not intuitive.
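As a rough illustration (not from the source), the two-by-two can be written as a lookup that returns a Human Agency Scale level. The two corners named above follow the framework; the middle cells and the function name are placeholder assumptions:

```python
def recommended_h_level(knowledge: str, error_cost: str) -> str:
    """Sketch of the knowledge-type x error-cost matrix.

    knowledge: "explicit" or "tacit"; error_cost: "low" or "high".
    The H1 and H5 corners follow the framework; the H3/H4 cells
    are illustrative assumptions, not from the source.
    """
    matrix = {
        ("explicit", "low"):  "H1",  # AI handles it autonomously
        ("explicit", "high"): "H3",  # assumed: shared work, human review
        ("tacit",    "low"):  "H4",  # assumed: human leads, AI assists
        ("tacit",    "high"): "H5",  # humans lead; AI passively available
    }
    return matrix[(knowledge, error_cost)]

print(recommended_h_level("explicit", "low"))   # H1
print(recommended_h_level("tacit", "high"))     # H5
```

The point of encoding it this way is the one the slide makes: the configuration choice becomes systematic and inspectable rather than ad hoc.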
David Solomon: The Last 5% Now Matters
Goldman Sachs CEO David Solomon crystallizes the shift: when AI commoditizes 95% of the work, the final 5% (the judgment, the finishing, the decisions that require accountability) becomes where value concentrates. The question isn't whether you can do the work; it's whether you can do what the work requires at the end.
10 Stages to the Goldman Sachs S-1 Workflow
Here's what agentic workflow design looks like in practice. A complex IPO process mapped to the Human Agency Scale: mandate framing and post-mortems require full human agency (H5). Data assembly and compliance checks run autonomously (H1). Everything in between calibrated to the appropriate level. This is how elite organizations are redesigning work.
Goldman Sachs S-1 Workflow: Human Agency Scale Visualized
The same workflow as a visual journey. Watch how human agency rises and falls across the ten stages—peaking at the bookends (framing and learning) and dipping to autonomous AI for data assembly and compliance. The pattern reveals where human judgment creates value and where AI handles the systematic work.
Goldman Sachs S-1 Workflow: Task Time Distribution
The punchline: 60% of task time runs at H1 (autonomous AI). Another 15% at H2 (delegated), with the remaining roughly 10% in shared collaboration (H3). Only 15% requires augmented (H4) or full human agency (H5). Human attention concentrates on the moments that require judgment.
Scaffolding AI into Learning
The transition to AI-integrated education isn't about throwing technology at students—it's about building the support structures that help them climb. Like scaffolding on a construction site, AI can provide temporary support that enables learners to reach heights they couldn't achieve alone, and is gradually removed as capability develops.
Agentic Learning Design
A new discipline for a new era. Agentic learning design asks: how do we structure educational experiences so students develop agency, not just consume AI outputs? It's the intentional architecture of human-AI collaboration in service of human growth.
Instructor Control of AI in Assessment
Faculty authority, codified. This framework gives instructors four clear modes: AI-Required (students must use AI and demonstrate judgment), AI-Encouraged (AI supports but students own the work), AI-Optional (student's discretion), and Purely Human (no AI permitted). The choice is pedagogical, not technological. It belongs to faculty.
Developmental Progression: The Agency Curve
Students should move through different agency levels during an assignment—not stay fixed at one. H5 (full human agency) for authentic voice and values. H4 (augmented) for AI coaching while students create. H3 (shared) for collaborative problem-solving. H2 and H1 should generally be avoided in pedagogical contexts. The goal is developing judgment, not outsourcing it.
Human Agency Scale: Detailed Reference
The complete framework for human-AI collaboration. Each level defined by team dynamics, human role, AI role, when it makes sense, and minimum guardrails required. From H1's autonomous AI (humans monitor dashboards) to H5's full human agency (AI silent or passive). Use this as a decision tool for designing any workflow.
AI Roles Mapped to Human Agency
For educators: three practical zones between the extremes. AI-Led (H2): AI does most work, humans supervise and constrain. AI-Augmented (H3-H4): shared creation with frequent handoffs, student leads key decisions. AI-Assisted (H4): human-owned work, AI provides feedback and micro-tasks. Faculty determine the role; the H-level sets the boundary.
10 Steps to the Funny Research Project
Agentic learning design in action. A student research assignment mapped to the Human Agency Scale: topic choice and thesis require full human agency (H5). Evidence planning and outlining operate in shared agency (H3). Draft coaching and revision use augmented human agency (H4). The workflow builds judgment through structured practice.
Funny Research Project: Agentic Learning Workflow Visualized
The same assignment as a visual journey. Watch how human agency rises and falls across ten steps—peaking at topic choice, thesis, and the final "Dear Reader" letter, dipping for evidence planning and outline structure. Students never drop below H3. The pattern reflects an intentional design choice to protect moments that build human capability.
Vinod Khosla: The Constraint That Shaped Everything
Sun Microsystems co-founder Vinod Khosla names the hidden assumption behind modern work design: expert time has always been scarce, so we built every system to ration it. Now that constraint is dissolving. The real question is what becomes possible when expertise is no longer the bottleneck?
London Business School's Andrew J. Scott captures the essence of our entire journey in one profound insight: the better AI becomes at mechanical tasks, the more valuable our uniquely human qualities become—creativity, empathy, judgment, purpose, and agency. This isn't a competition between humans and machines; it's an evolution where artificial intelligence pushes us to develop our most human capacities more intentionally than ever before. The winners in the age of Abundant Intelligence won't be those who try to out-compute the machines, but those who become more human, more creative, more empathetic, and more purposeful in response to living alongside artificial intelligence.