Alignment Report — March 2026
How the AI Agility Challenge meets, exceeds, and completes the U.S. Department of Labor's AI Literacy Framework. A report for higher education and workforce development leaders.
The Core Distinction
Educators have seen this pattern before. Digital literacy. Information literacy. Data literacy. Each time, institutions taught people to operate the tools, declared them literate, and watched the competency decay. Because operating the thing was never the hard part. AI literacy is about to repeat this pattern at scale. Unless we go further.
AI Literacy: students can operate AI tools competently. That is the DOL standard. It is necessary, and every institution should meet it.
AI Agility: everything in AI Literacy, plus the human capabilities that make it productive, sustainable, and career-defining.
AI Literacy produces graduates who can use the tools.
AI Agility produces graduates who get more capable over time, not less.
One checks a box. The other changes a trajectory.
Part 1
On February 13, 2026, the U.S. Department of Labor published TEN 07-25, establishing the first federal AI literacy framework for the American workforce. It defines five foundational content areas and seven delivery principles. Below: how the AI Agility Challenge maps to each one.
Self-assessment by humanskills.ai. Full mapping documentation available on request.
| DOL Sub-Area | AI Agility Challenge Element |
|---|---|
| Pattern recognition & probabilistic outputs | ✓ Module 1.3: Learners test the same prompt across multiple tools, observing different outputs firsthand. |
| Capabilities and modalities | ✓ Module 1.3: Tool recommendations matched to tasks. Writing, research, code, creative, and data tools compared. |
| Training and inference | ✓ Module 2.1: Distinguishing inference (fast, reactive) from reasoning (deliberate, step-by-step). |
| Hallucinations and accuracy limits | ✓ Module 2.2: Evaluating, verifying, refining outputs. Module 2.4: Auditing trust and verification habits. |
| Human design and oversight | ✓ Module 1.4: AI-First vs. Human-First. Learners decide when to lead with speed vs. judgment. |
| DOL Sub-Area | AI Agility Challenge Element |
|---|---|
| Range of AI applications | ✓ Modules 1.1-4.5: Writing, analysis, creative, advisory, data, workflow design across 20 modules. |
| Industry-specific uses | ✓ Virgil personalizes every exercise to the learner's industry, role, and context. |
| Emerging applications | ✓ Module 4.3: Generative AI. Module 4.4: Capability building. Module 3.5: Data analytics. |
| DOL Sub-Area | AI Agility Challenge Element |
|---|---|
| Prompt design | ✓ Module 1.1: ROAC. Module 2.1: CROW. Module 4.1: Structured blocks with delimiters and guardrails. |
| Context and constraints | ✓ Module 2.3: Chunking, Progressive Refinement, Curating, Knowledge Injection. |
| Iterative refinement | ✓ Module 2.2: Four-stage elevation. Module 3.1: Meta-prompting with self-critique loops. |
| Collaborative interaction | ✓ Module 1.2: Command vs. collaborative prompting. Build both, run both, compare. |
| DOL Sub-Area | AI Agility Challenge Element |
|---|---|
| Accuracy assessment | ✓ Module 2.2: Verify stage. Identify claims that sound confident but might be wrong. |
| Bias detection | ✓ Module 3.4: Bias-checking habits as part of a responsible AI checklist. |
| Professional standards | ✓ Module 2.2: Personalize stage. What would the learner add that AI doesn't know? |
| DOL Sub-Area | AI Agility Challenge Element |
|---|---|
| Privacy practices | ✓ Module 3.4: Privacy rules. Module 2.4: Audit of actual data-sharing behavior. |
| Transparency | ✓ Module 3.4: Documentation standards for AI involvement. |
| Ethical awareness | ✓ Module 2.4: Trust challenges. Module 2.5: AI's impact on wellbeing. |
Every exercise uses the learner's real work. Virgil personalizes to industry, role, and capability maturity. Learners run prompts in their own AI tools, compare outputs, iterate. Not simulation. Practice on real work with real tools.
A healthcare administrator, a retail manager, and a nonprofit director in the same cohort receive entirely different exercises generated from the same behavioral specification. Context is the architecture, not an add-on.
The DOL names five human skills in one page. The HumanAI Taxonomy decomposes those five into 8 domains, 64 skills, 512 micro-skills, and 5,120 research-grounded building blocks citing 765 researchers. Every module maps to specific micro-skills. Virgil watches for building blocks at specific steps.
No technical prerequisites. Virgil reads capability and adjusts scaffolding. Native language support. Any device with a browser.
Four courses over 18-24 months. One-year learning community for all participants. Each course builds on the previous.
Trains the people who will train others. Leaders, staff, and managers develop AI fluency alongside their teams.
Version 14 of the curriculum in 16 months. Tool-agnostic by design. Durable human skills layer. New research patched in weeks, not semesters.
Part 2
"The DOL framework tells you what workers should be able to do. It doesn't explain how they develop that capability."
The DOL framework operates as a top-down model: defining competencies from policy goals downward. Valuable for setting standards. Silent on how learners actually build these skills across different contexts, expertise levels, and starting points.
Working effectively with AI requires goal articulation, audience awareness, critical evaluation, knowing when to trust and when to override. These are the same capabilities that make human collaboration work.
Users who plan, monitor, and evaluate their thinking while using AI produce better outcomes than those who simply know how AI works. Knowledge matters less than cognitive regulation.
In a field experiment with 776 professionals at P&G, individuals with AI matched the performance of human teams without AI. But only when users brought collaborative skills to the interaction.
Students report over-relying on AI without thinking critically about approach and choices. The risk is not that people can't use AI. It's that they stop thinking while using it.
The implication: learning to use AI productively is not a technical training problem. It is a human development problem. The learning design must be socio-cognitive too.
The AI Agility Challenge was built on this premise before the research confirmed it.
Part 3
Delivery Principle 3 tells institutions to build complementary human skills. It names five. One page. That same list has appeared in every strategic plan for the past decade. Nothing has changed because these terms are too broad to act on.
"Critical thinking" is a suitcase term. It sounds specific, but it contains multitudes. Until unpacked into teachable, assessable components, telling an institution to "build critical thinking" is no more actionable than telling a patient to "be healthier."
9 years of development · 9 major versions · 100+ collaborators · 100% research grounding
- Intentionality
- Self-Determination
- Human Judgment
- Situational Awareness
- Human Agency
- Navigating Uncertainty
- Collective Agency
- Human × AI
- Responsible AI
"Build critical thinking"
Critical Thinking → 8 micro-skills → 64 building blocks. Each citing named researchers. Each mapped to exercise steps.
When a learner demonstrates checking evidence quality at Step 4 of Module 2.2, the system recognizes it. When they skip verification, it prompts reflection.
Part 4
The research is clear: collaborating with AI is socio-cognitive, not technical. The AI Agility Challenge is the most evolved digital learning experience available for developing these capabilities. Built around an Agentic Learning Guide with no precedent in workforce development.
Adaptive learning platforms adjust content difficulty and respond to right or wrong answers. Content is fixed; only the path varies.
Tutoring chatbots provide hints when a learner is stuck and follow scripted decision trees. They are limited by their scripts.
An Agentic Learning Guide brings judgment: when to push, scaffold, challenge, or step back. Pedagogical decisions at every step. Builds capability, not dependence.
Virgil is not a chatbot. It is an Agentic Learning Guide governed by a behavioral specification defining character, method, and responsiveness. The specification defines how much support Virgil provides at every step and what kind.
14 versions in 16 months. New research patched in weeks, not semesters. Improves faster than any instructor corps could retrain.
Same modules, different exercises. Personalized to role, industry, context.
Exercises in the learner's preferred language. Bilingual communities need no translation.
Never used AI? More scaffolding. Experienced user? Pushed harder. No configuration.
Structured data on emerging capabilities. Which teams advance, where support is needed.
Virgil handles practice. Humans handle community and edge cases. 75% completion where 5-15% is typical.
Part 5
Twenty modules over approximately 90 days (average: 5.5 weeks). Each about twenty minutes. Short video then applied practice with Virgil using real work. Tool-agnostic. Privacy by architecture.
- Human × AI Collaboration: 0-6 months
- Process Integration: 0-6 months
- Redesigning Work: 6-12 months
- Systems & Scale: 12-24 months
Part 6
"The AI Agility Challenge gave us a practical way to build confidence, capability, and momentum across our entire community."
Randy VanWagoner, President, Mohawk Valley CC"The most valuable aspect of using AI is having a partner working alongside me."
Kim Whiteside, Workforce Instructional Designer, Metropolitan CC"I watched people who were hesitant transform into confident leaders who now use AI every day."
Karen Korotzer, CEO, The Arc Oneida-Lewis Chapter"The content is very aligned with our values, which put humans at the center."
Marie Holive, CEO, Proteus InternationalTypical deployment: first conversation to active cohort in 90 days