Alignment Report — March 2026

Beyond AI Literacy

How the AI Agility Challenge meets, exceeds, and completes the U.S. Department of Labor's AI Literacy Framework. A report for higher education and workforce development leaders.

Prepared by: humanskills.ai
Framework: TEN 07-25, February 13, 2026
Program: AI Agility Challenge, Version 5

The Core Distinction

AI Literacy Is the Starting Line

Educators have seen this pattern before. Digital literacy. Information literacy. Data literacy. Each time, institutions taught people to operate the tools, declared them literate, and watched the competency decay, because operating the tools was never the hard part. AI literacy is about to repeat this pattern at scale unless we go further.

AI Literacy

Students can operate AI tools competently. The DOL standard. Necessary. Every institution should meet it.

  • Understand AI Principles
  • Explore AI Uses
  • Direct AI Effectively
  • Evaluate AI Outputs
  • Use AI Responsibly
Building AI Agility

AI Agility includes everything in AI Literacy, plus the human capabilities that make it productive, sustainable, and career-defining.

Human Agency
Decide when and whether to use AI. Judgment, not dependency.
Human Skills
20 skills employers hire for: critical thinking, collaboration, discernment, systems thinking.
Healthy Habits
Sustainable practices that protect focus, relationships, and meaningful work.
Value Creation
From completing tasks to creating outcomes. The difference between faster and better.
Workflow Design
Redesigning how work gets done. Promotable, not just productive.
Adaptive Capability
Building new skills continuously. When tools change, these students adapt.
Collaborative Intelligence
AI collaboration is socio-cognitive. The same skills that make a good teammate.
Purpose
Connecting AI use to what matters. Tools without direction produce activity without progress.

AI Literacy produces graduates who can use the tools.
AI Agility produces graduates who get more capable over time, not less.

One checks a box. The other changes a trajectory.

Part 1

The Federal Standard

On February 13, 2026, the U.S. Department of Labor published TEN 07-25, establishing the first federal AI literacy framework for the American workforce. It defines five foundational content areas and seven delivery principles. Below: how the AI Agility Challenge maps to each one.

5/5 Foundational Content Areas Addressed
7/7 Delivery Principles Addressed
6 Areas Extending Beyond DOL Scope

Self-assessment by humanskills.ai. Full mapping documentation available on request.

Foundational Content Areas

01 · Understand AI Principles · Complete
"A foundational component of AI literacy is developing a clear grasp of what artificial intelligence is and how it works... the vocabulary and mental models needed to understand how today's AI tools operate." (DOL TEN 07-25, Content Area 1)
DOL Sub-Area | AI Agility Challenge Element
Pattern recognition & probabilistic outputs | Module 1.3: Learners test the same prompt across multiple tools, observing different outputs firsthand.
Capabilities and modalities | Module 1.3: Tool recommendations matched to tasks. Writing, research, code, creative, and data tools compared.
Training and inference | Module 2.1: Distinguishing inference (fast, reactive) from reasoning (deliberate, step-by-step).
Hallucinations and accuracy limits | Module 2.2: Evaluating, verifying, refining outputs. Module 2.4: Auditing trust and verification habits.
Human design and oversight | Module 1.4: AI-First vs Human-First. Learners decide when to lead with speed vs. judgment.
Strength: Complete. Understanding built through direct experimentation, not lectures on AI theory.
02 · Explore AI Uses · Complete
"Workers benefit from understanding the range of AI applications across industries and tasks... from content creation to data analysis to decision support." (DOL TEN 07-25, Content Area 2)
DOL Sub-Area | AI Agility Challenge Element
Range of AI applications | Modules 1.1-4.5: Writing, analysis, creative, advisory, data, workflow design across 20 modules.
Industry-specific uses | Virgil personalizes every exercise to the learner's industry, role, and context.
Emerging applications | Module 4.3: Generative AI. Module 4.4: Capability building. Module 3.5: Data analytics.
Strength: Complete. Exploration is applied, not theoretical. Learners use AI for their real work.
03 · Direct AI Effectively · Exceeds
"Effective use of AI requires the ability to communicate clearly with AI systems... including prompt design, providing context and constraints, and iterative refinement." (DOL TEN 07-25, Content Area 3)
DOL Sub-Area | AI Agility Challenge Element
Prompt design | Module 1.1: ROAC. Module 2.1: CROW. Module 4.1: Structured blocks with delimiters and guardrails.
Context and constraints | Module 2.3: Chunking, Progressive Refinement, Curating, Knowledge Injection.
Iterative refinement | Module 2.2: Four-stage elevation. Module 3.1: Meta-prompting with self-critique loops.
Collaborative interaction | Module 1.2: Command vs. collaborative prompting. Build both, run both, compare.
Exceeds. Five full modules on prompt design, from foundational to state-of-the-art.
04 · Evaluate AI Outputs · Complete
"Workers need skills to critically assess AI-generated content for accuracy, bias, and relevance." (DOL TEN 07-25, Content Area 4)
DOL Sub-Area | AI Agility Challenge Element
Accuracy assessment | Module 2.2: Verify stage. Identify claims that sound confident but might be wrong.
Bias detection | Module 3.4: Bias-checking habits as part of the responsible AI checklist.
Professional standards | Module 2.2: Personalize stage. What would the learner add that AI doesn't know?
Strength: Complete. Evaluation is a four-stage discipline, not a concept.
05 · Use AI Responsibly · Exceeds
"Responsible AI use encompasses privacy, transparency, ethical considerations, and awareness of societal impacts." (DOL TEN 07-25, Content Area 5)
DOL Sub-Area | AI Agility Challenge Element
Privacy practices | Module 3.4: Privacy rules. Module 2.4: Audit of actual data-sharing behavior.
Transparency | Module 3.4: Documentation standards for AI involvement.
Ethical awareness | Module 2.4: Trust challenges. Module 2.5: AI's impact on wellbeing.
Exceeds. Three dedicated modules on responsible use, trust, and wellbeing.

Delivery Principles

D1 · Enable Experiential Learning · Exceeds

Every exercise uses the learner's real work. Virgil personalizes to industry, role, and capability maturity. Learners run prompts in their own AI tools, compare outputs, iterate. Not simulation. Practice on real work with real tools.

Exceeds. 20 applied exercises, each using the learner's actual work context.
D2 · Embed Learning in Context · Complete

A healthcare administrator, a retail manager, and a nonprofit director in the same cohort receive entirely different exercises generated from the same behavioral specification. Context is the architecture, not an add-on.

Complete.
D3 · Build Complementary Human Skills · Exceeds
"Critical thinking, creativity, communication, values-based decisions, domain expertise." (DOL TEN 07-25, Delivery Principle 3)

The DOL names five human skills in one page. The HumanAI Taxonomy decomposes those five into 8 domains, 64 skills, 512 micro-skills, and 5,120 research-grounded building blocks citing 765 researchers. Every module maps to specific micro-skills. Virgil watches for building blocks at specific steps.

Exceeds by an order of magnitude. This is the single largest differentiator. See Part 3.
D4 · Address Prerequisites · Complete

No technical prerequisites. Virgil reads capability and adjusts scaffolding. Native language support. Any device with a browser.

Complete. Prerequisites addressed by the system, not the learner.
D5 · Create Pathways for Continued Learning · Complete

Four courses over 18-24 months. One-year learning community for all participants. Each course builds on the previous.

Complete.
D6 · Prepare Enabling Roles · Complete

Trains the people who will train others. Leaders, staff, and managers develop AI fluency alongside their teams.

Complete.
D7 · Design for Agility · Exceeds
"Training must adapt as AI evolves." (DOL TEN 07-25, Delivery Principle 7)

Version 14 of the curriculum in 16 months. Tool-agnostic by design. Durable human skills layer. New research patched in weeks, not semesters.

Exceeds. Agility is structural, not aspirational.

Part 2

Bridging the How Gap

"The DOL framework tells you what workers should be able to do. It doesn't explain how they develop that capability."

The DOL framework operates as a top-down model, defining competencies from policy goals downward. That approach is valuable for setting standards, but it is silent on how learners actually build these skills across different contexts, expertise levels, and starting points.

Research

AI Collaboration Is Socio-Cognitive

Working effectively with AI requires goal articulation, audience awareness, critical evaluation, knowing when to trust and when to override. These are the same capabilities that make human collaboration work.

Sidra & Mason, 2025. Collaborative AI Literacy and Metacognition Scales. International Journal of Human-Computer Interaction.
Research

Metacognition Predicts AI Performance

Users who plan, monitor, and evaluate their thinking while using AI produce better outcomes than those who simply know how AI works. Knowledge matters less than cognitive regulation.

Atchley et al., 2024. Human and AI Collaboration in Higher Education. Cognitive Research: Principles and Implications.
Research

AI Replicates Team Benefits

In a field experiment with 776 professionals at P&G, individuals with AI matched the performance of human teams without AI. But only when users brought collaborative skills to the interaction.

Dell'Acqua, Sadun, Mollick, Lakhani et al., 2025. The Cybernetic Teammate. HBS Working Paper 25-043.
Research

Over-Reliance Erodes Thinking

Students report over-relying on AI without thinking critically about approach and choices. The risk is not that people can't use AI. It's that they stop thinking while using it.

Sandhaus et al., 2024, cited via Dang, 2025. Human-AI Collaborative Learning. British Journal of Educational Technology.

The implication: learning to use AI productively is not a technical training problem. It is a human development problem. The learning design must be socio-cognitive too.

The AI Agility Challenge was built on this premise before the research confirmed it.

Part 3

What the Standard Cannot Provide

Delivery Principle 3 tells institutions to build complementary human skills. It names five skills in a single page. That same list has appeared in every strategic plan for the past decade, and nothing has changed, because these terms are too broad to act on.

"Critical thinking" is a suitcase term. It sounds specific, but it contains multitudes. Until unpacked into teachable, assessable components, telling an institution to "build critical thinking" is no more actionable than telling a patient to "be healthier."

What Actionable Looks Like

  • 8 Domains across 3 tiers
  • 64 Human Skills
  • 512 Micro-Skills
  • 5,120 Building Blocks
  • 765 Named Researchers

9 years of development · 9 major versions · 100+ collaborators · 100% research grounding

Building Agency

Intentionality
Self-Determination
Human Judgment

Operating Conditions

Situational Awareness
Human Agency
Navigating Uncertainty

Deploying Agency

Collective Agency
Human × AI
Responsible AI

From Standard to Practice

DOL Says

"Build critical thinking"

Taxonomy Decomposes

Critical Thinking → 8 micro-skills → 64 building blocks. Each citing named researchers. Each mapped to exercise steps.

Virgil Detects

When a learner demonstrates checking evidence quality at Step 4 of Module 2.2, the system recognizes it. When they skip verification, it prompts reflection.

Part 4

Agentic Learning Design

The research is clear: collaborating with AI is socio-cognitive, not technical. The AI Agility Challenge is the most evolved digital learning experience available for developing these capabilities, built around an Agentic Learning Guide with no precedent in workforce development.

Adaptive Learning

Adjusts content difficulty. Responds to right/wrong answers. Content is fixed; path varies.

Intelligent Tutoring

Provides hints when stuck. Follows scripted decision trees. Limited by its scripts.

Agentic Learning Design

Guide with judgment: when to push, scaffold, challenge, or step back. Pedagogical decisions at every step. Builds capability, not dependence.

Virgil is not a chatbot. It is an Agentic Learning Guide governed by a behavioral specification that defines its character, method, and responsiveness, including how much support it provides at every step and what kind.


Evolves Continuously

14 versions in 16 months. Improves faster than any instructor corps could retrain.

Every Learner Different

Same modules, different exercises. Personalized to role, industry, context.

Native Language

Exercises in the learner's preferred language. Bilingual communities need no translation.

Reads Capability

Never used AI? More scaffolding. Experienced user? Pushed harder. No configuration.

Telemetry for Leaders

Structured data on emerging capabilities. Which teams advance, where support is needed.

Human-Supported

Virgil handles practice. Humans handle community and edge cases. 75% completion where 5-15% is typical.

Part 5

The System

Twenty modules delivered within a 90-day window (average completion: 5.5 weeks). Each module takes about twenty minutes: a short video, then applied practice with Virgil using the learner's real work. Tool-agnostic. Privacy by architecture.

The Learning Pathway

Launch | Course | Focus | Timeframe
Live | AI Agility | Human × AI Collaboration | 0-6 months
Apr 2026 | AI Workflows | Process Integration | 0-6 months
Jul 2026 | Agentic Workflow Design | Redesigning Work | 6-12 months
Jan 2027 | AI Orchestration | Systems & Scale | 12-24 months

  • 75% 90-Day Completion (avg 5.5 weeks; industry typical 5-15%)
  • 14 Curriculum Versions in 16 Months
  • 10+ Industries Deployed
  • 1-Year Learning Community

Part 6

Testimonials

"The AI Agility Challenge gave us a practical way to build confidence, capability, and momentum across our entire community."

Randy VanWagoner, President, Mohawk Valley Community College

"The most valuable aspect of using AI is having a partner working alongside me."

Kim Whiteside, Workforce Instructional Designer, Metropolitan Community College

"I watched people who were hesitant transform into confident leaders who now use AI every day."

Karen Korotzer, CEO, The Arc Oneida-Lewis Chapter

"The content is very aligned with our values, which put humans at the center."

Marie Holive, CEO, Proteus International

The Partnership Model

Your Institution Brings

  • Local relationships and community trust
  • Enrollment infrastructure
  • Prerequisite and wrap-around support
  • Institutional knowledge and context
  • Faculty and staff engagement

humanskills.ai Brings

  • Curriculum and Virgil technology
  • HumanAI Taxonomy (8 domains, 5,120 building blocks)
  • Active cohort facilitation
  • Ongoing evolution and updates
  • 1-year learning community

Typical deployment: first conversation to active cohort in 90 days

AI Literacy Is the Baseline.
AI Agility Helps Build Directors of Intelligence.