SPOTLIGHT 01.15.26
A 2026 AI Risk Lens for Higher Ed Boards
Dr. Aviva Legatt, Founder of EdGenerative and Affiliated Faculty at the University of Pennsylvania
In my work with boards, presidents, and AI councils over the last two years, I’ve seen a clear shift: AI is no longer a future issue you can monitor from a distance. It is a live governance responsibility that is beginning to surface in accessibility enforcement, accreditation reviews, and federal funding conversations.
Three clocks can help institutions focus on that responsibility:
1. Accessibility: a fixed compliance date
The Department of Justice’s final Title II rule on web and mobile accessibility requires state and local governments, including public universities, to bring websites and mobile apps into conformance with WCAG 2.1 Level AA, with a compliance deadline of April 24, 2026, for larger entities.
Any AI-enabled system that lives on your public web or mobile footprint sits inside that obligation. Boards need a clear, time-bound plan that identifies which AI tools are student-facing, their current accessibility status, and how they will reach WCAG 2.1 AA before the deadline.
2. Accreditation: a continuous evidence expectation
In October 2025, the Council of Regional Accrediting Commissions (C-RAC) confirmed that AI can support learning evaluation when institutions can demonstrate transparency, human accountability, and protections against bias. CHEA’s Guiding Principles for Artificial Intelligence in Accreditation and Recognition add further expectations: documented human oversight, equity, privacy, and reliability in how AI is used.
Taken together, these statements turn every self-study and reaffirmation from 2026 onward into an AI governance review. Admissions, transfer credit, advising, and assessment processes now need written explanations of where AI is used, how people remain accountable for decisions, and what evidence demonstrates fairness and reliability. When those explanations exist in well-structured documents, accreditors see intentional practice rather than improvised experimentation.
3. Federal funding and standards: a shared language for risk
On July 22, 2025, the U.S. Department of Education’s Dear Colleague Letter encouraged institutions to use federal funds to improve outcomes with AI, while insisting that civil rights, privacy, and program requirements remain intact.
At the same time, the NIST AI Risk Management Framework and its Generative AI Profile have become the backbone for serious institutional conversations about AI risk, organized around four functions: Govern, Map, Measure, Manage. WCET’s 2025 AI Education Policy & Practice Ecosystem Framework reinforces this trajectory by placing governance ownership, risk management, and baseline safeguards ahead of large-scale deployment of advanced tools.
Boards can treat these frameworks as alignment tools. A practical next step is to ensure that the institution’s AI governance framework is explicitly mapped to NIST’s four functions and is being used to steer pilots, procurement, and scale-up decisions—not just to describe them after the fact.
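For boards that want that mapping to be concrete rather than rhetorical, a simple structured representation can expose gaps at a glance. The sketch below is illustrative only: it assumes Python, and the activity names are invented examples, not a prescribed taxonomy or anything drawn from NIST’s own documents.

```python
# Illustrative sketch: a hypothetical mapping of institutional AI governance
# activities to the four NIST AI RMF functions. Activity names are invented.
NIST_FUNCTION_MAP = {
    "Govern":  ["AI council charter", "board reporting cadence", "vendor AI contract clauses"],
    "Map":     ["AI tool inventory", "student-data flow documentation", "use-case risk tiering"],
    "Measure": ["bias and accuracy testing", "WCAG 2.1 AA accessibility audits"],
    "Manage":  ["incident response playbook", "pilot go/no-go reviews", "tool decommissioning"],
}

def unmapped_functions(framework: dict[str, list[str]]) -> list[str]:
    """Return any NIST functions the institution's framework leaves empty."""
    return [fn for fn, activities in framework.items() if not activities]

gaps = unmapped_functions(NIST_FUNCTION_MAP)
print("Functions with no mapped activity:", gaps or "none")
```

Even a lightweight structure like this makes it easy for a board to ask which function is thinnest and whether the mapping is steering decisions or merely documenting them after the fact.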
Turning clocks into constructive oversight
Across the campuses I support, the most productive boards focus on three areas:
Inventory. They require a current, institution-wide inventory of AI tools that touch student data or student-facing experiences, tagged with basic risk tiers and accessibility status (a minimal sketch of one such record follows this list).
Evidence. They ensure that, for major academic and student-success processes, written AI governance evidence already exists—evidence that can be handed to an accreditor, a civil-rights investigator, or a funder without a scramble.
Iteration. They expect the AI governance framework to be versioned, monitored, and revised. Each revision is tied to metrics, incidents, and external standards, and the board sees that history over time.
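As a concrete illustration of the inventory item above, here is a minimal sketch of what one inventory record might look like. It assumes Python, and the field names, tier labels, and example tool are hypothetical, not a standard schema.

```python
# Illustrative sketch of one row in an institution-wide AI tool inventory.
# Field names, tier labels, and the example tool are hypothetical.
from dataclasses import dataclass

@dataclass
class AIToolRecord:
    name: str                  # vendor or product name
    student_facing: bool       # appears on the public web or mobile footprint?
    touches_student_data: bool
    risk_tier: str             # e.g., "low", "moderate", "high" (institution-defined)
    wcag_21_aa_status: str     # e.g., "conformant", "remediation planned", "unknown"
    remediation_target: str    # target date ahead of the April 24, 2026 deadline

inventory = [
    AIToolRecord(
        name="Example advising chatbot",  # hypothetical tool
        student_facing=True,
        touches_student_data=True,
        risk_tier="high",
        wcag_21_aa_status="remediation planned",
        remediation_target="2026-03-01",
    ),
]

# A board-level view: student-facing tools not yet WCAG 2.1 AA conformant.
flagged = [t.name for t in inventory
           if t.student_facing and t.wcag_21_aa_status != "conformant"]
print("Needs accessibility attention:", flagged)
```

The point is not the tooling; even a shared spreadsheet with these columns gives a board a current, defensible answer to the accessibility and risk questions above.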
That posture is not about slowing AI down. It is about building a structure that your institution can defend, improve, and ultimately use to unlock the upside of AI with credibility.
Dr. Aviva Legatt is the founder of EdGenerative and Affiliated Faculty at the University of Pennsylvania, where she advises boards, presidents, and systems on AI governance, AI adoption, and AI microcredentials. A longtime Forbes contributor on the future of education, she helps institutions design AI strategies that support mission and stand up to regulators and accreditors.