AI: The Core of Everything?
From the stock market to the classroom to the boardroom, AI is top of mind. Since risks must be reviewed cross-functionally and enterprise-wide, I am sharing my favorite AI data points from Educause, AACRAO, and CrowdStrike to demonstrate the interconnectedness of risk. Viewed holistically, this data shows the broad impact AI will have on higher education, the workforce, and the way institutions will need to manage cybersecurity.
Educause recently published "The Impact of AI on Work in Higher Education" by Jenay Robert. This research and survey, in partnership with AIR, NACUBO, and CUPA-HR, summarizes work-related institutional AI strategies, policies, and guidelines and the risks, opportunities, and challenges associated with using AI for work in higher education. I will focus on the risks and challenges, but you can read the full report here.
More than two-thirds of the respondents (67%) identified six or more risks as "urgent."
The three most frequently mentioned challenges associated with using AI tools at work are AI's rapid pace of change, a lack of AI expertise, and a lack of AI best practices.
AI policies and guidelines continue to lag behind AI adoption.
Though respondents believe AI offers opportunities (such as offloading simple tasks), most responses focused on risks, including job elimination, replacing subject-matter experts with bots, and the inability to keep up with new and evolving cybersecurity threats.
AACRAO recently released "Research Updates and News, AI in Higher Ed, Faculty-Development Trends, 2026 Higher Education Trends, and More." Below are some of the key takeaways:
AI is impacting research supervision. An 18-month autoethnography concluded the following:
AI tools measurably improved the learner's work when approached with healthy skepticism.
The supervisor's role is shifting away from information-giver toward ethical facilitator.
Responsible integration hinges on transparency and documentation.
A TechCrunch article provided additional insights on how AI is impacting curriculum.
Traditional computer-science programs are seeing a dip in enrollment as learners are pivoting toward specialized AI degrees.
AI-related academic programs are on the upswing to meet learner demand and to remain competitive.
AI adoption is creating friction among faculty members, many of whom remain AI-hesitant, even though students, parents, and employers value AI training.
A CBS interview with Adam Meyers of CrowdStrike highlights some sobering data points:
Threat actors are moving faster than ever!
AI-enabled attacks have increased by 89%.
The average "breakout time" is 29 minutes.
The fastest eCrime breakout time on record is 27 seconds.
AI is being used as a weapon and to increase the attack surface.
What does this all mean?
AI risks are broad, but governance is lagging. Across higher education, most recognize AI as both a risk and an opportunity. However, whether hampered by a lack of AI expertise or the absence of established best practices, there is a widening gap between responsible adoption and accountability.
Registrars and admissions professionals report productivity gains from AI tools, but only when the tools are deployed with transparency and clear documentation. The role of faculty is also evolving: researchers and professors are shifting from "information providers to ethical facilitators." This finding shows how profoundly AI adoption will affect how institutions educate and operate. Academic programs will need to be realigned to meet market demand and reduce enrollment risk.
The cyber threat landscape is accelerating dangerously. The CrowdStrike data underscores the need for resources and risk management tools: threat actors are moving faster than ever, and AI-driven attacks are on the rise. Understanding risk velocity, not just risk probability, is an essential consideration in your ERM framework.
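To make the risk-velocity point concrete, here is a minimal scoring sketch (my own illustration, not drawn from any of the reports above): a traditional likelihood-by-impact score is extended with a velocity dimension, so fast-moving threats rank higher even when slower risks look comparable on paper. The risk names, 1-5 scales, and multiplicative weighting are all hypothetical assumptions.

```python
# Hypothetical ERM scoring sketch: folding velocity into a likelihood x impact score.
# All risk names, scales, and weights below are illustrative assumptions.

def risk_score(likelihood: int, impact: int, velocity: int) -> int:
    """Score a risk on 1-5 scales; velocity captures how fast it can materialize."""
    return likelihood * impact * velocity

# (likelihood, impact, velocity) on 1-5 scales
risks = {
    "AI-enabled phishing": (4, 4, 5),  # high velocity: breakout measured in minutes
    "AI policy lag":       (5, 3, 2),  # serious, but slower-moving
}

ranked = sorted(risks.items(), key=lambda kv: risk_score(*kv[1]), reverse=True)
for name, dims in ranked:
    print(f"{name}: {risk_score(*dims)}")
```

Under this toy weighting, the fast-moving phishing risk (4 x 4 x 5 = 80) outranks the policy-lag risk (5 x 3 x 2 = 30), even though the latter is more likely; that is the kind of reordering a velocity dimension is meant to surface.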
Bottom line: AI has moved from a trending headline to the foundation of how the world will think, decide, and create. Its impact on higher education will help define that trajectory. Institutions and organizations that understand AI's risks and rewards will balance governance with innovation and treat AI risk management as a strategic priority, not an afterthought.