AI in Academia: Risks and Frameworks
AI and its impact on higher education come up daily. From research to strategic partnerships, admissions, and student success, AI usage is growing rapidly. “Resistance is futile,” as the Borg said in Star Trek. Organizations will have to evolve to adopt AI responsibly.
The desire to improve efficiency and support better decision-making is a key driver propelling the AI revolution. With these advancements, however, come corresponding risks that must be addressed to safeguard students, faculty, and staff.
What are the risks?
Academic integrity: cheating and plagiarism
Data privacy and governance: breaches of sensitive student data
Ethics and equity: bias and unfair treatment
Quality of education: over-reliance on technology and stifled creativity
Transparency and accountability: opaque AI systems are difficult to understand, which breeds distrust
What's next?
Organizations will have to determine the level of acceptable risk in the context of the institution's mission. In addition, prioritizing which AI projects move forward will have to be a cross-functional effort. The question is: how?
How can AI be assimilated through a risk-based approach?
In my experience, colleges and universities operate in a decentralized structure where many departments function independently. This can be a challenge when implementing an AI risk management program. A shared governance structure can balance central oversight with localized autonomy. This hybrid model can help set priorities and encourage mindful adoption of AI.
Policy development, AI literacy, and training are all parts of a sound strategy. An ethics committee and other working groups can provide additional insight. Selecting a framework, however, is essential to building a sustainable and effective program.
Options to consider:
The National Institute of Standards and Technology (NIST), an agency of the US Department of Commerce, maintains the AI Risk Management Framework (AI RMF) and released a draft Generative AI Profile in April 2024. Many colleges and universities already use NIST guidance in their cybersecurity efforts, which makes this a natural option.
The Organisation for Economic Co-operation and Development (OECD), an international organization, updated its Principles on AI in May 2024. The principles center on human-centric values of ethics and equity, which helps mitigate bias.
The International Organization for Standardization (ISO), a non-governmental global standards body, has published ISO/IEC 42001, a management system standard for AI.
Mastering AI Policies: A Framework for Institutional Alignment, a publication from edtech firm Anthology, highlights key areas of guidance for an educational setting.
The good news is that several options are available to help colleges and universities address AI risks to students, faculty, and staff while still encouraging innovation and enhanced learning.
Has your organization instituted an AI policy framework?
Are you aligning your policies and practices with one of the options above, adopting a hybrid approach, or developing your own guidelines?
If you have suggestions on mitigating AI risks or creating a resilient AI usage policy, or if you would like to share your experience adopting a framework, please do so in the comments.