SPOTLIGHT 10.08.25

Blending Risk Assessment and AI Learning

Joel Swee, Associate Director, Office of Risk Management and Compliance, National University of Singapore

How can we foster a culture of risk awareness amongst our students, enabling them not only to comply with safety protocols but also to appreciate risk as part of a dynamic decision-making process?

At the National University of Singapore (NUS), this challenge has become the driving force behind a new AI-powered approach to activity-based risk assessments.

Each year, NUS students organise more than 300 co-curricular and outdoor adventure activities, ranging from local events to overseas expeditions. The scale and variety of these programs demand a robust and consistent framework for safety and health (S&H) risk management. However, beyond regulatory compliance lies an equally important goal: cultivating a deeper understanding of risk among the students themselves.

How the AI Tool Supports Better Risk Decisions

To address this dual objective, NUS has introduced a generative AI application that combines a safety planning tool with a learning companion for student leaders. The tool is a web-based application that utilises a data-secured version of a large language model to provide context-specific guidance on common hazards and corresponding safety measures, tailored to the planned activity.
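A hypothetical sketch of how such a tool might ground a language model in curated, activity-specific guidance (the activity names, guidance text, and function are illustrative assumptions, not the NUS implementation):

```python
# Hypothetical sketch: ground the language model in vetted, activity-specific
# guidance so its questions are context-specific. All names are illustrative.

CURATED_GUIDANCE = {
    "night hike": "Common hazards: poor visibility, uneven terrain, fatigue. "
                  "Typical controls: headlamps, marked routes, buddy pairs.",
}

def build_prompt(activity, student_plan):
    """Assemble a context-grounded prompt for the language model."""
    context = CURATED_GUIDANCE.get(activity, "No curated guidance on file.")
    return (
        f"Vetted guidance for '{activity}': {context}\n"
        f"Student plan: {student_plan}\n"
        "Ask reflective questions about gaps; do not write the assessment."
    )

prompt = build_prompt("night hike", "Hike at 8pm with 12 participants.")
print(prompt)
```

Grounding the model in a vetted library, rather than letting it answer freely, is one common way to keep guidance consistent with an institution's own safety standards.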

It is important to note that the AI tool does not simply hand students ready-made answers. Rather than supplying a finished assessment, it prompts students to reflect more deeply. For example:

Are the identified hazards comprehensive?

Have environmental, medical, and even geopolitical risks been considered, especially for overseas activities?

Do the proposed control measures meaningfully reduce the risk?

Based on students' inputs, the tool will flag incomplete areas such as overlooked hazards and weak control measures. This structured, iterative engagement encourages students to internalise risk thinking and makes the tool a valuable extension of traditional S&H education.
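One plausible way to flag overlooked hazards and weak controls is to compare the student's draft against a curated reference library and return reflective prompts rather than answers. The library entries and function below are illustrative assumptions, not the actual NUS system:

```python
# Minimal sketch (hypothetical): flag gaps in a student draft by comparing it
# against a curated reference library, returning questions rather than answers.

REFERENCE_LIBRARY = {
    "overseas expedition": {
        "hazards": {"heat exhaustion", "road traffic", "food and water safety",
                    "geopolitical instability"},
        "controls": {"travel advisory check", "medical kit", "buddy system"},
    }
}

def review_draft(activity, identified_hazards, proposed_controls):
    """Compare a draft against the vetted library and prompt on gaps."""
    ref = REFERENCE_LIBRARY.get(activity, {"hazards": set(), "controls": set()})
    missing_hazards = ref["hazards"] - set(identified_hazards)
    missing_controls = ref["controls"] - set(proposed_controls)
    prompts = []
    for hazard in sorted(missing_hazards):
        prompts.append(f"Have you considered '{hazard}' for this activity?")
    for control in sorted(missing_controls):
        prompts.append(f"Would '{control}' meaningfully reduce the risk?")
    return prompts

prompts = review_draft(
    "overseas expedition",
    identified_hazards=["heat exhaustion", "road traffic"],
    proposed_controls=["medical kit"],
)
for p in prompts:
    print(p)
```

In this sketch, a draft that omits geopolitical and food-safety hazards would receive questions about exactly those gaps, mirroring the iterative, reflective engagement described above.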

Collaboration Is Essential

Student leaders, as well as staff who are not S&H professionals, were actively involved in the development of the AI risk assessment tool, from the early proof-of-concept stage to the final user acceptance testing. This collaborative approach ensured that the resulting solution was both user-friendly and effective in meeting the needs of those conducting risk assessments.

Guardrails, Accountability, and the Human in the Loop

Recognising that AI's value depends on its safeguards, the system is built around a human-in-the-loop design. The system operates in the following manner:

  • The AI's recommendations draw from a curated library of activity types, hazard examples, and control measures, the contents of which are vetted by subject matter experts.

  • Updates to this library can only be made by designated S&H professionals, not the AI.

  • A staff advisor reviews every student-generated risk assessment before the activity takes place.
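The guardrails above can be sketched as a simple human-in-the-loop workflow. The class and role names below are illustrative assumptions about such a design, not NUS's actual code:

```python
# Illustrative sketch of a human-in-the-loop design: the AI only suggests from
# vetted content, library updates are restricted to S&H professionals, and a
# staff advisor must sign off before an activity proceeds.

class CuratedLibrary:
    def __init__(self):
        self._entries = {}

    def update(self, user_role, activity, hazards):
        # Only designated S&H professionals may change the vetted content.
        if user_role != "s&h_professional":
            raise PermissionError("Only S&H professionals may update the library")
        self._entries[activity] = hazards

    def suggest(self, activity):
        # AI recommendations draw solely from vetted entries.
        return self._entries.get(activity, [])

class RiskAssessment:
    def __init__(self, activity, library):
        self.activity = activity
        self.suggestions = library.suggest(activity)  # coaching input, not a verdict
        self.approved = False

    def advisor_review(self, approve):
        # A staff advisor reviews and signs off before the activity takes place.
        self.approved = approve

library = CuratedLibrary()
library.update("s&h_professional", "kayaking trip", ["capsizing", "hypothermia"])

assessment = RiskAssessment("kayaking trip", library)
assessment.advisor_review(approve=True)
```

The key design choice is that the model never holds write access to the vetted library and never grants final approval; both remain human responsibilities.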

By operating in this manner, the AI becomes a coach, not a decision maker. The result is greater efficiency in routine consultations, freeing human experts to focus on other priorities such as emerging risks, audit performance, and trend monitoring.

A Scalable Model for Culture Change

While the initial rollout supports student activities, the tool's architecture is designed for broader application. Future phases aim to support activities such as laboratory work, field investigations, and makerspace and machine shop-based learning, thereby extending the tool's benefits to principal investigators and research teams.

Looking Ahead

What distinguishes this AI initiative is the philosophy behind it. AI is being positioned not just as a functional tool, but as an educational catalyst. It strengthens the role of those closest to the risks by guiding them to think critically, act responsibly, and make informed decisions, a skill set that will serve students well when they enter the workplace.

Further, by engaging students in this innovative framework, institutions can better leverage AI to develop risk solutions more efficiently and transform how safety risk management is approached, both at the University and around the globe.


Joel Swee, Associate Director

Office of Risk Management and Compliance, National University of Singapore

Joel has over a decade of experience in safety and health risk management. He currently serves at the National University of Singapore, where he fosters a culture in which risks are understood, owned, and actioned.