Nudge Users to Catch Generative AI Errors
Using large language models to generate text can save time but often results in unpredictable errors. Prompting users to review outputs can improve their quality.
AI-Related Risks Test the Limits of Organizational Risk Management
An international panel of AI experts believes many organizations need to adjust their risk management practices in the context of AI. Discover insights and strategies for keeping pace with rapid technological advancement.
4 Types of Gen AI Risk and How to Mitigate Them
Understand the different types of risk associated with generative AI and strategies for mitigating them to ensure safe and responsible use of AI technologies.
8 AI Security Issues Leaders Should Watch
Learn about key AI security issues leaders should monitor, including data poisoning, model theft, adversarial attacks, and unintended biases.
Navigating the perils of artificial intelligence: a focused review on ChatGPT and responsible research and innovation
Explore how using ChatGPT and similar AI tech raises ethical challenges, demanding responsible research to manage risks and boost societal gains.
In AI We Trust – Too Much?
Explore the complex relationship between human trust and technology, and the urgent need to blend human emotional intelligence into AI tools.
AI Systems are Getting Better at Tricking Us
Discover how AI systems, though not designed to deceive, are capable of unexpected deception in pursuit of their goals, shedding light on the challenges of controlling and understanding AI.
My deepfake shows how valuable our data is in the age of AI
Discover how deepfakes affect society’s ability to discern truth. Reflect on data privacy, ownership, and ethical AI considerations amid technological advancement.
AI’s Trust Problem
AI’s trust problem escalates as its power grows. Key concerns include disinformation, bias, job loss, and environmental impact.
Tackling AI Risks: Your Reputation is at Stake
Assessing AI risks in the context of their use, rather than as inherent in the technologies themselves, leads to smarter and safer implementations.
Privacy Impact Assessments for Generative AI Instructional Use
Guidance for instructional use of AI based on current Privacy Impact Assessments (PIA) including any recommendations for Instructors…
UBC OCIO – Policy, Standards, & Resources
The CIO has published Information Security Standards that govern the use and protection of University data and computing resources.
UBC – Protection of Privacy
UBC’s privacy guidance for faculty and staff is informed by federal and provincial laws.
Government of Canada – Responsible use of artificial intelligence (AI)
The Government of Canada provides information and guidance on the responsible use of artificial intelligence.
Government of Canada – Generative Artificial Intelligence Guidance
The Government of Canada provides guidance on the responsible exploration and use of generative AI.
AI-generated misinformation: 3 teachable skills to help address it
Discover essential strategies for educators to navigate the pervasive threat of AI-generated misinformation…
Canadian privacy regulators launch principles for the responsible development and use of generative AI
Federal, provincial and territorial privacy authorities have launched a set of principles to advance the responsible…
Principles for responsible, trustworthy and privacy-protective generative AI technologies
Federal, provincial and territorial privacy authorities have launched a set of principles to advance the responsible, trustworthy and privacy-protective development and use of generative artificial intelligence (AI) technologies in Canada.
Tackling Trust, Risk and Security in AI Models
Learn how a comprehensive AI trust, risk and security management (TRiSM) program helps you integrate much-needed governance upfront.