Is AI Scaring You?

Alongside the excitement surrounding the rapid introduction of AI, there’s a lot of scary talk: the breathtaking scale of personal data scraped from the internet to feed Large Language Models (the algorithmic basis for chatbots like OpenAI's ChatGPT and Google's Bard); the discriminatory bias in large-scale AI outputs that leads to unchecked decision-making; and AI in the hands of evil geniuses who now have the means to manipulate not only individuals but whole governments.

Privacy and copyright authorities are on high alert, and the unknown consequences for you and your organization of breaking rules you don’t even understand are downright scary. If you’re so inclined, you can indulge your taste for AI horror by browsing the thousands of unsettling scenarios documented in the AI Incident Database.

Should Businesses Avoid Using AI?

As is often the case, while the threats are real, the risks to you or your organization are manageable and certainly don’t outweigh the evident value of AI as a business tool. Many applications in health care, finance, and planning have incorporated specialized AI elements into their operations for years; in fact, you would be hard-pressed to find an organization not using AI in some form.

It’s prudent to take mitigating steps to protect the security and integrity of your information, starting with:

  • limiting interaction between your data and public Large Language Models (LLMs);

  • minimizing data through de-identification;

  • having humans review decision-making outputs, including sources; and,

  • ensuring accurate and limited assignment of internal user access privileges.
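To make the second step above concrete, here is a minimal sketch of what de-identification can look like in practice: masking obvious identifiers before a prompt ever leaves your environment. The patterns, labels, and `deidentify` function are illustrative assumptions, not a product recommendation; real de-identification tools go well beyond regex matching (names, addresses, context-dependent identifiers), so treat this only as a starting point.

```python
import re

# Illustrative redaction patterns (assumptions for this sketch):
# mask email addresses and North American phone numbers with
# labelled placeholders before text is sent to a public LLM.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def deidentify(text: str) -> str:
    """Replace matched identifiers with labelled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact Jane at jane.doe@example.com or 403-555-0199 about the claim."
print(deidentify(prompt))
# → Contact Jane at [EMAIL] or [PHONE] about the claim.
```

Even a simple pass like this reduces what a public model (and its provider) ever sees, which is the point of data minimization: you can't leak what you never sent.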

There is no technical silver bullet for ensuring the safe use of AI (even though, ironically, some are touting the ability of AI to regulate itself). Implementing safe AI comes down to the things many organizations like to avoid: information policy, user awareness and training, and monitoring of at-risk information, all of which require good information governance – something we’ll get into in a future blog.

Need Help?

Understandably, some people and organizations are finding the world of AI and its associated risks and regulations more overwhelming than anything they’ve had to deal with – for some, in their entire history of being in business. If you’re feeling ill-equipped, Cenera has an expert team dedicated to helping you understand the evolving privacy legislation applicable to AI, how it impacts your organization, and what you need to do.

Reach out with your questions: https://www.cenera.ca/contact-us

Additional Resources

We’ve written numerous blogs on various topics within the realm of AI. Visit our blog and type AI into the search bar.

Canadian Guardrails for Generative AI – Code of Practice

Guide on the use of generative artificial intelligence - Canada.ca

AI Risk Management Framework

Rick Klumpenhouwer

A passion for strategic information management and a strong academic background make Rick Klumpenhouwer a highly capable advisor for those seeking to integrate compliance with real-world management. In addition to his Master’s degrees in Archival Studies and History, Rick is certified with the Canadian Institute of Access and Privacy Professionals (CIAPP) at Master status, and as a Specialist in Electronic Content Management with the Association of Information and Image Management (AIIM). For many years, he has played the role of hockey and Irish dancing dad while indulging his love of European and world soccer leagues and tournaments.
