March 9, 2026

Check your AI literacy

What students need to know about using Gen AI tools

Tools like ChatGPT, Copilot, and Claude can help us save time, deepen our understanding, and kick-start research – but what should students know to use these tools effectively and ethically? 

To find out, we talked to Leeanne Morrow, Associate University Librarian and Director of CAEILI (Centre for Artificial Intelligence Ethics, Literacy and Integrity) at UCalgary.


Leeanne Morrow at the Doucette Library

Ana DuCristea

LLMs are not a search engine (or an academic source)

Large Language Models (LLMs) such as OpenAI’s GPT, Anthropic’s Claude, and Google’s Gemini models are AI systems trained on massive amounts of text data to predict and generate human-like text by identifying complex patterns. They can sound like they’re confidently presenting facts, but what they’re really doing is predicting the next most likely word based on patterns in their training data. That means you can get a smooth, assured answer with no guarantee that it’s correct. 

Even if you hit the “web search” or equivalent button in ChatGPT, Claude, or any other platform, your results may not be accurate. Web search will pull in pages, but the AI can misrepresent, oversimplify, or lean on low-quality sources. Plus, the open web is not the same as academic sources. As Morrow points out, strong research often depends on library databases and scholarly sources, which are not easily scrapable online. 

Part of what makes LLMs convincing is a phenomenon called sycophancy: they’re designed to please you. “It’s made to give you what you want,” says Morrow, “it will sometimes overconfidently provide you with inaccurate information because it wants you to be happy.” 

This is especially dangerous when it comes to citations. ChatGPT and similar tools have been known to generate completely fake references that seem plausible, complete with real author and journal names, because they are algorithmically piecing together patterns. “Faculty members know the citations in their area of discipline,” Morrow warns, “They’ll see it a mile away.” Submitting fabricated citations is an academic misconduct issue, so always trace a citation back to its actual source.

Know the expectations before you start

Perspectives and approaches to the use of gen AI differ across campus — rules and expectations will vary from course to course. Some instructors are fine with it, some have restrictions, and others may prohibit it entirely. Before you use an AI tool to help with an assignment, check the course outline or ask your professor directly if you are unsure. Getting clarity upfront is much easier than explaining yourself after the fact.

Your data is the product

Free AI tools are not charities. “[Their purpose is] to make money,” says Morrow. “Every time you go and do something in there, you’re sharing more data with them, and they’re building better tools to sell it right back to you.” Anything you type into a public AI tool could be used to train the next version of that model. 

Think twice before sharing anything sensitive (personal info, private docs, copyrighted works) with an AI chatbot. This also applies to uploading PDFs of library articles or lecture slides to get a quick summary. Even if your intentions are good, uploading copyrighted material to third-party tools may violate institutional policies or licensing agreements.

A better, safer approach is to use your own notes: type up what you took away from a lecture, then ask AI to explain further or turn it into a quiz.

Just because you can, doesn’t mean you should

This is Morrow’s most important point: AI literacy is not just about how to use these tools; it’s knowing when not to. Offloading too much of your thinking to AI, especially early in your studies, means you won’t build the foundational knowledge you need for more advanced work later. 

“Unless you have foundational knowledge in a discipline to actually evaluate something that AI has produced, you’re going to copy and paste something that’s inaccurate and you’re going to end up with a misconduct.” 

AI works best as a support, not a replacement. Use it to brainstorm, organize, or stress-test ideas you already have, not to generate ideas you haven’t yet thought through.

Where to go from here

CAEILI, located in the Doucette Library (Education Block 370), is your starting point for navigating AI use – whether that’s workshops, a self-paced AI literacy certificate, or a one-on-one appointment with a subject librarian who can point you in the right direction for your discipline. Starting this fall, the centre will also host monthly student drop-in mixers where you can bring a project, ask questions, and connect with other students across disciplines. 

The library has also been trialling AI research assistants like Consensus and scite.ai, which pull information from scholarly datasets. 

If you’re looking to get more technical with AI, there are communities on campus for that as well. AI Research School offers a structured, hands-on introduction to practical AI work, and the UCalgary AI Club hosts workshops, project opportunities, and more. 

“I would like to see all our students graduate with some AI fluency,” says Morrow, “and able to critically evaluate their AI use.” 

As she puts it: AI supports, but integrity should lead.