Generative AI Tools from UTS

Microsoft Copilot
Copilot is an AI assistant designed to help with a wide range of tasks. Whether you need answers, information, or support with productivity and creativity, Copilot adapts to your needs.
Want to use a third-party AI tool at McMaster University?
UTS conducts thorough assessments of third-party AI tools to ensure they meet standards for security and functionality. The process begins with opening a service ticket, which allows us to systematically evaluate the tool's compliance with our requirements and its suitability for use within our environment.
AI Learning & Educational Tools for McMaster Community Members

What is Generative AI? (Video Learning Path)
LinkedIn Learning offers a learning path for staff interested in understanding more about generative AI.

Learn More About Generative AI
Navigating the world of generative AI can be overwhelming with the abundance of resources, news articles, and rapidly evolving information. To help you get started, we’ve compiled a collection of valuable insights and materials for your learning journey.
McMaster University & Artificial Intelligence

McMaster AI News & Events
Discover news, events, resources and updates on generative AI at McMaster University.

McMaster AI Advisory Committee
The Artificial Intelligence (AI) Advisory Committee serves as a strategic body to guide the university’s endeavours related to AI, with a focus on generative AI, ensuring a holistic approach that encompasses academic, research, and operational perspectives.

Privacy Considerations
Each tool comes with its own set of privacy and security concerns, but there are general risks to be mindful of before using them. Here, you’ll find an overview of the privacy risks and considerations when using generative AI tools.
Artificial Intelligence & Information Security
AI technology offers tremendous benefits in terms of efficiency and productivity. It can automate repetitive tasks, analyze vast amounts of data quickly, and provide insights that drive better decision-making. However, it’s important to be aware of the potential risks associated with AI systems. These include data quality issues, harmful biases, privacy concerns, and the need for human oversight to ensure ethical and responsible use. Understanding these risks helps in developing and deploying AI systems that are both effective and trustworthy.
AI systems can have unique risks such as data quality issues, harmful biases, dependency on large datasets, and changes during training that can alter performance. Additionally, AI systems pose privacy risks due to enhanced data aggregation capabilities, and their complexity can make it difficult to predict failure modes.
These risks are not fully addressed by current risk frameworks, making it essential to develop new approaches to manage them effectively.
Data quality is crucial because AI systems rely on accurate and representative data to function correctly. Poor data quality can lead to harmful biases, reduced trustworthiness, and negative impacts on AI system performance.
For example, if the data used to train an AI system is not representative of the intended context, the system may make incorrect or biased decisions, leading to unintended consequences. Ensuring high data quality helps maintain the reliability and fairness of AI systems.
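One basic representativeness check is to measure how evenly the classes in a labelled training set are distributed. The sketch below is illustrative only (the function name and threshold are assumptions, not part of any UTS tooling): it flags classes whose share of the data falls far below an even split, which is one common sign that a model trained on the data may behave poorly for those groups.

```python
from collections import Counter

def class_balance(labels, warn_ratio=0.2):
    """Report each class's share of a labelled dataset and flag
    classes that fall below warn_ratio of a perfectly even split."""
    counts = Counter(labels)
    total = sum(counts.values())
    expected = 1 / len(counts)  # share each class would have if balanced
    report = {}
    for cls, n in counts.items():
        share = n / total
        report[cls] = (share, share < expected * warn_ratio)
    return report

# Example: "approved" decisions dominate the training data 95 to 5
labels = ["approved"] * 95 + ["denied"] * 5
for cls, (share, flagged) in class_balance(labels).items():
    print(cls, round(share, 2), "UNDER-REPRESENTED" if flagged else "ok")
```

A check like this catches only gross imbalance; representativeness also depends on whether the data covers the contexts the system will actually be used in.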
Pre-trained models can significantly advance research and improve the performance of AI systems by leveraging existing knowledge and reducing the time required for training.
However, they also introduce risks such as statistical uncertainties, bias issues, and challenges in managing scientific validity and reproducibility. These models may carry over biases from the data they were originally trained on, and their complexity can make it difficult to understand and control their behavior fully.
AI systems are harder to test due to their complexity, opacity, and the lack of established testing standards. Unlike traditional software, which follows predefined rules, AI systems learn from data and can exhibit unpredictable behavior.
This makes it challenging to determine what to test and how to ensure the system’s reliability and safety. Additionally, AI systems may require more frequent updates and maintenance to address issues such as data drift and model degradation.
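Data drift is one of the maintenance issues mentioned above that can be monitored mechanically. As a minimal sketch (the function name and the threshold of two standard deviations are assumptions for illustration), live input data can be compared against the training data's distribution, with large shifts signalling that the model may need retraining:

```python
import statistics

def drift_score(train, live):
    """Mean shift of live data versus training data, measured in
    training standard deviations; large values suggest data drift."""
    mu = statistics.mean(train)
    sigma = statistics.stdev(train)
    return abs(statistics.mean(live) - mu) / sigma

train_values = [10, 11, 9, 10, 12, 10, 11, 9]
print(drift_score(train_values, [10, 11, 10, 9]))   # small: no drift
print(drift_score(train_values, [20, 21, 19, 22]))  # large: drift
```

Production systems typically use richer distributional tests, but the principle is the same: keep comparing what the model sees now against what it was trained on.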
Human oversight is essential to ensure that AI systems make decisions that align with ethical standards and legal requirements. It helps mitigate risks and ensures that AI systems are used responsibly and effectively.
Human oversight can involve monitoring AI system performance, making critical decisions that the AI system cannot handle, and intervening when necessary to prevent harmful outcomes. This oversight is particularly important in high-stakes applications where AI decisions can have significant impacts on individuals and society.
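A common way to build this kind of oversight into a system is confidence-based routing: the AI acts automatically only when it is confident, and everything else is escalated to a person. The sketch below is a hypothetical illustration (the function name and threshold are assumptions, not a prescribed design):

```python
def route(prediction, confidence, threshold=0.9):
    """Act on high-confidence AI decisions automatically;
    send low-confidence ones to a human reviewer instead."""
    if confidence >= threshold:
        return ("auto", prediction)
    return ("human_review", prediction)

print(route("approve", 0.97))  # handled automatically
print(route("approve", 0.55))  # escalated to a human
```

The right threshold depends on the stakes: the higher the cost of a wrong automated decision, the more cases should be routed to human review.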