
Online Privacy: AI & Privacy

A beginner's guide to why privacy matters and how to protect it in the digital world.

Introduction

For more information about Artificial Intelligence outside of the privacy concerns perspective, visit our AI Literacy Guide!

Privacy Concerns

Artificial Intelligence (AI) depends on its capacity to collect, process, and analyze vast amounts of data. This capacity enhances information gathering, decision-making, and personalization for users. However, it also raises significant concerns about data privacy and exploitation.

One critical issue is the covert collection of personal data by AI-powered systems. This data can be exploited, often without the explicit knowledge or consent of the user. For example, businesses may leverage this information for marketing insights, allowing them to tailor customer engagement strategies more effectively. These insights can also be monetized further by selling users' data to other companies, perpetuating a cycle of data commodification.

Beyond marketing uses, data exploitation can erode trust, contribute to invasive surveillance practices by both corporate and government entities, and pose risks to individual security and autonomy. Moreover, as AI systems become more sophisticated, the potential for misuse—whether intentional or unintentional—grows, highlighting the need for stringent data governance, transparency, and ethical guidelines.

Even information that has been ostensibly anonymized can be de-anonymized and used for tracking. Tracking across technologies undermines the expectations of anonymity people once held, both in public and at home. Large language models (LLMs), which power text-based generative AI tools, are trained on data from across the internet and beyond, including sensitive or personal data (such as healthcare information, social media posts, personal finance data, and biometric data used for facial recognition or fingerprint verification) that the creator may never have consented to sharing. This data may appear in outputs accidentally, or it may be extracted intentionally by malicious actors.

Drawing on these extensive data resources, machine learning can make highly accurate predictions about users from seemingly irrelevant data. Mental health issues can be inferred from a user's keystroke patterns, and other seemingly innocuous information can be used to pinpoint a user's political ideology, sexual identity, or health conditions. When these tools inform important decisions, such as whom to bring in for a job interview or whom to place at the top of a suspect list, the resulting profiles, which are inherently biased, more often negatively impact minority groups.

Generative AI Privacy Checklist

Action Items for Protecting Privacy from Generative AI Tools like ChatGPT:

  1. Compare and contrast privacy policies of Generative AI tools to make sure the one you are using is best for you. This article compares ChatGPT, Gemini, and Claude.
  2. Avoid sharing sensitive or personally identifiable information with an AI tool. If discussing sensitive topics, anonymize or generalize the data to prevent unintended exposure.
  3. Check your privacy settings. If the tool allows, disable or limit chat or activity logs.
  4. Look for transparency from the company. Prioritize using tools provided by companies that offer clear documentation on their data usage practices.
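Item 2 above, anonymizing text before sharing it with an AI tool, can be partially automated. The sketch below is a minimal, illustrative example (not a product of this guide's authors) that uses regular expressions to replace a few common identifiers with generic placeholders before a prompt is sent anywhere; a real redaction pipeline would need far broader coverage (names, addresses, account numbers, and so on).

```python
import re

# Illustrative patterns only -- these cover a handful of common
# identifier formats and are NOT an exhaustive PII detector.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each pattern match with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Email me at jane.doe@example.com or call 555-123-4567."
print(redact(prompt))
# -> Email me at [EMAIL] or call [PHONE].
```

Even with a filter like this in place, the safest habit remains the one in item 2: keep genuinely sensitive details out of AI prompts altogether, since automated redaction can always miss something.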

Learn More about AI on YouTube