For more information about Artificial Intelligence beyond privacy concerns, visit our AI Literacy Guide!
Artificial Intelligence (AI) depends on its capacity to collect, process, and analyze vast amounts of data. This practice enhances information gathering, decision-making, and personalization for users. However, it also raises significant concerns about data privacy and exploitation.
One critical issue is the covert collection of personal data by AI-powered systems. This data can be exploited, often without the explicit knowledge or consent of the user. For example, businesses may leverage this information for marketing insights, allowing them to tailor customer engagement strategies more effectively. These insights can also be monetized further by selling users' data to other companies, perpetuating a cycle of data commodification.
Beyond marketing uses, data exploitation can erode trust, contribute to invasive surveillance practices by both corporate and government entities, and pose risks to individual security and autonomy. Moreover, as AI systems become more sophisticated, the potential for misuse—whether intentional or unintentional—grows, highlighting the need for stringent data governance, transparency, and ethical guidelines.
Even information that has ostensibly been anonymized can be de-anonymized and used for tracking. Tracking across technologies undermines the expectations of anonymity people once had in public and at home. Large language models (LLMs), used to create text-based Generative AI tools, are trained on data from across the internet and beyond, including sensitive or personal data (such as healthcare information, content from social media pages, personal finance data, and biometric data used for facial recognition or fingerprint verification) that the creator may not have consented to sharing. This data may appear in outputs accidentally, or it may be discovered intentionally by malicious actors.
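To illustrate how de-anonymization can work, here is a minimal Python sketch of a classic linkage attack: an "anonymized" dataset is joined to a public one on shared quasi-identifiers (ZIP code, birth date, sex), recovering identities. All names and records below are hypothetical, invented purely for illustration.

```python
# Minimal sketch (hypothetical data): re-identifying "anonymized" records
# by matching quasi-identifiers against a public dataset.

# "Anonymized" medical records: names removed, quasi-identifiers kept.
anonymized_records = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "F", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1982-03-14", "sex": "M", "diagnosis": "diabetes"},
]

# Public records (e.g., a voter roll) with names and the same fields.
public_records = [
    {"name": "Jane Doe", "zip": "02138", "birth_date": "1945-07-31", "sex": "F"},
    {"name": "John Roe", "zip": "02139", "birth_date": "1982-03-14", "sex": "M"},
]

def reidentify(anon, public):
    """Link records that share all three quasi-identifiers."""
    key = lambda r: (r["zip"], r["birth_date"], r["sex"])
    names = {key(p): p["name"] for p in public}
    for record in anon:
        match = names.get(key(record))
        if match:  # a unique combination of fields recovers the identity
            print(f"{match} -> {record['diagnosis']}")

reidentify(anonymized_records, public_records)
```

Because combinations like ZIP code plus birth date plus sex are often unique to a single person, simply stripping names from a dataset is not enough to keep it anonymous.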
Drawing on these extensive data resources, machine learning models can make highly accurate predictions about users from seemingly irrelevant data. Mental health issues can be identified from a user's keystroke patterns, and other seemingly innocuous information can be used to pinpoint a user's political ideology, sexual identity, or health issues. When these tools are used for important decisions, such as whom to bring in for a job interview or whom to put at the top of a suspect list, the resulting inherently biased profiles more often negatively impact minority groups. A simple sketch of this kind of inference follows.
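The following Python sketch shows, in miniature, how a classifier can infer a sensitive label from innocuous-looking typing telemetry. The feature names and data are entirely hypothetical and far simpler than what real systems use; the point is only that the user never discloses the sensitive attribute, yet the model predicts it anyway.

```python
# Minimal sketch (synthetic, hypothetical data): inferring a sensitive
# attribute from seemingly innocuous typing features.
from sklearn.linear_model import LogisticRegression

# Each row: [avg_keystroke_interval_ms, backspace_rate, typing_burstiness]
X_train = [
    [120, 0.02, 0.1],
    [135, 0.03, 0.2],
    [240, 0.12, 0.8],
    [260, 0.15, 0.9],
]
# Sensitive label the model learns to infer (0 = not flagged, 1 = flagged).
y_train = [0, 0, 1, 1]

model = LogisticRegression().fit(X_train, y_train)

# A new user's typing telemetry, collected without their awareness,
# yields a prediction about an attribute they never shared.
new_user = [[250, 0.13, 0.85]]
print(model.predict(new_user))        # inferred sensitive label
print(model.predict_proba(new_user))  # the model's confidence
```

The privacy risk is that such inferences can be made silently and at scale, and, as noted above, the predictions inherit whatever biases are present in the training data.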
Action Items for Protecting Privacy from Generative AI Tools like ChatGPT: