Introduction to AI Literacy

Using Generative AI in Research

Generative AI tools can be helpful throughout the research process. They can assist with things like finding background sources, writing survey questions, or even creating code to analyze data. But while these tools offer exciting possibilities, they also come with some important risks.

One major concern is privacy, especially when research involves sensitive data like health records or personal information. Even if data is anonymized (meaning names and other identifiers are removed), it can sometimes still be traced back to individuals.
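
To see how re-identification can work in practice, here's a small sketch in Python. The records and names below are invented for illustration; the linking technique itself is the one Latanya Sweeney famously demonstrated by matching "anonymous" hospital data against public voter rolls.

    # A minimal sketch of re-identification via quasi-identifiers.
    # All records below are invented for illustration.

    # "Anonymized" study data: names removed, but ZIP code, birth year,
    # and gender (the quasi-identifiers) are still present.
    anonymized_records = [
        {"zip": "30602", "birth_year": 1998, "gender": "F", "diagnosis": "asthma"},
        {"zip": "30605", "birth_year": 1971, "gender": "M", "diagnosis": "diabetes"},
    ]

    # A public dataset (think: a voter list) that includes names
    # alongside the same quasi-identifiers.
    public_records = [
        {"name": "A. Smith", "zip": "30602", "birth_year": 1998, "gender": "F"},
        {"name": "B. Jones", "zip": "30605", "birth_year": 1971, "gender": "M"},
    ]

    QUASI_IDENTIFIERS = ("zip", "birth_year", "gender")

    def key(record):
        """Build a lookup key from the quasi-identifier fields."""
        return tuple(record[field] for field in QUASI_IDENTIFIERS)

    # Join the two datasets on the quasi-identifiers: if a combination is
    # unique, the "anonymous" record is linked back to a named person.
    names_by_key = {key(r): r["name"] for r in public_records}
    for record in anonymized_records:
        name = names_by_key.get(key(record))
        if name is not None:
            print(f"{name} re-identified: {record['diagnosis']}")

The point of the sketch is that no single field gives anyone away; it's the combination of ordinary-looking fields that does.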

Generative AI tools also have technical risks. Like any software, they can be vulnerable to cyberattacks, which could expose private research data. There have even been cases where AI tools accidentally shared users' personal information with other users. Because of these risks, researchers need to be transparent with participants about how their data is being protected, and these concerns should also be addressed in their IRB (Institutional Review Board) applications. Additional guidance on AI use in research can be found here.


AI and Academic Integrity

For students and instructors, the biggest concern with AI is academic honesty. Many students wonder, "Is using AI considered cheating?"

The answer depends on your instructor and the course. At the University of Georgia (UGA), the current rule is that the use of generative AI is not permitted "unless it is explicitly authorized by the course instructor" (UGA Center for Teaching and Learning, 2024).

So, it’s important to check your syllabus and ask your instructor if you’re unsure. Instructors should clearly explain their expectations at the beginning of the semester to avoid confusion.

For graduate students writing theses or dissertations, the Graduate School provides these additional guidelines: “The use of generative AI in theses and dissertations is considered unauthorized assistance unless specifically approved by the advisory committee. If approved, the extent of AI use must be disclosed in the document” (UGA Graduate School, 2024).


Copyright and Generative AI

Generative AI tools are trained using huge amounts of data, including books, artwork, and other creative content. Some of this material is protected by copyright, which means the original creators have legal rights over how their work is used. This raises a few important questions:

  • Is it okay for AI tools to use copyrighted content to learn?
  • Can AI-generated content accidentally copy someone else’s work?
  • Can something created by AI be copyrighted at all?

Right now, the answers to these questions aren't totally clear. There are ongoing court cases working out how copyright law applies to AI. For example, some cases are examining whether AI-generated content can be protected by copyright, or how much human involvement is needed for something to count as original work. Because these legal issues are still being resolved, the rules around copyright and AI are changing. This page will be updated as new guidelines become available.

If you’re interested in learning more from a legal expert, check out the video here for a deeper dive into the topic.


Generative AI and the Environment

Training and using generative AI models takes a lot of energy. For example, training GPT-3 produced about as much carbon emissions as 123 gas-powered cars driven for a whole year. And when AI is added to search engines like Google, each search can use four to five times more energy than a regular one.
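
If you're wondering where a comparison like "123 cars" comes from, here's a rough back-of-the-envelope sketch in Python. Both inputs are outside estimates rather than figures from this guide: researchers have put GPT-3's training emissions at roughly 552 metric tons of CO2-equivalent (Patterson et al., 2021), and a typical gas-powered car emits very roughly 4.5 metric tons per year.

    # Back-of-the-envelope arithmetic behind the "123 cars" comparison.
    # Both numbers below are rough outside estimates, not exact figures.
    GPT3_TRAINING_TONS_CO2E = 552   # assumption: Patterson et al. (2021) estimate
    CAR_TONS_CO2E_PER_YEAR = 4.5    # assumption: rough average for a gas-powered car

    car_years = GPT3_TRAINING_TONS_CO2E / CAR_TONS_CO2E_PER_YEAR
    print(f"Roughly equivalent to {car_years:.0f} cars driven for a year")
    # -> Roughly equivalent to 123 cars driven for a year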

AI also uses a surprising amount of fresh water. Data centers (the buildings that store and run AI hardware) need water to stay cool and maintain the right humidity. This water has to be clean to avoid damaging the equipment. Microsoft reported that its water use jumped 34% in just one year, reaching nearly 1.7 billion gallons. Researchers say this increase is mostly due to the company’s investment in generative AI.
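
A quick calculation, using only the two figures quoted above, shows what that 34% jump means in absolute terms:

    # Rough arithmetic from the figures above: if ~1.7 billion gallons
    # represents a 34% increase over the prior year, the increase alone
    # is hundreds of millions of gallons.
    current_gallons = 1.7e9
    growth = 0.34

    previous_gallons = current_gallons / (1 + growth)
    increase = current_gallons - previous_gallons
    print(f"Prior year: ~{previous_gallons / 1e9:.2f} billion gallons")
    print(f"Increase:   ~{increase / 1e6:.0f} million gallons")
    # -> Prior year: ~1.27 billion gallons; increase of ~431 million gallons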

The environmental effects of AI aren’t the same everywhere. Communities near data centers, especially in hot or dry places like Georgia, often feel the impact more. In hot climates, data centers need even more water to stay cool, and in dry areas, water is more expensive and harder to come by.

While AI has environmental downsides, there are steps companies and individuals can take to reduce the impact.

Companies can:

  • Move training to more energy-efficient locations
  • Train models at night when it’s cooler
  • Avoid building data centers in sensitive environments
  • Be transparent about their environmental practices

Individuals can:

  • Use AI only when necessary (sometimes a regular Google search is enough!)
  • Choose AI tools from companies that prioritize sustainability
  • Avoid repetitive or unnecessary use
  • Be thoughtful with prompts to reduce energy use
  • Advocate for transparency from AI companies

Tip: If your searches are defaulting to AI-generated summaries and you’d prefer a regular search result, you can change this in your settings or add “-ai” to your search.


Human Impacts of Generative AI

Generative AI tools can offer many benefits to people. They can help improve education, make technology more accessible for people with disabilities, and support better outcomes in research. But alongside these positives, there are also serious risks, especially for vulnerable communities. AI systems can reflect and even amplify bias found in the data they're trained on. This can lead to inequitable treatment in areas like:

  • Job applications
  • Loan approvals
  • Police surveillance

In hiring, this can happen when an AI is trained on resumes from current employees: if those employees mostly come from similar backgrounds, the AI may favor applicants who look like them and screen out others. In Chicago, AI tools used for financial lending were trained on old data from the era of redlining, a practice in which banks denied loans to people in mostly Black neighborhoods. Even if race isn't directly included in the data, AI can still pick up on patterns that reflect racial bias, which can lead to automatic loan denials for people from marginalized communities.
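
To make the proxy problem concrete, here's a small sketch with invented data. Race never appears as a feature, but because ZIP code tracks the historical pattern of denials, even the simplest model trained on those past decisions reproduces it:

    # A minimal sketch of proxy bias, using invented records. Race is
    # never a feature, but ZIP code correlates with the historical
    # denials, so a model that learns from past decisions inherits them.
    from collections import defaultdict

    # Hypothetical historical lending decisions shaped by redlining:
    # applications from ZIP 60623 were routinely denied.
    history = [
        {"zip": "60611", "income": 60_000, "approved": True},
        {"zip": "60611", "income": 45_000, "approved": True},
        {"zip": "60623", "income": 60_000, "approved": False},
        {"zip": "60623", "income": 45_000, "approved": False},
    ]

    # "Train" the simplest possible model: the approval rate per ZIP code.
    totals, approvals = defaultdict(int), defaultdict(int)
    for record in history:
        totals[record["zip"]] += 1
        approvals[record["zip"]] += record["approved"]

    def predict(zip_code):
        """Approve if most historical applications from this ZIP were approved."""
        return approvals[zip_code] / totals[zip_code] >= 0.5

    # Two applicants with identical incomes get different outcomes,
    # purely because of the ZIP-code pattern the model inherited.
    print(predict("60611"))  # True  (approved)
    print(predict("60623"))  # False (denied)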

AI used in predictive policing has also been criticized for unfairly targeting Black communities. While some cities say it helps improve safety, research shows it can actually increase racial bias, violate privacy, and damage public trust in law enforcement.

Additionally, AI systems require a lot of human labor behind the scenes to train and moderate them. Many of these jobs are outsourced to workers in low-income regions, often in the Global South. These workers are sometimes exposed to disturbing content and work under poor conditions.