Northwestern Health Sciences University

Generative AI at NWHSU

Gain a general understanding of generative AI, understand the risks, and learn how to use it wisely.

Key Concepts

When using AI, you should consider the broader context of how AI works. Questions you might ask are "Is my information secure?" and "Can I assume this is accurate?"

Here are some ways to think about generative AI:

Data Privacy & Consent Considerations

Generative AI is built on large training data sets. Data can come from scraping content from the internet or from data sets obtained directly from companies. Most people do not realize how much data is being collected about them. Ethical companies will protect your data, but you should assume they may be in the minority. Issues include:

  • Reidentification Risk: Anonymized data can be used by AI to recreate identifiable information, leading to a privacy breach
  • Opaque Data Use: Users often don’t know when their data is being collected or how it will be used
  • Model Exploitation: If an AI model is hacked, the training data is at risk of exposure
  • Unclear or Nonexistent Consent: AI models often use publicly available data (e.g., social media content) without seeking consent from the original creators, leaving people unaware their data is being used.
  • Lack of Informed Consent: Even when consent is obtained, it’s often through vague terms of service that don’t clearly explain how personal data will be used for AI training, making it hard for users to make informed decisions.
  • Difficulty Withdrawing Consent: Once data is used to train a generative AI model, it is difficult for individuals to revoke consent or request their data be removed from the model’s knowledge base.

Example:

You are using an AI tool that your healthcare provider promotes. You input your symptoms and, paired with your personal health data, receive a recommendation for treatment. If that system were hacked, it would expose an enormous amount of information about you. In addition to exposing PII, it could result in targeted healthcare scams, affect employment status, etc.

If you want to use an AI tool that requires very personal information, read the fine print to understand what you are agreeing to and how to protect yourself.

Intellectual Property Considerations

The ethics around intellectual property affect both the data going in and the data coming out, and the legal issues are not settled. Issues include:

  • Copyright Infringement: Training data uses copyrighted content. Is that Fair Use?
  • Ownership of AI Content: AI output is not human, so who owns the copyright (and can it even be protected by copyright law)?
  • Lack of Compensation: Creators do not receive compensation when their work is used to train AI. 

Example:

Your article is going to be published in a journal, and you sign over copyright to the publisher. You do this to increase awareness of your work and for the effect it may have on your reputation. The publisher later sells its content to a company creating an AI tool that turns it into customizable online tutorials, with no credit or compensation to you or the other authors.

While it may be legal, is it ethical? Can you put yourself in the shoes of the content creators?

Bias & Fairness Considerations

Attention must be paid to including appropriate training data and taking steps to remove bias from the evaluation process. AI is trained to look for patterns as a way to learn skills, so careful data set curation and algorithm transparency are required to ensure fairness and inclusivity. Issues include:

  • Training Data Bias: Datasets may reflect societal biases and, therefore, generate biased outputs. They can even unintentionally amplify bias.
  • Unequal Representation: If certain demographics are underrepresented in training data, the outputs may fail to reflect diverse perspectives, leading to unfair or skewed results.

Example:

Amazon created a tool to screen resumes, built from the resumes of top-performing employees. The tool absorbed the human biases of the manual screening process that had hired those employees in the first place. AI recognizes patterns in the training data and, by design, runs with them - good or bad.

Keep this in mind when evaluating content from an AI tool. Question if there may be bias present in the results.

Environmental Impact Considerations

The environmental impact of generative AI stems primarily from the large computational resources required to train and run these models. This is an ethical challenge for AI users. Issues include:

  • High Energy Consumption: Training advanced models requires massive amounts of energy, often non-renewable.
  • Carbon Footprint: This energy usage leads to a substantial carbon footprint, contributing to climate change.
  • Resource-Intensive Hardware: Powerful processing hardware depends on rare minerals and energy-intensive manufacturing processes.
  • Sustainability Challenges: Balancing growing demand for AI with finite energy and material resources will require ongoing effort to remain sustainable.

It’s an exciting world of opportunity, but is the environmental impact worth the reward?

Disclosure Statement

ChatGPT was used to generate ideas for this topic, using prompts such as “Briefly describe ethical issues related to generative AI.”