
GENERATIVE AI: Ethics and Generative AI

Where does the data used to train Generative AI come from?

There is often a lack of transparency about where the data used to train Generative AI models comes from. This raises a number of ethical issues around copyright, intellectual property and data protection. There are also concerns that AI reproduces the biases, assumptions and stereotypes of the humans who created the training content. Generative AI is known to reflect disparities in race, gender and socioeconomic status, and there is evidence that its outputs skew towards white, Western, English-language content.

Quick, Draw!

An example of this is Quick, Draw!, an online game that challenges players to draw a picture and then uses artificial intelligence to guess what the drawing represents:

  • Although the game exists in different languages, it does not seem to differentiate images culturally.
  • When asked to depict a hospital, it will guess correctly if you draw a cross.
  • In much of the Muslim world, hospitals are marked with a crescent rather than a cross. This gives an indication of how AI perpetuates a bias towards the dominant knowledge in the data sets it has been trained on.

You can read more about this in Maha Bali’s great blog post Where are the crescents in AI? | LSE Higher Education.

[Image: screenshot from Quick, Draw! showing lots of small doodles of hospitals, the majority of which have a cross on them]

Is Generative AI ethical?

There is a lot of controversy around the ethical development and use of Generative AI. AI systems are known to use information and knowledge without giving appropriate attribution, and some AI tools store users' information and data to enable further training. Training and running AI systems requires a great deal of computing power and electricity, which has an environmental impact. There is also a question of whether access, or lack of access, to AI creates inequality: do premium versions of AI tools give some users an advantage over others?

Understanding these issues and their impact will help you to become an Engaged Global Citizen, which is one of the University of Brighton's seven Graduate Attributes.

Always:

  • Read the terms of service and privacy policy before using an AI tool.
  • Make sure you are happy for the data you input to be reused to train the AI.
  • Ensure all inputs are your own, otherwise you risk infringing other people's copyright.
  • Where use is permitted for an assessment, declare the contribution of AI tools and add an attribution.

The University of Brighton has an institutional licence for the Microsoft Copilot chatbot, which:

  • Operates under commercial data protection regulations and therefore does not retain personal data.
  • Does not use your data to train future AI language models.
  • Does not save your data, so remember to capture any prompts you want to keep.