There is often a lack of transparency about where the data used to train Generative AI models comes from. This raises a number of ethical issues around copyright, intellectual property and data protection. There are also concerns that AI reproduces the biases, assumptions and stereotypes of the humans who created the content it was trained on. Generative AI is known to reflect disparities in race, gender and socioeconomic status, and there is evidence that its outputs skew towards white, Western, English-language content.
An example of this is Quick, Draw!, an online game that challenges players to draw a picture and then uses artificial intelligence to guess what the drawing represents.
You can read more about this in Maha Bali's great blog post, Where are the crescents in AI? (LSE Higher Education).
There is a lot of controversy around the ethical development and use of Generative AI. AI systems are known to use information and knowledge without giving appropriate attribution, and some AI tools store users' information and data to enable further training. Training and running AI systems require a great deal of computing power and electricity, which has an impact on the environment. There is also a question of whether access, or lack of access, to AI creates inequality. Does access to premium versions of AI tools give some users an advantage over others?
Understanding these issues and their impact will help you to become an Engaged Global Citizen, one of the University of Brighton's seven Graduate Attributes.
The University of Brighton has an institutional licence for the Microsoft Copilot chatbot, which: