In a world where generative AI tools are reshaping the way we approach writing tasks, it is crucial to understand the potential pitfalls and implications of relying too heavily on these technologies. Unintended damaging consequences can be avoided by being mindful of our intentions, prompts, and use of outputs.
Let’s explore how we can navigate the complexities of generative AI tools with awareness and responsibility.
Generative AI tools are already advanced enough to take over many writing tasks, especially short-form web articles and social media posts. A study from Bloomberry analysed job posts for freelance positions on Upwork since the launch of ChatGPT: jobs in fields like writing, customer service, and translation were reduced by almost 35%.
What can go wrong with this type of job shifting? What can happen to our outputs and new content when we rely on an AI model to do the job?
How AI systems are built
AI systems are built from several components.
First, there are the researchers and developers: technical people with deep knowledge of how the algorithms work, who build new ways to extract patterns from data.
Then the data and algorithms form the core of the AI models. The data for generative AI tools comes from years of collected information, mainly from the internet, from sources like Wikipedia and other websites. The algorithms are trained on this data and learn the patterns, including the preconceptions, that it contains.
And finally, there are the applications and users. These are the systems available for us to use; for example, ChatGPT is an application of the GPT-3 and GPT-4 models. These apps can be retrained on the data generated as they are used, so the way we use them informs their future responses and outputs.
Because of these components, AI systems carry implicit bias. Bias is a generalisation about a group based on a smaller sample, and generalising from smaller samples is exactly how GenAI operates. Therefore, we must place bias at the forefront of the ethical limitations of AI.
Bias in AI systems
Pre-existing:
This is the best-known type of bias, because it is the type that normally makes the news and gains the widest notice.
Last week I googled the term "doctor nurse". Despite the efforts Google Images has made to show diverse and unbiased results, the top images still reflected the stereotype: doctor = male, nurse = young female.
Unfortunately, these limitations come from our history. Misrepresentations of certain roles are common in our culture and media, and historically, gender stereotypes have informed who gains access and visibility in fields like medicine. When GenAI is trained on existing data, stereotypes and historical inequality form much of that data.
Technical:
Limitations in software and hardware can produce technical bias. User interfaces can also prioritise certain orderings, sizes, and display proportions in ways that give the user misleading information.
In this example, Google's smart rankings and UX provide additional information when I search "what is the best country in the world".
Results are sorted by criteria that are unknown to me, but the United States is displayed in first place.
The first article is from a U.S. news website.
Emergent:
This is the trickiest type of bias. It happens when users interact with AI systems, creating use cases the model may not be prepared for.
Think of a language model that performs very well on writing tasks in English but, when the user types in another language, like Spanish, makes mistakes or produces wrong outputs.
Some emergent biases are more obvious than others. In the documentary Coded Bias, well-known facial recognition systems are challenged. A model works great until it fails to detect faces with darker skin tones, to the point that a face is only recognised once it is covered by a plain white mask.
Bias vs Harm
The real problem is when bias produces harm.
Bias is inevitable, and it is present in our current technical systems. The data we use in algorithms carries implicit generalisations. The people who build and deploy systems need to deal with the risks these limitations carry, and determine how much of that risk they are willing to assume.
The harm caused by bias, on the other hand, is a preventable consequence. It is inherently social, involves human perceptions, and depends on the context in which the systems are being used.
Some of the examples I showed before carry only limited risks, but others, especially emergent biases, can produce real harm.
We must take responsibility for how we design and use AI systems to reduce harm.
Using AI responsibly
Even though AI systems have some pitfalls, we can use them responsibly.
Let’s focus on the role of the user interacting with applications. Think of writing assistants such as Microsoft Copilot, Google Gemini, or OpenAI's ChatGPT.
When we are using an AI writing assistant, we can think of three phases:
The intention I have as a user: what is my use case, what do I want to get and why?
The prompt: the actual instructions given to the assistant, the natural language interpretation of my request.
And the use of the output: what is the purpose of the result, where am I going to share it, and who is going to see it?
The first step we can take is to be conscious that our intentions, our interpretation, and our use of the output can produce unexpected harm.
Ask yourself some questions and double-check from the beginning: is this the right task to perform with generative AI tools? Am I the right person to generate this content? Is the tool understanding my requests correctly? Do I know enough about the reach this piece of content is going to have?
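To make the three phases and these questions concrete, here is a minimal sketch in Python of a pre-flight checklist; the WritingRequest structure, the function name, and the example values are hypothetical, one illustrative way to capture the idea rather than part of any tool.

```python
# A hypothetical pre-flight checklist for the three phases: intention,
# prompt, and use of the output. Names and wording are illustrative only.
from dataclasses import dataclass

@dataclass
class WritingRequest:
    intention: str   # what do I want to get, and why?
    prompt: str      # the actual instructions given to the assistant
    output_use: str  # where the result will be shared, and who will see it

CHECKLIST = [
    "Is this the right task to perform with a generative AI tool?",
    "Am I the right person to generate this content?",
    "Is the tool understanding my request correctly?",
    "Do I know enough about the reach this content will have?",
]

def pre_flight_check(request: WritingRequest) -> None:
    """Print the request next to the self-check questions before prompting."""
    print(f"Intention:  {request.intention}")
    print(f"Prompt:     {request.prompt}")
    print(f"Output use: {request.output_use}")
    for question in CHECKLIST:
        print(f"[ ] {question}")

pre_flight_check(WritingRequest(
    intention="Announce a product update in an upbeat, inclusive tone",
    prompt="Write a 100-word LinkedIn post announcing our new feature",
    output_use="Public LinkedIn post seen by customers and job seekers",
))
```

Even on paper rather than in code, running through a checklist like this before every prompt keeps the three phases visible instead of implicit.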
Tips to keep in mind
Involve people whenever you can: people from your team, or people who think differently from you. Test your outputs and get feedback to challenge your results.
Take ownership of the task, get yourself informed, dig deeper and use diverse sources and research to better understand certain topics and outputs.
Follow regulations and developments around AI tools. The EU Artificial Intelligence Act will soon be in force, and changes to how systems are designed and used can impact the way we provide inputs and use outputs.
Use the tools as your sparring partner to challenge your views and limitations.
Let's elaborate on that last point with an example!
In 2015, Bud Light intended to create different slogans aligned with a carefree and relaxed lifestyle and print them on its bottles. One slogan caught the public's attention.
"The perfect beer for removing "no" from your vocabulary for the night"
If the beer company were about to launch a similar campaign today, it could use the power of Generative AI to challenge its assumptions about its audience and spot the potential for harm. Let's try it using ChatGPT.
In this case, my intention is to be challenged and to receive opinions on the slogan. For that, I use the prompt to create a "team of critics" with diverse backgrounds in terms of age, gender, and race. I included a young woman in the team of critics to see how this slogan would be perceived by her.
In the output, ChatGPT gives me a couple of sentences from each critic. Not surprisingly, the young woman mentions the insensitivity towards consent and the seriousness of saying 'no'. This was exactly why Bud Light received backlash: the audience criticised the trivialisation of such an important topic.
I oversimplified the "team of critics" in this example to make my point, but continued use of this practice can grow into a guide of good practices for your workplace whenever content is created using Generative AI tools.
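If you prefer to script this rather than type it into the chat window, a minimal sketch using the OpenAI Python client could look like the following; the critic profiles and the model name are assumptions for illustration, and the client expects an OPENAI_API_KEY in your environment.

```python
# A sketch of the "team of critics" prompt via the OpenAI Python client.
# The critic profiles and model name below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

critics = [
    "a 22-year-old woman working in communications",
    "a 55-year-old male retired teacher",
    "a 38-year-old mother of two from a rural area",
]

slogan = 'The perfect beer for removing "no" from your vocabulary for the night'

prompt = (
    "You are a team of critics reviewing a marketing slogan before launch.\n"
    "The critics are:\n"
    + "".join(f"- {c}\n" for c in critics)
    + f"Slogan: {slogan}\n"
    "For each critic, write one or two sentences on how the slogan lands "
    "for them, flagging any harm or insensitivity they would point out."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model works here
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

The value is not in the code itself but in making the critics' profiles explicit and repeatable, so each campaign is reviewed against the same diverse panel.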
Putting it all together!
Use this set of steps when you want to create content with GenAI while being mindful of Diversity, Equity and Inclusion.
Think about the content you are creating. Not all content is suitable for GenAI tools, so consider your use case carefully. Internal communication is different from a blog post or a marketing piece. Double-check whether the writing or visualisation task can be fully delegated to AI.
Once you have a concrete use case, focus on the AI tool. Read its terms and conditions carefully so you can take ownership of the inputs you provide and the outputs you get.
Depending on the use case, research and dig deeper into the content you are creating. If you are writing a blog post or a LinkedIn post, search for similar content and see how it can be harmful from a DE&I perspective. For example, job posts on LinkedIn can be insensitive towards certain groups, such as people without a university degree or of a certain age.
Whenever possible, create a 'team of critics' that challenges your results. To form this team, use the Wheel of Diversity, paying extra attention to the External Dimensions.
Diversity Wheel (Loden et al. 1991; Gardenswartz et al. 2003)
The wheel goes beyond diversity in terms of age, race or gender. Consider other dimensions like parental status (not everyone has a father and a mother) or personal habits (not everyone has the privilege to have hobbies or spare time).
After iterations, feedback and examples, build your guidelines. Document what has worked for you and make it available to your teammates.
Dive deeper into the topic
If you want to explore more about DE&I in the workplace in the context of AI, here are some resources to check:
"Xtereotype.com" : A software that can analyse how your content will land in different populations, giving a full report.
Wrapping Things Up
This is a short introduction to a very complex topic. DE&I is a human problem with a historical load; it is linked to sociology, politics, and geography. AI systems are complex, built from different components, and improving very fast. Neither users nor regulations are evolving at the same speed. We need to take control of, and responsibility for, the way we use these systems, focusing on overcoming biases and impacting others positively.
Keep your intention in mind, take care with how you write your prompts, and evaluate the results you get.
Take ownership, research the harm you can cause to certain groups, and be aware of the risks of unintended insensitivities.
Collect your best practices and spread these guidelines among your peers. Start cultural changes in your workplace.
Hi, I'm Nazly. Connect with me on LinkedIn to share ideas and keep the conversation going!