Should We Fear Artificial Intelligence?

Lameck Mbangula Amugongo and Tutaleni I Asino

You’ve probably heard of ChatGPT or DALL-E 2 even if you’re not clear on what they are.

ChatGPT and DALL-E 2 are examples of generative AI systems – computer systems that take whatever prompt a person types in and respond as a human might, with text or images.

The emergence of generative AI systems such as ChatGPT and DALL-E 2 has brought artificial intelligence (AI) into the mainstream.

Since OpenAI launched these systems, some AI experts have sounded alarms about potential “existential” or “extinction” risks.

This article explains what AI is, how it works, and what African countries such as Namibia should do to avert potential harm.

AI is the use of computer systems to mimic human-like intelligence – systems with the ability to perform tasks that traditionally require human intelligence.

To achieve this, AI practitioners use an AI subset called machine learning (ML), whereby computer systems automatically learn and improve from data without being explicitly programmed.

Think of it this way: traditionally, a programmer gives a computer explicit instructions for every task it must perform.

With ML, the computer learns without explicit instructions. Instead, it uses algorithms, or models, that infer patterns from data.


Imagine you want to teach a computer how to recognise cats.

Instead of programming every single detail about cats into the computer, you can show it many pictures of cats and let it figure out the patterns on its own.

The computer will learn to identify common features like pointy ears, whiskers and fur, and will start recognising these features in new pictures.

So, when a computer gets a new picture, it can say “yes, this is a cat!” because it learned from other pictures.
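The cat example above can be sketched in a few lines of code. The sketch below is purely illustrative (the feature scores and “pictures” are made up): each picture is reduced to three numbers for the features the text mentions – pointy ears, whiskers and fur – and the computer “learns” by averaging the examples it is shown.

```python
# Toy illustration of the cat-recognition idea (hypothetical, not real AI code):
# each "picture" is reduced to three features scored 0 to 1 —
# pointy ears, whiskers, fur.

def centroid(examples):
    """Average feature vector of a list of examples — the learned 'pattern'."""
    n = len(examples)
    return [sum(e[i] for e in examples) / n for i in range(len(examples[0]))]

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# "Show it many pictures": labelled training examples.
cats     = [[0.9, 0.8, 0.9], [0.8, 0.9, 0.7], [1.0, 0.7, 0.8]]
not_cats = [[0.1, 0.0, 0.2], [0.2, 0.1, 0.6], [0.0, 0.2, 0.1]]

cat_centre, other_centre = centroid(cats), centroid(not_cats)

def is_cat(picture):
    """A new picture is a cat if it is closer to the learned 'cat' pattern."""
    return distance(picture, cat_centre) < distance(picture, other_centre)

print(is_cat([0.85, 0.75, 0.9]))  # a picture close to the cat pattern
```

Real systems learn far richer features from millions of images, but the principle is the same: patterns are inferred from examples, not programmed by hand.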

The idea that more data equals better AI systems drove the spread of deep learning techniques, which can automatically learn from large volumes of unstructured data such as text, video, or images.

LANGUAGE

AI’s benefits have been demonstrated in different disciplines, including radiology (reading medical images), manufacturing, and understanding natural language.

Language is an intriguing ability of the human mind.

It serves as a universal tool for expressing our understanding of the world and communicating our intentions, viewpoints and sometimes emotions.

To contextualise how tools such as ChatGPT work, consider the fast-changing field of natural language processing (NLP).

NLP is a subfield of AI that aims to create computer systems that understand and generate natural language.

Common applications that use NLP technologies include Siri, Google and Alexa. These systems are created by feeding an algorithm a dataset containing text.

The model converts the data into meaningful information by breaking phrases into words and building context on how those words relate to one another.

Put simply, NLP technology uses the context and relationships between words to predict the next word.
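To make the “predict the next word” idea concrete, here is a deliberately tiny sketch (the corpus and function names are our own invention): it counts which word follows which, then predicts the most frequent successor. Real language models use neural networks trained on billions of words, not simple counts, but the core task is the same.

```python
# Minimal next-word prediction from bigram counts (illustrative only).
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# "Creating context on how words are related": count which word follows which.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Predict the most frequent next word seen after `word` in the corpus."""
    return following[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this tiny corpus
```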

The AI community has recently focused on developing models that improve prediction quality.

As a result, large language models (LLMs) have been at the centre of the race to achieve artificial general intelligence.

LLMs require a lot of data to train (create), and are trained on large collections of text scraped from websites across the internet.

Several issues are associated with using such data to train models like ChatGPT.

A 2021 paper, ‘On the Dangers of Stochastic Parrots’, highlighted that internet data is not representative and contains misogynistic and racial slurs.

Moreover, about 60% of the internet is in English, yet only about 15% of the world’s nearly eight billion people speak English.

With such data, you can guess how few African languages and cultures are represented on the internet.

NAMIBIAN EXAMPLES

Let’s return to ChatGPT and DALL-E 2. On 30 November 2022, Silicon Valley company OpenAI launched ChatGPT, a system that mimics human-like conversation by responding to anything a user types into its dialogue box.

A user can ask a question or make a request as if they were speaking to another human being.

For example, we entered “tell me a story about why Hendrik Witbooi is a hero in Namibia” and ChatGPT wrote it.

A few months earlier, the same company had released a system called DALL-E 2, which creates an image from a text description.

While you may not be familiar with DALL-E 2, you’ve probably seen images created by such systems, like the viral one of Pope Francis wearing a puffy white coat (the picture was not real; it was computer generated).

Tools such as ChatGPT are good at fooling people by generating text that seems comprehensive, especially in common languages such as English.

However, for low-resource languages (languages with little or no digital corpus), they produce incomprehensible text.

For example, we asked ChatGPT to “write a joke in Oshiwambo”.

ChatGPT’s response: “Omalume okwa hehele epandi, oya emu shithindi. Omapando oya pamu epaleni, oko ka ove na pambu oshihaka.”

Translation: A pastor was walking in the field and stepped on a thorn. He shouted in pain, saying it was a spiritual attack.

In addition, tools like ChatGPT make up things that are not true, so users need to be cautious about trusting them.

CONCERNS

New technologies (or ideas) always come with a mixture of fear and excitement.

You may have heard people say “AI is going to kill us all!” or “students won’t need to know how to read anymore or think for themselves”, and “machines will take over all jobs”.

While experts, including some from companies such as Meta, Google and Microsoft, have expressed concerns about AI risk, these warnings are a diversion from real, present ethical issues.

AI-inflicted harms happening now include:

– Data worker exploitation: workers in Kenya are paid US$2 an hour to remove biased and misogynistic content from training data;
– Climate impact: these models consume a lot of power during training;
– Amplifying dominant perspectives and embedding biases that can harm marginalised communities;
– Widening the digital divide.

While it’s important to think critically about technology, we should not forget that AI systems are artefacts that depend on the labour of many.

AI is not going to enslave us all, nor will it end humanity. This is just fearmongering.

TIME TO ACT

Of course, powerful AI tools will transform and automate many jobs.

Yes, it’ll force us to question many things, just as it did with the printing press, heart transplants and the automobile.

But Terminator fantasies of AI taking over the world and wiping out the human race are far-fetched.

Does it mean we should sit back and do nothing? No. The time to act is now.

We need to focus on what AI means in our context, instead of waiting for others to think through issues for us.

We can start by creating regulations to shape the actions of big tech to protect human rights in our regions.

African countries, including Namibia, need to move fast to develop policies and regulations to guide the development of AI systems – big tech will never self-regulate.

This has to be done in a balanced and comprehensive way that doesn’t destroy or stifle innovation.

Regulations should focus on ensuring transparency, promoting equity and solidarity, and ensuring there are mechanisms to hold organisations developing AI accountable for AI failures.

We can neither rely on nor trust Silicon Valley to create AI technologies for us.

What we know about innovations is that they emerge from needs in the community.

We have to explore how new technologies, such as AI, can solve our country’s needs.

Africa must empower its youth to capitalise on AI and create innovative solutions that will help us address pressing challenges on our continent and create much-needed digital jobs to combat high unemployment.

  • Lameck Mbangula Amugongo is currently a postdoctoral researcher at the Institute of Ethics in Artificial Intelligence (Technical University of Munich).
  • Tutaleni I Asino is an associate professor in learning, design and technology and director of the emerging technology and creativity research lab at Oklahoma State University’s College of Education and Human Sciences.
