Kenza Ait Si Abbou Lyadini: How to use AI properly

Martina Bertram
November 14, 2020

Engineer, author and robotics expert Kenza Ait Si Abbou Lyadini spoke to DW about ethics and algorithms in AI. She says big challenges lie ahead in informing the public about the potential and risks of AI.

Kenza Ait Si Abbou Lyadini. Image: Hendrik Gerken

DW: Computer software can produce reports on all kinds of news today, from stock market information to weather updates to the latest sport results. In some newsrooms, the use of AI has already become commonplace. Can AI tools help the media industry find a way out of its current crisis?

Kenza Ait Si Abbou Lyadini: The media crisis is a complex topic that needs to be analyzed and addressed from a number of different angles. The challenges it presents cannot all be solved with artificial intelligence.

AI solutions certainly have the potential to make certain processes more efficient, increase automation, reduce costs and even solve challenges that traditional systems cannot. In the media in particular, AI can power smart search engines for efficient fact-checking and quick queries. AI can help detect fake news and media bias. It can generate copy in a particular style - though this still needs to be checked by a human editor. And it can also help with the design and layout of articles.

Video: Risks and challenges of artificial intelligence (00:55)

AI has the potential to change societies around the world in ways as groundbreaking as those last seen with the advent of the internet. What kinds of changes can we already observe in some societies?

Artificial intelligence is everywhere. It is in our cell phones, in our watches, in our cars, in our washing machines. It is hard to think of a digital product today that doesn't have any AI built into it.

And since digital products spread within seconds, it doesn't really matter where they were developed; the whole world will still get to use them. Still, different cultures interact differently with the digital world, and different societies have different needs. The impact of such products is much more significant than we think. We believe that machines are neutral, but then we find out that they discriminate in the same ways we humans do. As a society we are still learning to deal with this technology, and it will take time until we can use it responsibly.

Video: Do you really want AI in your life? (02:14)

It is one of the tasks of journalists to inform the public broadly and critically about all kinds of phenomena, and this includes reporting on AI. Considering the speed with which this technology keeps changing, that seems like a Herculean task. Do you know of any best-practice examples here? What do you recommend journalists bear in mind when they report on AI?

It is indeed a big challenge to inform the general public about artificial intelligence, with all its potential and its risks, without increasing their fear. For most people, AI is a myth; it is a new topic, and everything about it sounds complex - probably because of all the technical terms that only computer and data scientists understand.

But that is exactly why it is necessary to explain it in a way that is simple and easy for everybody to understand. That was the main reason I wrote my book. Also, the designs and visuals used in articles about AI are often quite dark, featuring images of menacing robots. However, most commercial AI solutions are purely software-based.

These kinds of images increase the fear, in my opinion. I would like to see more informative and neutral articles about AI, and fewer articles about scandals and machines making wrong decisions. And even when machines do make wrong decisions, it is necessary to explain the mechanisms behind them and how such mistakes can be avoided in the future.

Read more: AI is when your smartphone knows that you have COVID-19

Many people have a distorted view of artificial intelligence. Image: picture-alliance/K. Ohlenschläger

AI is now able to calculate the course of a cellular disease, for example. It can even predict certain human behavior better than humans can themselves. Building on this, systems can be developed that endanger democratic values such as freedom in opinion-forming processes, which can result in social unrest and other problems. What can we do when AI is instrumentalized for such abuses of power, for example by populists and autocrats?

AI is just a tool. It can be used for good or for bad. It is our responsibility to use it properly. The laws and regulations we have in the analog world are also applicable in the digital one. Obviously, the potential and the reach of the digital world are far greater, and this is why the risks are so severe when AI falls into the wrong hands.

For this very reason, it is imperative that we inform the general public about this technology, and about the mechanisms behind fake news, digital bubbles and echo chambers. People should learn to distinguish reliable sources from suspicious ones. They should be encouraged to diversify their media sources and use different platforms. They ought to actively look for different perspectives and engage in more self-reflection - especially if they see that everybody around them holds the same opinion.

It is part of the business model of global tech companies to play on the emotions of their users. This increases their sense of brand loyalty. But it can also reinforce unconscious prejudices, create hostility, promote polarization and, in the worst-case scenario, cost human lives. Do these platforms have to adapt their business practices to the general ethics of people and society? 

Absolutely, the ethics we have in the analog world should be part of the digital one as well. They should be included in the design phase of digital products and platforms.

We don't need a new set of digital ethics. We should just make sure that the same ethical judgments we have applied in the past are being translated into the digital world. 

Should global tech giants be treated like news organizations when they share and distribute news items? Or is big tech right in saying that they do not actually produce news content themselves, and should therefore not be held responsible for content? 

That is a complex question. IT companies, like all companies, have a certain social responsibility that they must live up to. And it doesn't matter whether the products are analog or digital. The responsibility is the same.

Kenza Ait Si Abbou Lyadini was born in Africa, studied in Spain, Germany, and China, and now works as Senior Manager of Robotics and Artificial Intelligence at Deutsche Telekom. Her recent book is called "No P@nic - it's only technology" and is published in German.

This interview was conducted by Martina Bertram. 

Read more: GMF digital session: Media and Information Literacy in the age of coronavirus
