
AI ethics: Careful with that AI Eugene!

IBM's Najla Said looks at how AI ethics challenges the benefits of AI, in this talk from the Codemotion Deep Learning Conference

June 22, 2020

Najla Said is the IBM Cloud Data Science Team Manager in Italy. In her talk, she looks at how the benefits of AI come at a cost: the use of intelligent machines not only introduces new kinds of issues but also widens our attack surface, and it is fuelling huge philosophical debates. This article dives into some of the issues raised around AI ethics, but we also encourage you to watch the presentation below:

[Video: Najla Said, “AI ethics: Careful with that AI Eugene!”, Codemotion Deep Learning Conference]

According to Najla, the hype around artificial intelligence started in the 1960s and halted in the 1970s with the so-called AI winter, “due to the lack of the computational power and data that are needed to build an artificial intelligence system. And then in the 1990s, we started to work again on artificial intelligence, until 2016, when we faced something we didn’t expect.”

Tay bot

In 2016, machine learning enthusiasts were shaken by Tay, an artificial intelligence chatbot released by Microsoft on Twitter on March 23, 2016. Controversy followed when the bot began to post inflammatory and offensive tweets, forcing Microsoft to shut the service down only 16 hours after its launch. Microsoft attributed the problem to trolls who “attacked” the service, since the bot generated its replies from its interactions with people on Twitter.

According to Najla, “We need discussions about regulation and about how to manage these kinds of issues, and this awareness has started to grow at the decision-making levels as well. AI is a complex technology, and like every complex technology it comes with a lot of power but also a lot of potential issues. So we should be aware of these issues and understand how to be sure that we use the technology well.”

Simplification

Najla notes that not everyone can be a data scientist. Critical skills are required in mathematics, programming and statistics.

As she puts it, “So when you start to work on a project with artificial intelligence, you have to be sure that you have the right competencies at the table and that you are working with a well-skilled team.”

Interpretation problems: data can be misinterpreted. Data science and artificial intelligence projects usually require both technological knowledge and domain knowledge. “So we need experts who work with our data scientists, guide them through the work, and are there to check whether the results are meaningful in that domain. Domain experts should always participate in AI projects.”

Cognitive bias is a pain point of AI ethics

According to Najla, “When we take a decision, we are affected by more than 180 cognitive biases.”

Artificial intelligence systems are built by humans and trained on human data, so they can inherit human bias; indeed, bias can be introduced at every step of the process. We should always check our solutions for possible biases: “We can check these in pre-processing on the data sets, we can check during training by creating bias-resistant algorithms, and we can check regularly, but we have to do it.”
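
To make the pre-processing check concrete, here is a minimal sketch of a disparate-impact test on a training set using plain pandas; the `gender` and `hired` columns and the 0.8 threshold (a common rule of thumb) are illustrative assumptions, not part of Najla's talk:

```python
import pandas as pd

# Hypothetical training data: a protected attribute and a binary outcome
df = pd.DataFrame({
    "gender": ["F", "M", "M", "F", "M", "F", "M", "M"],
    "hired":  [0,   1,   1,   0,   1,   1,   0,   1],
})

# Rate of positive outcomes per group
rates = df.groupby("gender")["hired"].mean()

# Disparate impact: ratio of the unprivileged group's positive rate
# to the privileged group's; values below ~0.8 are often flagged
disparate_impact = rates["F"] / rates["M"]
print(f"positive rates:\n{rates}")
print(f"disparate impact: {disparate_impact:.2f}")
if disparate_impact < 0.8:
    print("warning: the data set may be biased against the 'F' group")
```

For more systematic checks across many fairness metrics and mitigation algorithms, IBM's open-source AI Fairness 360 toolkit covers the same pre-processing, in-training, and post-processing stages Najla describes.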

Robustness

AI systems based on deep learning or machine learning introduce new surfaces for attacks. As Najla notes, “So in our case, whereas before you could exploit a vulnerability in an application, now you have other things to access. You can act on the training data sets, or you can act on the algorithm by feeding it adversarial examples in order to obtain an unwanted outcome. These are really important issues, and we have to harden our data sets and our code in order to be sure that we are not vulnerable to these kinds of attacks.”

Fortunately, help is at hand. The Adversarial Robustness Toolbox (ART) is a Python library for Machine Learning Security. ART provides tools that enable developers and researchers to evaluate, defend, certify and verify Machine Learning models and applications against the adversarial threats of Evasion, Poisoning, Extraction, and Inference.
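
For a flavour of how ART is used in practice, here is a minimal sketch of an evasion attack against a scikit-learn model; the dataset, model, and epsilon are arbitrary illustrative choices, and the API shown assumes ART 1.x:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

# Train an ordinary model
X, y = load_iris(return_X_y=True)
X = X.astype(np.float32)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap it so ART can compute the loss gradients the attack needs
classifier = SklearnClassifier(model=model,
                               clip_values=(X.min(), X.max()))

# Craft adversarial examples with the Fast Gradient Method (evasion)
attack = FastGradientMethod(estimator=classifier, eps=0.3)
X_adv = attack.generate(x=X)

print("accuracy on clean inputs:      ", model.score(X, y))
print("accuracy on adversarial inputs:", model.score(X_adv, y))
```

The same wrapped classifier can then be used to evaluate ART's defences, such as adversarial training, against the attack.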

Explainability

The explainability of an AI system helps in testing its quality and in gaining trust in its decisions. “It’s a tool in our hands for gaining trust in our solution. Some algorithms are explainable by themselves, but other algorithms, such as neural networks, work like black boxes. In this kind of situation, we should create ways to explain the outcome of these algorithms, and we can do it locally, on a single outcome, or globally. Global explainability is a little more difficult than local explainability. However, it is worth doing, because it helps in understanding the quality of the solution and its possible issues.”
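
To make the local/global distinction concrete, here is a minimal sketch of a global explanation using scikit-learn's permutation importance on a black-box model; the dataset is an arbitrary illustrative choice, and libraries such as LIME or SHAP would be the usual tools for local, single-prediction explanations:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Train a black-box model
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: {result.importances_mean[i]:.3f}")
```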

Value alignment

How do we make sure machines will act as we expect? Trust is a critical part of AI ethics. As Najla explains, “I want to make sure that my Artificial Intelligence makes the right decision when the decision is not only some kind of business decision but is an ethical decision.”

According to Najla, “If I hold a driving licence and I take a decision while driving, I bear the responsibility for that decision. When I use a self-driving car, who bears the responsibility? How can I embed my values inside my code? This is a very tough problem, because ethics is not static: it changes with culture and it changes with time, so it’s very difficult to determine globally, equally valid values to add to our code.”

An example of this at work is MIT’s Moral Machine, a platform for gathering human perspectives on moral decisions made by machine intelligence, such as self-driving cars. It presents moral dilemmas in which a driverless car must decide, for example, between killing two passengers or five pedestrians. Najla shares, “It’s a very tantalising tool to use. And you can see how your answers compare with the median answers of other people, so it’s quite compelling.”

Any discussion around ethics in AI will clearly require a broadening of the conversation to include not only technical people but also people who work on philosophical and ethical questions every day, so solving this kind of problem will take time. We need multidisciplinary discussion and considered regulation: the ethics of everyone is required to reach a real answer.

Tagged as: AI ethics

