Codemotion Magazine

Machine Learning Developer

Overselling can hurt AI

AI is becoming more and more pervasive in our lives and as a consequence we are getting aware of its benefits as well as of its limitations.

Last update October 23, 2019 by Gabriella Giordano

At Codemotion Milan 2019, Nicole Alexander will address one of the main concerns of today's data scientists in her talk Diversity in AI: is the design of AI systems tainted by our very natural, very human flaws?
If you are interested, do not miss this opportunity: tickets are still available!

Taking efficiency for granted, how much can we trust AI to be objective, neutral and fair? In other words, not just faster than us, but better?

Health, social media, education, justice… the more we delegate to artificial intelligence, the more we risk slipping from technical issues into ethical ones, and maybe we are not ready for that yet.

From neuron cells to intelligent systems

Nobody knows exactly how the brain works, and therefore no one knows what intelligence truly is. When AI research began decades ago, the best approach to diving into the mysteries of thinking was to mimic the physiological characteristics of brain tissue.

This very low-level approach led to the theoretical definition of the neural networks that power artificially intelligent systems today.

Neural networks come in many variants and flavours, but they are all based on the same principle: a set of neurons is trained to respond to known inputs with the expected output.

A neuron is a simple abstraction: a threshold function whose result is zero or non-zero, depending on the value of its weighted input. Just like real neuron cells, when the threshold is exceeded the neuron is activated, i.e. it produces an output for the next neuron, and so on. The last layer in the network provides the final result, based on the elaborations carried out by the previous layers.
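As a toy illustration, such a neuron can be written as a plain threshold function. The weights and threshold below are invented for the example; here they happen to make the neuron behave like a logical AND gate:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs exceeds the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

# With these (made-up) weights the neuron activates only when both
# inputs are on, because only then does the sum exceed 1.5.
print(neuron([1, 1], [1.0, 1.0], 1.5))  # 1 (activated)
print(neuron([1, 0], [1.0, 1.0], 1.5))  # 0 (below threshold)
```

Real networks stack thousands of such units and use smooth activation functions instead of a hard cutoff, but the principle is the same.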

The training process uses backpropagation feedback to adjust the weights of the connections between neurons, try after try.

Training is complete when the network converges, i.e. when it produces the expected results for the inputs provided during training. At this stage, we can feed it new inputs and expect plausible results in return.
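This feedback loop can be sketched with a single sigmoid neuron and a made-up toy task (learning the logical OR function); the learning rate, epoch count, and squared-error gradient are illustrative choices, not a prescription:

```python
import math
import random

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

# Hypothetical toy dataset: input pairs and their expected OR output.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # random initial weights
b = random.uniform(-1, 1)                      # random initial bias

# Try after try, compare the output with the expected one and push the
# weights in the direction that reduces the error (the feedback step).
for _ in range(5000):
    for inputs, target in data:
        out = sigmoid(sum(x * wi for x, wi in zip(inputs, w)) + b)
        grad = (out - target) * out * (1 - out)  # gradient of squared error
        w = [wi - 0.5 * grad * x for wi, x in zip(w, inputs)]
        b -= 0.5 * grad

# Convergence: the trained neuron reproduces the expected outputs.
for inputs, target in data:
    out = sigmoid(sum(x * wi for x, wi in zip(inputs, w)) + b)
    assert round(out) == target
```

In a multi-layer network the same error signal is propagated backwards through every layer, which is where the name backpropagation comes from.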

It has been observed in nature that complex neural networks, with many neurons and many layers, can usually carry out finer elaborations.

This is the basic assumption behind what we call deep learning: by increasing the complexity of neural networks, we can make them smarter.

One of the major issues with this approach is the initial setup of the network: how many neurons do you need? How many layers? How should the initial weights be chosen? Can the network converge at all?

The first attempts to build neural networks in the 1980s were based on random initialization, which made convergence very hard to achieve. Besides, artificial neural networks suffered from both a lack of data and a lack of computational power.

Today we live in a big-data world, and artificial intelligence is gaining popularity again thanks to new methodologies with a less naive approach to initialization. Furthermore, neural networks can now take advantage of hardware resources that were simply inconceivable a few years ago.

Nevertheless, although modern neural networks achieve incredible results compared to the past, we now face new challenges and new issues.

What’s your bias?

Culture, religion, gender, age? We all have more than one bias that creeps into every thought and action. While we, as human beings, can resort to ethics and moral reasoning to work out our own issues, AI systems are biased by their very foundations.

Neural networks are essentially a sophisticated tool for approximating an unknown function defined only by a set of known input-output pairs. They are surprisingly good at their job, but they can handle only a limited amount of unpredictability.

In practice, that means you cannot ask a neural network to process something too far from what it was built for.

In other words, neural networks are inherently biased by their training dataset with consequences that go far beyond technical issues.

The diversity.ai project collects a set of case studies that illustrate how biases in the training dataset affect neural network performance.

Imagine what happens if the AI used by a health insurance company is unable to properly assess the heart attack risk of some individuals just because they belong to underrepresented categories. Both the people and the business are harmed.
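A deliberately crude sketch of that insurance scenario, with entirely made-up numbers and a toy one-dimensional "model" in place of a real network, shows how a decision rule learned only from one group can silently fail on another:

```python
# Hypothetical data: a single risk marker per patient, with label
# 1 = high risk, 0 = low risk. Suppose group A patients are high-risk
# above roughly 0.6 on this marker, while group B patients are
# high-risk above roughly 0.3 -- but group B is absent from training.
training = [(0.2, 0), (0.4, 0), (0.7, 1), (0.9, 1)]  # group A only

# Toy "training": pick the midpoint between the highest low-risk and
# lowest high-risk sample as the decision cutoff.
cutoff = (max(x for x, y in training if y == 0) +
          min(x for x, y in training if y == 1)) / 2  # 0.55

def predict(marker):
    return 1 if marker > cutoff else 0

print(predict(0.8))  # group A high-risk patient: correctly flagged (1)
print(predict(0.4))  # group B high-risk patient: missed entirely (0)
```

The model is not "wrong" in any technical sense; it faithfully reflects the only data it ever saw, which is exactly the problem.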

Since diversity affects everyone in some way, these kinds of problems occur in every use-case scenario for AI, with unpredictable consequences.

That is why scientists are now shifting their attention from network configuration to data sampling methodologies, trying to remove the barrier of diversity once and for all at its roots, at least for AI.

This trend has helped us realize that not only does it take a huge effort to build representative datasets, but a change in the mindset of the people involved in the sampling process is also needed to accomplish the task.

Still, this praiseworthy effort might not be enough, and for once in history we cannot simply blame it on society.

Trust me, I’m 50% sure!

It is indeed true that managing diversity in neural networks could produce better results, but there is a big catch: we need to agree on what better means in this context.

For the training process of neural networks to succeed, whether supervised or unsupervised, the training dataset must be, at least to some extent, homogeneous.

In practice, that means there must be some inherent similarity between the distinct samples of the training set in order to specialize the network to do its job.

When we feed the network very heterogeneous samples, we inevitably lower the confidence of the results it produces; that is, we trade accuracy for generality.

Can we just pack in more layers to make the network smarter? Yes, but there is no easy way to do so. In fact, increasing the complexity of a neural network makes convergence harder, if not impossible.

Surprisingly, bias in neural networks is not such a bad thing: it does not have the same meaning it has for people. After all, we should ask ourselves: what is the use of an AI that is perfectly fair but has very poor confidence rates?

Biases in AI should be regarded as a technical limitation of the current state of the art, rather than a side effect of our complex society. We should therefore learn to use this tool not only efficiently but also responsibly.

Patronizing vs marketing AI

One day, neural networks will probably evolve into truly intelligent systems… just not anytime soon. Unfortunately, whenever the line between science and marketing begins to blur, we introduce semantic issues that can affect our choices.

Just as we wouldn't call any painting on the wall Art, we should not use the term AI so lightly. The word "intelligence", when applied to algorithms, captures not only the attention but also the trust of users, raising unrealistic expectations.

Although we cannot easily delegate to AI the solution of problems we still need to address on our own, this topic will be one of the most debated in the years to come.

Do not miss the chance to attend Diversity in AI by Nicole Alexander at Codemotion Milan 2019 and discover the role that both ethics and technology play in the design of the AI systems of tomorrow. Tickets are still available: get yours here!
