Codemotion Magazine

Overselling can hurt AI

AI is becoming more and more pervasive in our lives and as a consequence we are getting aware of its benefits as well as of its limitations.

October 23, 2019 by Gabriella Giordano

At Codemotion Milan 2019, Nicole Alexander will address one of the main concerns of today's data scientists in her talk Diversity in AI: is the design of AI systems spoiled by our very own, very natural, very human flaws?
If you are interested, do not miss this opportunity: tickets are still available!

Taking efficiency for granted, how much can we trust AI to be objective, neutral and fair? In other words, to be not just faster but better than us?

Health, social media, education, justice… the more we delegate to artificial intelligence, the more we risk slipping from technical issues into ethical ones, and perhaps we are not ready for that yet.

From neuron cells to intelligent systems

Nobody knows exactly how the brain works, and therefore no one knows what intelligence truly is. When AI studies began decades ago, the best approach to dive into the mysteries of thinking was to mimic the physiological characteristics of brain tissue.

This very low-level approach led to the theoretical definition of the neural networks that power artificially intelligent systems today.

Neural networks come in many variants and flavours, but they are all based on the same principle: a set of neurons is trained to respond to known inputs with the expected output.

A neuron is a simple abstraction: a threshold function whose result is zero or non-zero, according to the value of its weighted input. Just like real neuron cells, when the threshold is exceeded the neuron is activated, i.e. it produces an output for the next neuron, and so on. The last layer in the network provides the final result, based on the elaborations carried out by the previous layers.
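This abstraction fits in a few lines of Python; the weights and threshold below are illustrative values, not taken from any real network:

```python
# A neuron as a threshold function: fire (return 1) when the weighted sum
# of the inputs exceeds the threshold, stay silent (return 0) otherwise.

def neuron(inputs, weights, threshold):
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum > threshold else 0

print(neuron([1, 1], [0.6, 0.6], 1.0))  # 1.2 > 1.0 → fires: 1
print(neuron([1, 0], [0.6, 0.6], 1.0))  # 0.6 ≤ 1.0 → silent: 0
```

Chaining the output of such functions into further neurons, layer after layer, gives the networks described above.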

The training process makes use of back-propagation feedback to adjust the connections between neurons, trial after trial.

The training is complete when the network converges, i.e. it produces the expected results for the inputs provided during training. At this stage, we can feed it new inputs and expect plausible results in return.
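As a minimal sketch of this feedback idea, the classic perceptron rule below nudges a single neuron's weights after each wrong answer until it reproduces every expected output. Multi-layer networks use gradient-based back-propagation instead; this single-neuron update is only an illustration of training-by-feedback:

```python
# Training by repeated feedback: after every wrong answer we nudge the
# weights toward the expected output (perceptron rule, a single-neuron
# stand-in for back-propagation in multi-layer networks).

def predict(inputs, weights, bias):
    return 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0

def train(samples, epochs=20, lr=0.1):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for inputs, expected in samples:
            error = expected - predict(inputs, weights, bias)
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Convergence: the trained neuron reproduces every expected output
# for the inputs provided during training (here, the AND function).
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train(data)
print(all(predict(i, weights, bias) == e for i, e in data))  # → True
```

The final `all(...)` check is exactly the convergence criterion described above: expected results for every training input.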

It has been observed in nature that complex neural networks, with many neurons and many layers, can usually carry out finer elaborations.

This is the basic assumption behind what we call deep learning: by increasing the complexity of neural networks, we can make them smarter.

One of the major issues with this approach is the initial setup of the network: how many neurons do you need? How many layers? How do you pick the initial weights? Can the network converge at all?

The first attempts to build neural networks in the 1980s were based on random initialization, which made convergence very hard to achieve. Besides, the definition of artificial neural networks suffered from a lack of both data and computational power.

Today we live in a big-data world, and artificial intelligence is gaining popularity again thanks to the development of new methodologies with a less naive approach to initialization. Furthermore, neural networks can now take advantage of hardware resources that were simply inconceivable a few years ago.

Nevertheless, although modern neural networks achieve incredible results compared to the past, we now face new challenges and new issues.

What’s your bias?

Culture, religion, gender, age? We all have more than one bias that creeps into every thought and action. While we, as human beings, can resort to ethics and moral thinking to work out our own issues, AI systems are biased by their very foundations.

Neural networks are indeed a sophisticated tool to approximate an unknown function defined only by a set of known input-output pairs. They are surprisingly good at their job, but they can handle only a limited amount of unpredictability.

In practice, this means you cannot ask neural networks to process something that is too far from what they were built for.

In other words, neural networks are inherently biased by their training dataset, with consequences that go far beyond technical issues.

The diversity.ai project has a set of case studies that illustrate how biases in the training dataset affect neural network performance.

Imagine what happens if the AI used by a health insurance company is unable to properly assess the heart attack risk of some individuals just because they belong to underrepresented categories. Both people and businesses are harmed.
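A toy example makes the danger concrete: on a training set where high-risk cases are heavily underrepresented, even a trivial "always predict the majority" model scores high overall accuracy while missing every high-risk individual. The 95/5 split and the risk labels below are purely illustrative:

```python
# An imbalanced dataset in action: 95 "low-risk" samples and only 5
# "high-risk" ones. A trivial model that always predicts the majority
# class looks accurate overall yet fails the minority completely.

labels = [0] * 95 + [1] * 5                    # 0 = low risk, 1 = high risk
majority = max(set(labels), key=labels.count)  # the dominant class: 0
predictions = [majority] * len(labels)         # predict "low risk" for everyone

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
high_risk_hits = sum(p == y for p, y in zip(predictions, labels) if y == 1)

print(accuracy)        # → 0.95, looks great on paper
print(high_risk_hits)  # → 0, every high-risk case is missed
```

Real neural networks are subtler than this strawman, but they drift in the same direction: what the training set underrepresents, the model underserves.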

Since diversity affects everyone in some way, these kinds of problems occur in every use-case scenario for AI, with unpredictable consequences.

That is why scientists are now shifting their attention from network configuration to data sampling methodologies, trying to tear down the barrier of diversity at its roots, once and for all, at least for AI.

This trend has helped us realize that not only does building representative datasets take a huge effort, but accomplishing the task also requires a change in the mindset of the people involved in the sampling process.

Still, this praiseworthy effort may not be enough, and for once in history we cannot simply blame it on society.

Trust me, I’m 50% sure!

It is indeed true that trying to manage diversity in neural networks could produce better results, but there is a big catch: we need to agree on what better means in this context.

For the training process of neural networks to succeed, whether supervised or unsupervised, the training dataset must be, at least to some extent, homogeneous.

In practice, this means there must be some inherent similarity between distinct samples of the training set in order to specialize the network for its job.

When we feed the network very heterogeneous samples, we inevitably give up some confidence in the results it produces; that is, we trade accuracy for generality.

Can we just pack in more layers to make the network smarter? Yes, but there is no easy way to do so. In fact, increasing the complexity of a neural network makes convergence harder, if not impossible.

Surprisingly, bias in neural networks is not such a bad thing: it does not have the same meaning it has for people. After all, we should ask ourselves: what use is an AI that is perfectly fair but has very poor confidence rates?

Biases in AI should be regarded as a technical limitation of the current state of the art, rather than a side effect of our complex society. We should therefore learn to use this tool not only efficiently but also responsibly.

Patronizing vs. marketing AI

Neural networks will probably evolve into truly intelligent systems one day… just not anytime soon. Unfortunately, whenever the line between science and marketing begins to blur, we introduce semantic issues that can have an impact on our choices.

Just as we wouldn't call any painting on the wall Art, we should not use the term AI so lightly. The word “intelligence”, when applied to algorithms, captures not only the attention but also the trust of users, raising unrealistic expectations.

Although we cannot easily delegate to AI the solution of problems that we still need to address on our own, this topic will be one of the most debated in the years to come.

Do not miss the chance to attend Diversity in AI by Nicole Alexander at Codemotion Milan 2019 and discover the role that both ethics and technology play in the design of the AI systems of tomorrow. Tickets are still available: get yours here!
