{"id":11239,"date":"2020-10-12T09:52:24","date_gmt":"2020-10-12T07:52:24","guid":{"rendered":"https:\/\/www.codemotion.com\/magazine\/?p=11239"},"modified":"2022-01-05T20:06:17","modified_gmt":"2022-01-05T19:06:17","slug":"image-recognition-on-mcu","status":"publish","type":"post","link":"https:\/\/www.codemotion.com\/magazine\/ai-ml\/image-recognition-on-mcu\/","title":{"rendered":"Seeing Is Believing: Image Recognition on a \u20ac10 MCU"},"content":{"rendered":"\t\t\t\t<div class=\"wp-block-uagb-table-of-contents uagb-toc__align-left uagb-toc__columns-1  uagb-block-819603c7      \"\n\t\t\t\t\tdata-scroll= \"1\"\n\t\t\t\t\tdata-offset= \"30\"\n\t\t\t\t\tstyle=\"\"\n\t\t\t\t>\n\t\t\t\t<div class=\"uagb-toc__wrap\">\n\t\t\t\t\t\t<div class=\"uagb-toc__title\">\n\t\t\t\t\t\t\tTable Of Contents\t\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t\t<div class=\"uagb-toc__list-wrap \">\n\t\t\t\t\t\t<ol class=\"uagb-toc__list\"><li class=\"uagb-toc__list\"><a href=\"#why-we-need-image-recognition-at-the-edge\" class=\"uagb-toc-link__trigger\">Why we need image recognition at the edge<\/a><li class=\"uagb-toc__list\"><a href=\"#a-brief-history-of-image-recognition\" class=\"uagb-toc-link__trigger\">A brief history of image recognition<\/a><li class=\"uagb-toc__list\"><a href=\"#using-cnns-for-image-recognition\" class=\"uagb-toc-link__trigger\">Using CNNs for image recognition<\/a><li class=\"uagb-toc__list\"><a href=\"#steps-for-image-recognition\" class=\"uagb-toc-link__trigger\">Steps for image recognition<\/a><li class=\"uagb-toc__list\"><a href=\"#a-practical-implementation\" class=\"uagb-toc-link__trigger\">A practical implementation<\/a><li class=\"uagb-toc__list\"><a href=\"#conclusions\" class=\"uagb-toc-link__trigger\">Conclusions<\/a><\/ol>\t\t\t\t\t<\/div>\n\t\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t<\/div>\n\t\t\t\n\n\n<p class=\"eplus-efny3Z\">This series is exploring the rationale for moving machine <span id=\"urn:enhancement-f64aa37d\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/learning\">learning<\/span> to the network edge. This <span id=\"urn:enhancement-3cadbbcc\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/article_publishing\">article<\/span> looks in more detail at <span id=\"urn:enhancement-b5fbeed8\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/image\">image<\/span> recognition, one of the prime <span id=\"urn:enhancement-3bf4095b\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/use_case\">use cases<\/span> for <span id=\"urn:enhancement-a994b11b\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/machine_learning_2\">ML<\/span> at the edge.<\/p>\n\n\n\n<p class=\"eplus-kYwOOr\">As explained in previous <span id=\"urn:enhancement-f256fadf\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/article_publishing\">articles<\/span>, there are many <span id=\"urn:enhancement-1ae479fc\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/use_case\">use cases<\/span> for <a href=\"https:\/\/www.codemotion.com\/magazine\/dev-hub\/machine-learning-dev\/machine-learning-edge-example\/\">machine learning at the network edge<\/a>. So far, I explained the rationale for running ML models at the edge. 
introduced some of the <a href=\"https:\/\/www.codemotion.com\/magazine\/dev-hub\/machine-learning-dev\/edge-machine-learning\/\">hardware and tools<\/a>, and gave a practical example of <a href=\"https:\/\/www.codemotion.com\/magazine\/dev-hub\/machine-learning-dev\/machine-learning-edge-example\/\">implementing gesture recognition<\/a>. Here, I explore <span id=\"urn:enhancement-ec4c4a0b\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/image\">image<\/span> recognition in detail. You will learn what <span id=\"urn:enhancement-2625c6cf\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/image\">image<\/span> recognition is and how convolutional <span id=\"urn:enhancement-84e16c43\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/artificial_neural_network\">neural networks<\/span> help implement it. At the end, there is a practical example of implementing <span id=\"urn:enhancement-2fde0ec7\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/image\">image<\/span> recognition on a <span id=\"urn:enhancement-42c5bd76\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/integrated_circuit\">Microchip<\/span> SAM E54 <span id=\"urn:enhancement-cd12d75c\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/microcontroller\">MCU<\/span>.<\/p>\n\n\n\n<h2 class=\"eplus-4ruyBr wp-block-heading\" id=\"h-why-we-need-image-recognition-at-the-edge\">Why we need image recognition at the edge<\/h2>\n\n\n\n<p class=\"eplus-A4l30i\">Nowadays, everyone is familiar with <span id=\"urn:enhancement-d1f1d923\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/image\">image<\/span> recognition. It enables autonomous driving. It powers <span id=\"urn:enhancement-ded48785\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/facial_recognition_system\">facial recognition systems<\/span>. Medics can even use it to <a href=\"https:\/\/www.nature.com\/articles\/s41586-019-1799-6\" target=\"_blank\" aria-label=\"undefined (opens in a new tab)\" rel=\"noreferrer noopener\">diagnose breast cancer<\/a> from mammograms. Many of these <span id=\"urn:enhancement-eb86270\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/application_software\">applications<\/span> need to run in <span id=\"urn:enhancement-a333382a\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/real-time_data\">real-time<\/span>. They cannot rely on network connectivity and often they need to run in lightweight <span id=\"urn:enhancement-f4af073e\" class=\"textannotation disambiguated wl-thing\" itemid=\"http:\/\/data.wordlift.io\/wl01770\/entity\/computer_hardware\">hardware<\/span> with low power draw. 
All these applications rely on ML models, such as [convolutional neural networks](https://en.wikipedia.org/wiki/Convolutional_neural_network) (CNNs). As explained below, these algorithms allow a computer to pick out and identify features within an image. However, CNNs are complex, requiring lots of parallel operations to run efficiently. Fortunately, modern embedded MCUs are able to run quite large neural networks. But let's start by looking at the history of CNNs and image recognition.

## A brief history of image recognition

Image recognition may seem quite new. It's only a few years since the first reports of computers learning to recognise cats. But in fact, image recognition can be traced back many decades. Of course, the original image recognition systems were nowhere near as powerful as today's, but they were still able to perform some useful tasks.

### The artificial neuron

The underlying element in any neural network is the artificial neuron. These data structures are loosely based on human neurons. Each neuron takes a number of weighted inputs and combines them using a transfer function. It then uses an activation function to determine whether to fire or not. This is shown in the following diagram.

*Figure: structure of an artificial neuron.*

The transfer function is typically a simple sum or product.
The activation function can be one of several operators, including a step, identity, or logistic function.

*Figure: different activation functions give different outputs from the neuron.*

If the output of the activation function exceeds the threshold, the neuron fires; otherwise it remains dormant.
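To make this concrete, here is a minimal sketch of a single neuron in C++, using a weighted sum as the transfer function and a step as the activation function. All names and values are illustrative; this is not code from the project later in this article.

```cpp
#include <iostream>
#include <vector>

// Transfer function: a simple weighted sum of the inputs plus a bias.
double transfer(const std::vector<double>& inputs,
                const std::vector<double>& weights, double bias) {
  double sum = bias;
  for (std::size_t i = 0; i < inputs.size(); ++i) {
    sum += inputs[i] * weights[i];
  }
  return sum;
}

// Step activation: the neuron fires (1) only if the combined input
// exceeds the threshold; otherwise it stays dormant (0).
int step_activation(double x, double threshold = 0.0) {
  return x > threshold ? 1 : 0;
}

int main() {
  const std::vector<double> inputs = {0.5, 0.9, 0.2};
  const std::vector<double> weights = {0.4, 0.7, -0.3};
  const double net = transfer(inputs, weights, /*bias=*/-0.5);
  std::cout << "neuron fires: " << step_activation(net) << "\n";  // prints 1
  return 0;
}
```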
### From neurons to neural networks

By itself, an artificial neuron is not much use. Neurons become powerful when you combine them into neural networks. All artificial neural networks (ANNs) consist of at least three layers: an input layer, one or more hidden layers, and an output layer. The number of inputs depends on your data. The ANN classifies the input data into two or more outputs. When you feed in your data, only one output neuron should activate. Which neuron fires depends on the weights you set in the network. The process of setting these weights so that the correct output fires is called training.

## Using CNNs for image recognition

Convolutional neural networks are a form of deep neural network widely used for image recognition. They are deep because they utilise multiple hidden layers. They are convolutional because many of their hidden layers convolve, or simplify, the input image into a feature map. For instance, you might take a 64×64 pixel bitmap and reduce it to a 4×4 matrix showing which areas of the image were darker and which lighter.
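As an illustration of that 64×64-to-4×4 reduction, here is a sketch in C++ of average pooling, one of the simplification steps CNNs use. Note this shows only the down-sampling idea: a real CNN also applies learned convolution kernels, which this sketch omits, and all names here are illustrative.

```cpp
#include <array>
#include <cstdio>

constexpr int kSize = 64;           // input bitmap is 64x64 pixels
constexpr int kOut = 4;             // output feature map is 4x4
constexpr int kWin = kSize / kOut;  // each output cell pools a 16x16 window

using Bitmap = std::array<std::array<float, kSize>, kSize>;
using FeatureMap = std::array<std::array<float, kOut>, kOut>;

// Average-pool the bitmap down to a coarse feature map: each cell
// reports how dark its 16x16 region of the image is, on average.
FeatureMap pool(const Bitmap& img) {
  FeatureMap out{};
  for (int r = 0; r < kOut; ++r) {
    for (int c = 0; c < kOut; ++c) {
      float sum = 0.0f;
      for (int i = 0; i < kWin; ++i) {
        for (int j = 0; j < kWin; ++j) {
          sum += img[r * kWin + i][c * kWin + j];
        }
      }
      out[r][c] = sum / (kWin * kWin);
    }
  }
  return out;
}

int main() {
  static Bitmap img{};  // static: 16KB is large for the stack
  // Test pattern: dark (1.0) top half, light (0.0) bottom half.
  for (int r = 0; r < kSize / 2; ++r) {
    for (int c = 0; c < kSize; ++c) img[r][c] = 1.0f;
  }
  const FeatureMap map = pool(img);
  for (const auto& row : map) {
    for (float v : row) std::printf("%.2f ", v);
    std::printf("\n");  // prints two rows of 1.00 then two rows of 0.00
  }
  return 0;
}
```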
*Figure: convolution can simplify hand-written numerals.*

One of the fathers of image recognition is the French computer scientist [Yann LeCun](https://en.wikipedia.org/wiki/Yann_LeCun). In the late 1980s, he worked at AT&T Bell Laboratories in New Jersey. A key project at the lab looked to create a system to recognise hand-written zip codes on envelopes, with the aim of automating the sorting of mail. In 1989, LeCun's colleagues published a paper showing how to use a neural network to perform this task. However, this network had to be laboriously tuned by hand. LeCun made a huge breakthrough by applying a process called [backpropagation](https://en.wikipedia.org/wiki/Backpropagation) to the problem.

Backpropagation involves taking the outputs and working backwards through the CNN, changing the weights of each neuron with the aim of reducing the mean error each time. There are numerous approaches for doing this, such as gradient descent. You can read more about it in this [tutorial](https://towardsdatascience.com/understanding-backpropagation-algorithm-7bb3aa2f95fd) on Towards Data Science.
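To show the idea in miniature, here is a sketch of gradient descent applied to a single logistic neuron with a squared-error loss. A real CNN repeats this kind of update, layer by layer via the chain rule, for millions of weights; the values below are purely illustrative.

```cpp
#include <cmath>
#include <cstdio>

// Logistic activation, as used by the neuron being trained.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

int main() {
  // A single logistic neuron with one input. We want input 1.0 to
  // produce target output 0.0, so gradient descent should push the
  // weight strongly negative.
  double w = 2.0;
  const double x = 1.0, target = 0.0, learning_rate = 0.5;

  for (int epoch = 0; epoch < 100; ++epoch) {
    const double y = sigmoid(w * x);
    // Squared error E = 0.5 * (y - target)^2. The chain rule gives
    // dE/dw = (y - target) * y * (1 - y) * x.
    const double grad = (y - target) * y * (1.0 - y) * x;
    w -= learning_rate * grad;  // step against the gradient
  }
  std::printf("trained weight: %.3f, output: %.3f\n", w, sigmoid(w * x));
  return 0;
}
```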
## Steps for image recognition

Modern image recognition divides the problem into three steps: detection (or localisation), classification, and segmentation.

### Detection/localisation

This involves identifying different features within the image. In the figure below (from a [Facebook Engineering blog post](https://engineering.fb.com/ml-applications/segmenting-and-refining-images-with-sharpmask/)), the system has identified a number of different elements.

*Figure: Facebook engineers demonstrate how object detection works.*

There are numerous approaches to detection and localisation. The aim is to identify whether adjacent pixels are related to each other. In the handwritten numeral 3 example above, this is relatively easy: you just identify which areas are black and which are white. But in a detailed colour image, this gets more tricky.
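For the easy black-and-white case just described, localisation can be as simple as thresholding pixels and taking the bounding box of the dark region. The following C++ toy (illustrative values only) does exactly that; real detectors working on colour images are far more sophisticated.

```cpp
#include <algorithm>
#include <array>
#include <cstdio>

constexpr int kW = 8, kH = 8;

int main() {
  // Toy grayscale image: a dark blob (0.9) on a light background (0.0).
  std::array<std::array<float, kW>, kH> img{};
  for (int r = 3; r <= 5; ++r) {
    for (int c = 2; c <= 6; ++c) img[r][c] = 0.9f;
  }

  // Localise the object by finding the bounding box of "dark" pixels.
  int top = kH, bottom = -1, left = kW, right = -1;
  for (int r = 0; r < kH; ++r) {
    for (int c = 0; c < kW; ++c) {
      if (img[r][c] > 0.5f) {  // darkness threshold
        top = std::min(top, r);
        bottom = std::max(bottom, r);
        left = std::min(left, c);
        right = std::max(right, c);
      }
    }
  }
  std::printf("object at rows %d-%d, cols %d-%d\n", top, bottom, left, right);
  return 0;  // prints: object at rows 3-5, cols 2-6
}
```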
### Classification

Next, you try to classify each region you have identified. This is where machine learning really comes to the fore: it allows you to train your classifier against a large, labelled dataset. In the image above, the classifier is able to tell that there are three classes of object. It is thus able to determine that the image contains a man, a dog, and five sheep.

### Segmentation

The final stage is to understand how the items relate to each other. For instance, is the man in front of or behind the sheep? This process is known as segmentation, or semantic image segmentation. The image below shows the result for the picture of the shepherd.

*Figure: the system is able to segment the image and work out the relationship between elements.*

This gets much harder when you have complex images, like a typical street scene. Self-driving vehicles need to do real-time segmentation, as shown in the following frame taken from Hengshuang Zhao's YouTube video 'ICNet for Real-Time Semantic Segmentation on High-Resolution Images'.

*Figure: there are many different objects to interpret in this street scene.*

Once a neural network has been trained to perform image recognition, you can deploy it on any suitable hardware. Here, we are interested in deploying image recognition at the network edge.
The rest of this article explores how to do this in practice.

## A practical implementation

As we have already seen, image recognition requires a trained neural network or other machine learning model. In this example, we will create a simple person-detection model. For this, you need the model, a platform to run the model on, and a camera or other image source. I have selected the Microchip SAM E54. Or, more precisely, the [SAM E54 Xplained Pro Evaluation Kit](https://www.mouser.com/new/microchip/microchip-sam-e54-xplained-pro/). This MCU evaluation board is ideal for developing ML models:

- ATSAME54P20A 32-bit Arm® Cortex®-M4F microcontroller
- 1MB of Flash memory and 256KB of SRAM
- An SD card slot
- 2 USB ports (1 debug)
- PCC camera interface
- Headers for Xplained Pro Extension Kits
- A built-in high-accuracy current meter (for precise power profiling)

### Setting up the board

Before you do anything else, you need to make some modifications to the evaluation kit board. For efficiency, the board reuses the same I/O pins for multiple connectors, and by default the PCC interface is disabled. These steps enable the PCC interface and allow you to connect a suitable camera, such as the [Seeed Studio fisheye camera](https://eu.mouser.com/ProductDetail/Seeed-Studio/114991881?qs=%2Fha2pyFaduiiOQPErtp4ch5VKxcM9qYNRnQTc0BF0%2FBWmAa2YWuZTA%3D%3D).

1. Remove the line of 9 surface-mount resistors adjacent to the PCC header. These are numbered *R621*, *R622*, *R623*, *R624*, *R625*, *R626*, *R628*, *R629*, and *R630*. This disables the Ethernet port and SD card.
2. Remove resistor *R308* to disable the QTouch button.
3. Solder 0Ω resistors (0402 package, min 1/16W) across the 4 pads *R205*, *R206*, *R207*, and *R208*. This enables the PCC camera header.
4. Solder a 2×10 pin male header to the PCC header.

Having done this, you are ready to connect the camera. For this, you need a custom adapter board.
You can buy the board from [OSH Park](https://oshpark.com/shared_projects/G03sxkXq), or you will find the files needed to create it in [this GitHub folder](https://github.com/Mouser-Electronics/TensorFlowLite-Microchip/tree/master/3D%20Files). Finally, you are ready to flash and test the ML model.

*Figure: the custom camera adapter board.*

### Preparing the code and toolchain

There are several suitable IDEs available for the SAM E54. These include Microchip's own [MPLAB® X](https://www.microchip.com/mplab/mplab-x-ide) and [Atmel Studio 7](https://www.microchip.com/mplab/avr-support/atmel-studio-7). In this project, I will use MPLAB® X. The first step is to get hold of the required compiler and libraries. You can download the 32-bit compiler for the SAM E54 from [Microchip's website](https://www.microchip.com/mplab/compilers). You will need to install this compiler on your system and set the correct PATH variable. Next, you need to download the relevant drivers and libraries for the evaluation kit. The easiest way to do this is to go to the [Atmel START](https://start.atmel.com/) website. Click the **CREATE NEW PROJECT** button and enter SAM E54 in the filter box. This should bring up a screen like the following.

*Figure: selecting the board in Atmel START.*

Select the SAM E54 Xplained Pro and set the number of cameras to 1 in the left side menu. Then click **CREATE NEW PROJECT**.
From the next screen, click **EXPORT PROJECT**.

*Figure: exporting the project from Atmel START.*

Make sure you select MPLAB X IDE, then download the pack.

### Opening the project

Open MPLAB® X IDE and go to **File → Import → START MPLAB Project**. Locate the .atzip file you just downloaded and keep clicking **Next** until you reach the screen asking you to select a compiler. Make sure you select the 32-bit compiler you installed above and click **Next**. If you are happy with the location and project name, click **Finish**. You now have a blank project with the correct drivers and libraries for using the SAM E54 with a camera module.

The next step is to clone the [source code](https://github.com/Mouser-Electronics/TensorFlowLite-Microchip/tree/master/Software) for the image detection project and add it to the project. There are various ways to do this. I chose to clone the software onto my computer and then manually add the files to the relevant folder within the project. The source code doesn't include the trained model data; it is just a skeleton. If you want to train an accurate model, you need to follow [this tutorial](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/micro/examples/person_detection/training_a_model.md) from the TensorFlow Lite website. Once you have trained your model, you will have a new file called `person_detect_model_data.cc`. This should be added to your project.

### Building and running the project

The final step is to connect your board to the computer via USB. Make sure you connect to the Debug port at the top right of the board.

*Figure: make sure you connect to the Debug USB port.*

You are now ready to build the project. Ensure you set the SAM E54 Xplained as the debug target in the project **Properties**, and choose the onboard EDBG debug header. Then click the **Build** icon. The project should build successfully and download to the evaluation board.

The software you just built provides a simple neural network model that uses image recognition for person detection. That is, it monitors the camera feed and tries to detect when a person is in the field of view.
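For orientation, here is a hedged C++ sketch of what the inference loop in such a project typically looks like, modelled on the TensorFlow Lite for Microcontrollers person-detection example. Header paths, the interpreter constructor, and the op resolver API have changed between library versions, and `capture_camera_frame`/`set_user_led` are placeholder names for the board-specific camera and LED drivers, so treat this as an illustration rather than drop-in code.

```cpp
#include <cstdint>

// TensorFlow Lite for Microcontrollers headers (paths vary by version).
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/micro/micro_mutable_op_resolver.h"
#include "tensorflow/lite/schema/schema_generated.h"

// Provided by the trained person_detect_model_data.cc file.
extern const unsigned char g_person_detect_model_data[];

// Scratch memory for the interpreter; the size needed depends on the model.
constexpr int kTensorArenaSize = 100 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void run_person_detection() {
  // Map the flatbuffer model baked into flash.
  const tflite::Model* model = tflite::GetModel(g_person_detect_model_data);

  // Register only the ops the model needs, to save flash.
  static tflite::MicroMutableOpResolver<5> resolver;
  resolver.AddConv2D();
  resolver.AddDepthwiseConv2D();
  resolver.AddAveragePool2D();
  resolver.AddReshape();
  resolver.AddSoftmax();

  static tflite::MicroInterpreter interpreter(model, resolver, tensor_arena,
                                              kTensorArenaSize);
  interpreter.AllocateTensors();

  TfLiteTensor* input = interpreter.input(0);
  // capture_camera_frame(input->data.int8, input->bytes);  // placeholder:
  // copy a grayscale frame from the PCC camera into the input tensor.

  interpreter.Invoke();

  // The output tensor holds per-class scores; the index of the
  // "person" score is model-specific.
  TfLiteTensor* output = interpreter.output(0);
  int8_t person_score = output->data.int8[1];
  (void)person_score;
  // set_user_led(person_score > kDetectionThreshold);  // placeholder
}
```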
Power on the board, making sure no person is in the camera's field of view. Wait about 10 seconds for the board to initialise, then point the camera at a person. After a few seconds, the ML model should detect the person and the User LED will illuminate. Now move the camera to face away from the person, and the User LED will switch off again.

This example is extremely simple, but even this model has real-life applications. For instance, it could be incorporated into a security camera to trigger recording when a person enters an empty room. If you want to extend it, you can try training a new model that can [detect faces](http://vis-www.cs.umass.edu/fddb/).

## Conclusions

Image recognition is one of the most important use cases for ML at the network edge, because it powers applications such as real-time facial recognition and self-driving vehicles. Image recognition requires complex ML structures, such as convolutional neural networks. However, as we have seen, you can now run these models on an MCU chip that costs substantially less than €10. In time, the capabilities of such MCUs will grow and grow. Next time, I will explore another powerful application of ML at the edge: voice control.