{"id":8136,"date":"2017-09-26T10:28:54","date_gmt":"2017-09-26T10:28:54","guid":{"rendered":"http:\/\/stblog.lunaeme.com\/?p=8136"},"modified":"2023-09-20T13:28:31","modified_gmt":"2023-09-20T13:28:31","slug":"deep-learning-neural-networks-1","status":"publish","type":"post","link":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/","title":{"rendered":"Introduction to Deep Learning Part 1: Neural Networks"},"content":{"rendered":"<p>The processes to train a machine are not that different from those that take place when humans learn. Indeed, scientists took inspiration from the human brain to create neural networks: When we try to make a complex concept understandable, an immediate\/natural way is to try and remember the way we originally absorbed the concept, what made us understand it or what we used to make it easier.<\/p>\n<p><!--more--><\/p>\n<p>The same logic should apply when faced with the task of trying to teach a machine to learn: We should teach the machine to use past examples to help it better understand a concept or use reinforcement learning techniques to ensure that the machine learns. Following this line, it makes sense to reduce the teaching process to a learning unit, and try to emulate its behavior.<\/p>\n<p>In such a world each neuron has incoming and outgoing connections. Through these connections flows a signal that activates the neurons according to its intensity. Each neuron then transmits the signal in turn depending on its activation. These premises planted the seed of the neural network theory, which surged during the mid-late twentieth century, but was later discarded: the algorithms were very heavy to train and not quite successful. Few data and low power machines were available. However, these last two points have changed radically with the arrival of Big Data and GPUs: created for video games, with a large matrix calculation capacity, perfect for network structures, which can now be more and more complex. 
This is where Deep Learning comes in.<\/p>\n<p><span style=\"font-weight: 400;\">Deep learning applications are now truly amazing, ranging from image detection to natural language processing (for example, automatic translation). It gets even more amazing when Deep Learning becomes unsupervised or is able to generate self-representations of the data. It is a fascinating subject, with stunning applications and it is now at the forefront of technical and technological innovation. It is this passion for such a motivating subject that led us to launch our first Introduction to Deep Learning course in the shape of a series of filmed sessions. In this post we introduce our first session (Please note that the <a href=\"https:\/\/www.youtube.com\/watch?v=pGOqWf7GwxI&amp;t=356s\" target=\"_blank\" rel=\"noopener noreferrer\">video tutorial: Deep Learning &#8211; introduction to neural networks<\/a> is in Spanish). &nbsp;<\/span><\/p>\n<h6>&nbsp;<img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8139\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/DeepLearning1.jpg\" alt=\"\" width=\"1024\" height=\"811\"> <span style=\"color: #808080; font-size: 10pt;\">&nbsp;<em>&nbsp;&nbsp;<span style=\"font-weight: 400;\">Figure 1: Deep Learning generated image. &nbsp;&nbsp;<\/span><\/em><\/span><\/h6>\n<p><span style=\"font-weight: 400;\">In this first filmed session, we start by defining neural networks as a machine learning model&nbsp;<\/span><span style=\"font-weight: 400;\">inspired by the human brain, which arise as a way to create highly non-linear features in an intelligent way. We then show how predictions are computed via forward propagation. After that, we define the error function and the gradient descent algorithm to minimize it, which for neural networks makes use of an additional algorithm called backpropagation. 
Note that training a neural network requires a lot of training data in order to obtain a good approximation of the gradient at each point of the parameter space (and because there are a lot of parameters in a network, it is a high-dimensional space). The backpropagation algorithm leads to a much more efficient computation of the gradient compared to a direct method.<\/span><\/p>\n<h2><b><br \/>\nNeural Networks for supervised learning<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">The remainder of this post focuses on how to use a neural network for supervised learning problems. We start with a dataset of examples (rows) with D input features, which we treat as column vectors <\/span><b>x <\/b><span style=\"font-weight: 400;\">= (x<sub>1<\/sub>, x<sub>2<\/sub>, \u2026, x<sub>D<\/sub>)<sup>T<\/sup> that we will use to teach our network. Note that the target column, which indicates what the correct output should be for each example (needed by the network to learn), is not included in this vector. The motivation of neural networks is the need for an inherently non-linear model which is able, for instance, to perform well in classification problems where the decision boundary between the classes is non-linear. 
Although linear models fed with non-linear transformations of the features (like powers or products between features) also exhibit non-linear boundaries, this rapidly becomes infeasible as the number of features of the data grows.<\/span><\/p>\n<p><span style=\"font-size: 10pt;\"><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8177 aligncenter\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/Interconnected_human_neurons3.jpg\" alt=\"\" width=\"663\" height=\"434\"><br \/>\n<i><span style=\"font-weight: 400;\">&nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp; &nbsp;Figure 2: two interconnected human neurons <\/span><\/i><\/span><\/p>\n<p><span style=\"font-weight: 400;\">Neural networks are inspired by how the human brain works: it is composed of simple neurons, each carrying out a simple computation with its inputs and transmitting its output to another neuron (Fig. 2). Similarly, neural networks consist of simple units called artificial neurons (Fig. 3, left) which apply some non-linear function <\/span><i><span style=\"font-weight: 400;\">g<\/span><\/i><span style=\"font-weight: 400;\"> (called the <\/span><i><span style=\"font-weight: 400;\">activation function<\/span><\/i><span style=\"font-weight: 400;\">)<\/span> <span style=\"font-weight: 400;\">to a linear combination of its inputs. The coefficients <\/span><b>\u03b2<\/b> <span style=\"font-weight: 400;\">of the linear combination represent what the neuron has learned. 
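As a concrete illustration (a hypothetical NumPy sketch, not code from the filmed session; the names `neuron_output`, `betas` and the values are made up), a single artificial neuron with a sigmoid activation can be written as:

```python
import numpy as np

def sigmoid(z):
    # Sigmoid activation: squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

def neuron_output(x, betas, bias):
    # The neuron applies its activation g to a linear
    # combination of the inputs: g(bias + betas . x)
    return sigmoid(bias + np.dot(betas, x))

x = np.array([0.5, -1.2, 3.0])      # an example with D = 3 features
betas = np.array([0.8, 0.1, -0.4])  # learned coefficients (made up here)
print(neuron_output(x, betas, bias=0.2))  # an activation in (0, 1)
```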
The most commonly used activation function is the sigmoid.<\/span><\/p>\n<h6><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8138\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/artificial_neuron.jpg\" alt=\"\" width=\"1234\" height=\"490\"><span style=\"color: #808080;\">&nbsp; &nbsp;<span style=\"font-size: 10pt;\"> &nbsp;&nbsp;<i><span style=\"font-weight: 400;\">Figure 3: an artificial neuron with three inputs (left) and the sigmoid activation function (right)<\/span><\/i><\/span><\/span><\/h6>\n<p><span style=\"font-weight: 400;\">A neural network is composed of multiple interconnected artificial neurons organized in layers, such that the outputs (a.k.a. \u201cactivations\u201d) of the neurons of one layer act as inputs for the neurons of the next layer. Therefore, what the neurons in the inner layers are using as <\/span><i><span style=\"font-weight: 400;\">features<\/span><\/i><span style=\"font-weight: 400;\"> (i.e. the activations from the previous layer) are indeed non-linear combinations of the original input features. Such combinations possibly represent higher-level concepts (let\u2019s call them <\/span><i><span style=\"font-weight: 400;\">mysterious features<\/span><\/i><span style=\"font-weight: 400;\">), instead of the raw features present in our data.<\/span><\/p>\n<h2><b>Multilayer perceptron and forward propagation to make predictions<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">If we number the neurons of the network, the output (predictions) of a fully connected network like the one displayed in Fig. 4 can be computed as shown in the image &#8211; this process is sometimes known as <\/span><i><span style=\"font-weight: 400;\">forward propagation<\/span><\/i><span style=\"font-weight: 400;\">. The notation and operations seem a bit ugly (a.k.a. \u201cmore technical\u201d), but feel free to skip the figure if you are not interested in the maths. It is nothing more than applying the equation of Fig. 
3 repeatedly for each neuron in the hidden layer and in the output layer. In the figure we are collecting the parameters of each layer in matrices B<\/span><sup><span style=\"font-weight: 400;\">(j)<\/span><\/sup><span style=\"font-weight: 400;\">. We call the set of all of these matrices <\/span><b><i>B<\/i><\/b><span style=\"font-weight: 400;\"> &#8211; that\u2019s why the output of the network is a function of the inputs named <\/span><i><span style=\"font-weight: 400;\">h<\/span><\/i><sub><b><i>B<\/i><\/b><\/sub><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8141\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/fully_connected_network.jpg\" alt=\"\" width=\"1235\" height=\"489\"><\/p>\n<h6><span style=\"color: #808080; font-size: 10pt;\"><i><span style=\"font-weight: 400;\">&nbsp; &nbsp; &nbsp; Figure 4: a fully connected network (multilayer perceptron) with one hidden layer and three inputs for examples with D = 3 features.<\/span><\/i><\/span><\/h6>\n<p><span style=\"font-weight: 400;\">When facing a classification problem, the class is determined by re-coding the output: either by setting a threshold on a single output (binary classification) or by having as many neurons in the output layer as there are classes in our problem (multiclass classification). In the multiclass case, the output class computed by the network corresponds to the index of the neuron of the output layer which yields the highest output value.<\/span><\/p>\n<h2><b>Training algorithm: cost function and gradient descent via backpropagation<\/b><\/h2>\n<p><span style=\"font-weight: 400;\">In any Machine Learning model, fitting means finding the best values of the model\u2019s parameters. The best model is the one whose parameter values minimize the total error with respect to the actual outputs on our training data. 
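The forward propagation just described can be sketched in a few lines of Python (an illustrative NumPy example: the weight matrices are random made-up values, and the layer sizes only loosely follow Fig. 4):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward_propagation(x, B1, B2):
    # Hidden layer: each row of B1 holds one neuron's betas,
    # with the first column acting as the bias, so we prepend a 1.
    a1 = sigmoid(B1 @ np.append(1.0, x))
    # Output layer consumes the hidden activations the same way.
    return sigmoid(B2 @ np.append(1.0, a1))

rng = np.random.default_rng(42)
x = np.array([0.2, -0.5, 1.0])      # one example with D = 3 features
B1 = rng.normal(0.0, 0.1, (4, 4))   # 4 hidden neurons, 3 inputs + bias
B2 = rng.normal(0.0, 0.1, (2, 5))   # 2 output neurons, 4 hidden + bias
h = forward_propagation(x, B1, B2)
print(h, h.argmax())  # output activations and the predicted class index
```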
Since the error depends on the parameters chosen, it is a function of the parameters, called the <\/span><i><span style=\"font-weight: 400;\">cost function <\/span><\/i><span style=\"font-weight: 400;\">J. The most common error measure is the MSE (Mean Squared Error): &nbsp;<\/span><img loading=\"lazy\" decoding=\"async\" class=\"wp-image-8140 aligncenter\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/formula_DeepLearning.png\" alt=\"\" width=\"558\" height=\"54\"><span style=\"font-weight: 400;\">where h<\/span><sub><span style=\"font-weight: 400;\">\u03b80, \u03b81, \u2026, \u03b8P<\/span><\/sub><span style=\"font-weight: 400;\"> is the equation of the model (be it a neural network or any other machine learning model) depending on P+1 parameters, and <\/span><i><span style=\"font-weight: 400;\">m<\/span><\/i><span style=\"font-weight: 400;\"> is the number of training examples. <\/span><span style=\"font-weight: 400;\"><br \/>\n<\/span><\/p>\n<p><span style=\"font-weight: 400;\">If the cost function is convex, it has a single optimum, which is the global minimum. It can be found by computing the partial derivatives of J with respect to every variable (the gradient vector \u2207J), and solving an equation system to find the point where the derivatives are all equal to 0 simultaneously (exact solution). Unfortunately, this can be difficult when the model equation is complicated (which yields a non-convex error function with many <\/span><i><span style=\"font-weight: 400;\">local optima<\/span><\/i><span style=\"font-weight: 400;\">), has many variables (a large equation system) or is not differentiable. That\u2019s why we often use some approximate, iterative optimization algorithm for this task, which does not guarantee the global minimum but generally performs well. 
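To make the iterative idea concrete, here is a minimal gradient-descent sketch on a toy convex problem: fitting a hypothetical one-parameter model h(x) = theta * x by minimizing the MSE (illustrative code with made-up data, not from the session):

```python
import numpy as np

# Toy training data generated from y = 2x, so the best theta is 2
x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2.0 * x
m = len(x)

theta = 0.0   # starting point of the descent
alpha = 0.05  # learning rate: how far each step moves

for t in range(200):
    # J(theta) = (1/m) * sum((theta*x - y)^2); its derivative is:
    grad = (2.0 / m) * np.sum((theta * x - y) * x)
    theta -= alpha * grad  # step downhill, against the gradient

print(theta)  # converges very close to 2.0
```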
Gradient descent is the most common, although there is a wide family of techniques which share the approximate, iterative nature (getting closer and closer to the solution at each time step <\/span><i><span style=\"font-weight: 400;\">t<\/span><\/i><span style=\"font-weight: 400;\">) and the need for evaluating the partial derivatives of J at different parameter points \u03b8<\/span><sup><span style=\"font-weight: 400;\">(t)<\/span><\/sup><span style=\"font-weight: 400;\">.<\/span><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8144\" src=\"http:\/\/blog.stratio.com\/wp-content\/uploads\/2017\/09\/plot_error_function2.jpg\" alt=\"\" width=\"605\" height=\"352\"><\/p>\n<h6><span style=\"color: #808080; font-size: 10pt;\"><i><span style=\"font-weight: 400;\">&nbsp; &nbsp; &nbsp; Fig 5: a plot of an error function J for two parameters and the iterative process of reaching a minimum (called \u201cconvergence\u201d). Note that the starting point heavily affects the solution obtained.<\/span><\/i><\/span><\/h6>\n<p><span style=\"font-weight: 400;\">In our particular case, the parameters would be the set of matrices <\/span><b><i>B<\/i><\/b><span style=\"font-weight: 400;\"> composed of all the weights (betas) of the network. The output of a neural network is a highly complex and non-linear function of the inputs (more complex with more hidden layers), so it is really difficult to evaluate the partial derivatives. But there is a special way to compute and evaluate them at each point when training a neural network, called the <\/span><i><span style=\"font-weight: 400;\">backpropagation<\/span><\/i><span style=\"font-weight: 400;\"> algorithm. 
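As a sketch of what this looks like in code (an illustrative NumPy example with a squared-error cost on one example and made-up layer sizes, not the session's implementation), one training step on a tiny one-hidden-layer network could be:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(x, y, B1, B2, alpha=0.1):
    # Forward propagation, keeping every activation for later use
    a0 = np.append(1.0, x)                 # input features plus a bias unit
    a1 = np.append(1.0, sigmoid(B1 @ a0))  # hidden activations plus bias
    a2 = sigmoid(B2 @ a1)                  # network output h_B(x)

    # Backward pass: how much each neuron contributed to the error
    delta2 = (a2 - y) * a2 * (1.0 - a2)    # output-layer contribution
    delta1 = (B2[:, 1:].T @ delta2) * a1[1:] * (1.0 - a1[1:])  # hidden layer

    # Gradient-descent update of the betas (in place)
    B2 -= alpha * np.outer(delta2, a1)
    B1 -= alpha * np.outer(delta1, a0)
    return 0.5 * np.sum((a2 - y) ** 2)     # squared error on this example

rng = np.random.default_rng(0)
x, y = np.array([0.5, -0.3, 0.8]), np.array([1.0])
B1 = rng.normal(0.0, 0.5, (4, 4))          # 4 hidden neurons, 3 inputs + bias
B2 = rng.normal(0.0, 0.5, (1, 5))          # 1 output neuron, 4 hidden + bias
errors = [backprop_step(x, y, B1, B2) for _ in range(100)]
print(errors[0], errors[-1])  # the error shrinks as the betas are updated
```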
Although it is a bit tricky to explain here, the intuition behind it is to partition the network\u2019s error on a given training example among the neurons, to obtain <\/span><i><span style=\"font-weight: 400;\">how much each neuron has contributed <\/span><\/i><span style=\"font-weight: 400;\">to the total error on that example. For every example, this is done from the output layer back through the hidden layers until we reach the input layer, so it is like back-propagating the error from the output to the input neurons. Once backpropagation has been applied to evaluate the partial derivatives at the current point, the optimization algorithm uses them to update the network\u2019s weights (betas) and moves on to the next iteration. <\/span><\/p>\n<p><span style=\"font-weight: 400;\">In the next session we will address the subject in a more practical way, taking a look at some of the parameters like activation functions and optimizers, and of course how to get our hands dirty and start creating our own deep neural nets, using TensorFlow and Keras.<\/span><\/p>\n<p><b>Watch the video: <a href=\"https:\/\/youtu.be\/pGOqWf7GwxI\" target=\"_blank\" rel=\"noopener noreferrer\">Deep Learning Tutorial &#8211; Introduction to neural networks (Spanish)<\/a><\/b><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Deep learning applications are now truly amazing, ranging from image detection to natural language processing (for example, automatic translation). 
It gets even more amazing when Deep Learning becomes unsupervised or is able to generate self-representations of the data.<\/p>\n","protected":false},"author":2,"featured_media":8146,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[686],"tags":[],"ppma_author":[794],"yoast_head":"<!-- This site is optimized with the Yoast SEO Premium plugin v22.9 (Yoast SEO v22.9) - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Introduction to Deep Learning 1: discover the neural networks<\/title>\n<meta name=\"description\" content=\"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic translation).\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Introduction to Deep Learning Part 1: Neural Networks\" \/>\n<meta property=\"og:description\" content=\"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic translation).\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\" \/>\n<meta property=\"og:site_name\" content=\"Stratio\" \/>\n<meta property=\"article:published_time\" content=\"2017-09-26T10:28:54+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2023-09-20T13:28:31+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"730\" \/>\n\t<meta property=\"og:image:height\" content=\"312\" 
\/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"admin\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:creator\" content=\"@stratiobd\" \/>\n<meta name=\"twitter:site\" content=\"@stratiobd\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"admin\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\"},\"author\":{\"name\":\"admin\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/af4f5fbbeb95bd7d55f79d9a677e615d\"},\"headline\":\"Introduction to Deep Learning Part 1: Neural Networks\",\"datePublished\":\"2017-09-26T10:28:54+00:00\",\"dateModified\":\"2023-09-20T13:28:31+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\"},\"wordCount\":1669,\"publisher\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/#organization\"},\"image\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg\",\"articleSection\":[\"Product\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\",\"url\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\",\"name\":\"Introduction to Deep Learning 1: discover the neural 
networks\",\"isPartOf\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg\",\"datePublished\":\"2017-09-26T10:28:54+00:00\",\"dateModified\":\"2023-09-20T13:28:31+00:00\",\"description\":\"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic translation).\",\"breadcrumb\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage\",\"url\":\"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg\",\"contentUrl\":\"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg\",\"width\":730,\"height\":312},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.stratio.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Introduction to Deep Learning Part 1: Neural Networks\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#website\",\"url\":\"https:\/\/www.stratio.com\/blog\/\",\"name\":\"Stratio Blog\",\"description\":\"Corporate 
blog\",\"publisher\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.stratio.com\/blog\/?s={search_term_string}\"},\"query-input\":\"required name=search_term_string\"}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#organization\",\"name\":\"Stratio\",\"url\":\"https:\/\/www.stratio.com\/blog\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/stratio.com\/blog\/wp-content\/uploads\/2020\/06\/stratio-web-logo-1.png\",\"contentUrl\":\"https:\/\/stratio.com\/blog\/wp-content\/uploads\/2020\/06\/stratio-web-logo-1.png\",\"width\":260,\"height\":55,\"caption\":\"Stratio\"},\"image\":{\"@id\":\"https:\/\/www.stratio.com\/blog\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/stratiobd\",\"https:\/\/es.linkedin.com\/company\/stratiobd\",\"https:\/\/www.youtube.com\/c\/StratioBD\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/af4f5fbbeb95bd7d55f79d9a677e615d\",\"name\":\"admin\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/image\/589aaf4b404b1fe099b09564062c4563\",\"url\":\"https:\/\/secure.gravatar.com\/avatar\/9b181ae4395243dccaf1c3e3a4749d81?s=96&d=mm&r=g\",\"contentUrl\":\"https:\/\/secure.gravatar.com\/avatar\/9b181ae4395243dccaf1c3e3a4749d81?s=96&d=mm&r=g\",\"caption\":\"admin\"}}]}<\/script>\n<!-- \/ Yoast SEO Premium plugin. 
-->","yoast_head_json":{"title":"Introduction to Deep Learning 1: discover the neural networks","description":"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic translation).","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/","og_locale":"en_US","og_type":"article","og_title":"Introduction to Deep Learning Part 1: Neural Networks","og_description":"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic translation).","og_url":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/","og_site_name":"Stratio","article_published_time":"2017-09-26T10:28:54+00:00","article_modified_time":"2023-09-20T13:28:31+00:00","og_image":[{"width":730,"height":312,"url":"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg","type":"image\/jpeg"}],"author":"admin","twitter_card":"summary_large_image","twitter_creator":"@stratiobd","twitter_site":"@stratiobd","twitter_misc":{"Written by":"admin","Est. 
reading time":"8 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#article","isPartOf":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/"},"author":{"name":"admin","@id":"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/af4f5fbbeb95bd7d55f79d9a677e615d"},"headline":"Introduction to Deep Learning Part 1: Neural Networks","datePublished":"2017-09-26T10:28:54+00:00","dateModified":"2023-09-20T13:28:31+00:00","mainEntityOfPage":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/"},"wordCount":1669,"publisher":{"@id":"https:\/\/www.stratio.com\/blog\/#organization"},"image":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg","articleSection":["Product"],"inLanguage":"en-US"},{"@type":"WebPage","@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/","url":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/","name":"Introduction to Deep Learning 1: discover the neural networks","isPartOf":{"@id":"https:\/\/www.stratio.com\/blog\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage"},"image":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage"},"thumbnailUrl":"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg","datePublished":"2017-09-26T10:28:54+00:00","dateModified":"2023-09-20T13:28:31+00:00","description":"Deep learning and neural networks applications are truly amazing, ranging from image detection to natural language processing (eg.: automatic 
translation).","breadcrumb":{"@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#primaryimage","url":"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg","contentUrl":"https:\/\/www.stratio.com\/blog\/wp-content\/uploads\/2017\/09\/Deep-Learning.jpg","width":730,"height":312},{"@type":"BreadcrumbList","@id":"https:\/\/www.stratio.com\/blog\/deep-learning-neural-networks-1\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.stratio.com\/blog\/"},{"@type":"ListItem","position":2,"name":"Introduction to Deep Learning Part 1: Neural Networks"}]},{"@type":"WebSite","@id":"https:\/\/www.stratio.com\/blog\/#website","url":"https:\/\/www.stratio.com\/blog\/","name":"Stratio Blog","description":"Corporate blog","publisher":{"@id":"https:\/\/www.stratio.com\/blog\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.stratio.com\/blog\/?s={search_term_string}"},"query-input":"required 
name=search_term_string"}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.stratio.com\/blog\/#organization","name":"Stratio","url":"https:\/\/www.stratio.com\/blog\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.stratio.com\/blog\/#\/schema\/logo\/image\/","url":"https:\/\/stratio.com\/blog\/wp-content\/uploads\/2020\/06\/stratio-web-logo-1.png","contentUrl":"https:\/\/stratio.com\/blog\/wp-content\/uploads\/2020\/06\/stratio-web-logo-1.png","width":260,"height":55,"caption":"Stratio"},"image":{"@id":"https:\/\/www.stratio.com\/blog\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/stratiobd","https:\/\/es.linkedin.com\/company\/stratiobd","https:\/\/www.youtube.com\/c\/StratioBD"]},{"@type":"Person","@id":"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/af4f5fbbeb95bd7d55f79d9a677e615d","name":"admin","image":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.stratio.com\/blog\/#\/schema\/person\/image\/589aaf4b404b1fe099b09564062c4563","url":"https:\/\/secure.gravatar.com\/avatar\/9b181ae4395243dccaf1c3e3a4749d81?s=96&d=mm&r=g","contentUrl":"https:\/\/secure.gravatar.com\/avatar\/9b181ae4395243dccaf1c3e3a4749d81?s=96&d=mm&r=g","caption":"admin"}}]}},"authors":[{"term_id":794,"user_id":2,"is_guest":0,"slug":"admin","display_name":"admin","avatar_url":"https:\/\/secure.gravatar.com\/avatar\/9b181ae4395243dccaf1c3e3a4749d81?s=96&d=mm&r=g","0":null,"1":"","2":"","3":"","4":"","5":"","6":"","7":"","8":""}],"amp_enabled":true,"_links":{"self":[{"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/posts\/8136"}],"collection":[{"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/comments?post=8136"}],"version-history"
:[{"count":20,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/posts\/8136\/revisions"}],"predecessor-version":[{"id":15678,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/posts\/8136\/revisions\/15678"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/media\/8146"}],"wp:attachment":[{"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/media?parent=8136"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/categories?post=8136"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/tags?post=8136"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/www.stratio.com\/blog\/wp-json\/wp\/v2\/ppma_author?post=8136"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}