By now, regular readers of this website already know something about Big Data. In one of our practical articles, we showed how IBM Watson can be used to analyze text in Google Docs. In this series on approaches to deep learning, we will cover a minimum of theory around AI, machine learning, natural language processing and, of course, deep learning itself. In today's age of technology, people want self-learning machines, for example self-driving cars. In computer science, these topics are commonly referred to as artificial intelligence. Artificial intelligence is divided into several areas; in computer engineering, this intelligence is represented by machine learning. Previously, we briefly discussed the topic machine learning vs AI vs deep learning in plain language. As the name suggests, machine learning deals with algorithms that learn automatically. An important method of machine learning is deep learning. This method has been around for a while, but until recently it could not be implemented properly, because it requires large amounts of data and very high computing power. In times of big data, this is less and less of a problem. Deep learning is still under development but is already being used in one application or another. A good example is Apple's speech recognition, Siri. There are several approaches to deep learning, which are explicitly explained in this article. Cortana, Siri, Skype: almost everyone today uses software based on deep learning. The topic has been researched for some years now and is currently gaining momentum.
In this article series, we will first explain the basics of machine learning as well as the basics of deep learning. Afterwards, we will describe various forms of deep learning and give examples. After explaining the basics and the application examples, we will explain the approaches of deep learning. The different types of deep learning will then be compared with each other on the basis of criteria, and their advantages and disadvantages will be discussed.
Approaches of Deep Learning: Basics of Closely Related Matters
This section explains the most important basics of artificial intelligence, machine learning, deep learning and artificial neural networks. These fundamentals are important for understanding the rest of the series.
As for artificial intelligence, there is no uniform definition in the literature. AI is a branch of computer science. One possible definition is: AI deals with methods that allow a computer to solve tasks that, when solved by humans, require intelligence. One of the earliest works on AI is "Computing Machinery and Intelligence" by the English mathematician Alan Turing, published in 1950. In it, Turing poses the question of whether machines will ever be able to think. Notably, he does not focus on what intelligence actually is, but presents an empirical test, the Turing test. The test is based on the following concept:
A person (C) communicates with a computer (A) and another person (B). So that person (C) does not immediately recognize with whom they are communicating, the connection runs via a computer terminal. Person (C) now has the task of finding out which of the two communication partners is human and which is a computer. At the end of the test, person (C) must decide, guided only by the conversation, which of the two partners was the computer. If person (C) decides wrongly, the computer wins. That is the Turing test: simple and logical.
Machine learning is nothing more than a machine's ability to learn on its own. As with AI, there are many possible definitions in the literature. One of them is: the application and exploration of methods that enable computer systems to independently acquire and expand knowledge in order to solve a given problem better than before (learning). Based on this definition, machine learning means that computer systems can generate knowledge by applying certain methods, and use this knowledge to solve problems better than before.
Learning, or intelligence, covers a very wide range of procedures. But what exactly does learning mean? Learning is the acquisition of knowledge and skills, or the memorization of facts. Above all, it includes the process by which experience and insight shape attitudes and behaviors. This allows many possible definitions of learning. So far, the term has been applied only to humans and animals, which gain knowledge and skills through study. It is different with machines: they are given the opportunity to "acquire" knowledge through algorithms and, following the insights gained, to do things better. They can then recognize certain things, make diagnoses, plan a course of action and improve processes. These capabilities make machines that can learn well suited for image processing, word processing, speech processing and so on.
An algorithm is a precise, finite description, formulated in a fixed language, of a general procedure that uses elementary processing steps to solve a given task. Thus, algorithms can be implemented as programs. When an algorithm runs, certain inputs are converted into specific outputs. Algorithms play a central role in machine learning; without them, it is not possible to make decisions independently.
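As a minimal illustration of this definition (not tied to machine learning specifically), here is a classic algorithm, Euclid's method for the greatest common divisor, written as a small Python program: a finite sequence of elementary steps that converts inputs into an output.

```python
def gcd(a, b):
    # Euclid's algorithm: repeatedly replace the pair (a, b)
    # with (b, a mod b) until the remainder is zero.
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # the inputs 48 and 18 are converted to the output 6
```

The same idea scales up: a machine learning algorithm is also a fixed procedure, just one whose output is a learned model rather than a single number.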
Basics of Deep Learning
This section covers the basics of deep learning. The focus is on what constitutes deep learning, how it can be defined, what benefits the approaches of deep learning offer, and what deep learning is not.
Subarea of Machine Learning
A unified and scientifically accepted definition of deep learning does not yet exist. In this article, the following is taken as the basis for deep learning:
Deep learning is a part of machine learning, inspired not least by how the brain works and based on neural networks. The goal is to learn from existing information, combine it with new content, and learn from that again, until the machine can finally make forecasts and decisions on this basis. These decisions are continuously questioned and then confirmed or changed. All this happens without human intervention or firmly defined rules. A challenge in the field of machine learning is scaling learning algorithms so that they can recognize patterns in very large databases and model them. This problem can be addressed with deep learning algorithms.
Deep learning is based on multi-layered neural networks that gradually gain a deeper understanding of a piece of information, an image or other elements: they first identify individual building blocks, then clusters, logically link them together, and finally recognize the whole. The human visual system uses a similar approach to identify and classify objects.
Deep learning is basically about machines learning to learn. The goal, therefore, is that a machine or a piece of software can learn and think independently. Learning is generally understood as drawing inferences from information after observing a certain behavior. This produces empirical values, which in turn are linked with further information in a context; the further behavior is then observed in order to draw further conclusions. Deep learning adopts this approach by trying to identify patterns in given information, classify them, and make decisions about the meaning of this information in the current context. It tries to draw conclusions from this, similar to the human brain, and to take this information into account in forecasts and future decisions. Unlike with humans, every piece of information and every decision is questioned in deep learning. If an assumption is confirmed, the corresponding information link gains importance. If it is revised, the link is given a new, lighter weight.
This concept is familiar from Convolutional Neural Networks (CNNs). Simply put, deep learning uses multi-layered information processing and linking. On one side there is an information input, on the other an output. What ultimately comes out at the output is decided by the so-called hidden layers in between, which can themselves consist of n stacked layers. The input information can therefore be analyzed, processed and contextualized in several stages until it finally determines the output.
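The layered input-to-output flow described above can be sketched in a few lines of plain Python. The weights and layer sizes below are arbitrary illustrative values, not trained ones (a real network would learn them from data), but the structure is the point: an input vector passes through a hidden layer before it determines the output.

```python
def relu(x):
    # A common activation function: negative values become zero.
    return [max(0.0, v) for v in x]

def layer(inputs, weights, biases):
    # One fully connected layer: each output neuron is a weighted
    # sum of all inputs plus a bias.
    return [sum(w * i for w, i in zip(row, inputs)) + b
            for row, b in zip(weights, biases)]

def forward(x):
    # Input (2 values) -> hidden layer (3 neurons) -> output (1 value).
    # The weight matrices here are made-up example numbers.
    h = relu(layer(x, [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.8]], [0.0, 0.1, 0.0]))
    return layer(h, [[1.0, -1.0, 0.5]], [0.2])

print(forward([1.0, 2.0]))  # a one-element output list
```

A deep network simply stacks more such hidden layers between input and output; training then adjusts the weights so the output becomes useful.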
Differentiation With Machine Learning
To understand deep learning better, it helps to ask what deep learning does not mean. In machine learning, for example, there is the approach of feeding a machine with a lot of information and then training it. To do this, you take about 80% of the available information and define fixed rules: which parameters matter and which conclusions the machine should draw. With large amounts of information and intensive training, one hopes for better predictions and analyses, which is also known as predictive analytics. You then test the trained machine with the remaining 20% of the data and adjust the rules if necessary.
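The 80/20 split mentioned above can be sketched in plain Python. The record format is invented for illustration; in practice the data would come from a CRM export or a similar source:

```python
import random

# Hypothetical dataset of (features, label) records; field names
# and values are made up for the example.
data = [({"contacts": i % 7, "tenure_months": i % 24}, i % 3 == 0)
        for i in range(100)]

random.seed(42)        # fixed seed so the example shuffle is reproducible
random.shuffle(data)   # avoid any ordering bias before splitting

split = int(len(data) * 0.8)
train, test = data[:split], data[split:]  # 80% for training, 20% for testing

print(len(train), len(test))  # 80 20
```

The model is fitted only on `train`; `test` is held back so the evaluation measures how well the learned rules generalize to data the machine has never seen.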
A business wants to proactively improve its customer relationships with the goal of reducing the churn rate by 25%. There are different approaches to achieving this goal. First, management may decide to visit customers more often, enhance dialogue, or offer new services. These measures may sound plausible, but at this point they are based only on assumptions, empirical values and gut feeling. Another way would be to analyze the history of customer activity to find out why customers have quit in recent years. And that is where machine learning starts. The company collects all data that it thinks may be important for the machine. This could be data from the company-internal CRM system, or data from external systems, such as creditworthiness information. In this data, the company marks all customers who have canceled in the past. After that, the company decides which parameters are important for the analysis. This could be, for example, the contract period, a possible reason for cancellation, the customer advisor, the frequency of contact, etc. Based on the information and the predefined parameters, the machine analyzes the data and recognizes possible patterns and relationships between the customer, the time of cancellation and the previously defined parameters. On this basis, the machine produces forecasts of which customers will cancel in the future, with what probability, and also suggests possible reasons for cancellation. The company can intervene by confirming or revising the forecasts, causing the machine to re-analyze the data and adjust its forecasts.
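The pattern-finding step described above can be sketched very simply: given labeled customer records, compute the cancellation rate for each value of one predefined parameter (here, contract length) to surface a possible relationship. The records and field names are invented for illustration, and a real system would of course examine many parameters jointly.

```python
from collections import defaultdict

# Hypothetical labeled records: each customer is marked as having
# canceled or not, plus one predefined parameter (contract length).
customers = [
    {"contract_months": 12, "canceled": True},
    {"contract_months": 12, "canceled": True},
    {"contract_months": 24, "canceled": False},
    {"contract_months": 24, "canceled": True},
    {"contract_months": 36, "canceled": False},
    {"contract_months": 36, "canceled": False},
]

# contract_months -> [number canceled, total customers]
totals = defaultdict(lambda: [0, 0])
for c in customers:
    totals[c["contract_months"]][1] += 1
    if c["canceled"]:
        totals[c["contract_months"]][0] += 1

rates = {k: canceled / total for k, (canceled, total) in totals.items()}
print(rates)  # in this made-up data, shorter contracts cancel more often
```

A real machine learning model generalizes exactly this idea: instead of one hand-picked rate table, it searches for whatever combination of parameters best separates customers who cancel from those who stay.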
This kind of machine learning is a precursor to deep learning, but it is not the same thing. Deep learning goes one step further towards self-reliant learning and away from human intervention, and it has to be considered in isolation: as soon as one intervenes in the machine's thought patterns or way of working, one no longer speaks of deep learning.
Let us stay with the above example.
The same company wants to lower its churn rate. It uses a machine, a piece of software, that records and analyzes all activities related to customer relationship management (CRM). At the beginning there is still little data, so the analyses and forecasts are poor. Over time, the information accumulates. The machine begins to learn that a customer stays with the company longer when the consultant telephones or visits the customer more frequently. The system then learns that the customer's problems are dealt with promptly by the employees. Apparently, all of this influences customer satisfaction. The machine continues to learn how the customer is handled, at what intervals they are contacted, what has been discussed, and what problems the customer has. All this works only because the company provides all the data to the machine. No one knows exactly what conclusions the machine draws from it. There are no fixed rules and no predefined decision tables. Humans do not filter the information and do not interfere with the thought and decision patterns. Over time, and with increasing information, the machine learns and is able to make better predictions. It is not about right or wrong: today's forecast can be wrong or outdated tomorrow, because with each additional piece of information the machine re-evaluates existing information and makes new decisions. That is the big advantage of deep learning. The downside is that deep learning is basically a black box: by definition we are not allowed to intervene, so it is not clear which principles the machine bases its decisions on.
Conclusion of Part 1
The big difference between the two example variants is that in traditional machine learning, people interfere with the analysis and decision-making process, while in deep learning they only provide information and record all activities. The human leaves the analysis, forecasting and decisions to the machine.
That is deep learning. Deep learning is, so to speak, the human habit of letting go: humans have no control over and no influence on the result. For example, if you ask by which formula or pattern the deep learning machine made a decision, the only honest answer is: "We do not know exactly!" What is the right formula for the machine today could be outdated tomorrow or next week. Accordingly, deep learning stands for the continuous cycle of gathering information -> learning -> questioning -> revising -> gathering more information; much like humans actually think.