In today’s digital, increasingly interconnected and fast-paced world, the boundaries between what humans can do and what computers can do for us are blurring. While computers have outperformed the human brain at basic computing tasks such as simple arithmetic for decades, the way the human brain identifies and correlates patterns has, despite much research, not yet been matched in kind or efficiency. However, with the increased power of modern hardware, advances in programming, and improved algorithms, computers today can detect faces and the emotions shown in them, and Google Image Search can display information about the contents of a submitted image. Great progress has also been made in pattern recognition for publications, from typeset manuscripts to handwriting. To improve the pattern recognition of machines in these areas and the underlying algorithms, we can apply the principles of machine learning. The aim of this series of articles is to discuss applications of machine learning in text recognition which can be used to improve such applications. After a short introduction and classification of the topic, and a clarification of basic terms from the fields of machine learning and text recognition, various types of machine learning that can support and improve text recognition will be analyzed and evaluated.
Application of Machine Learning in Text Recognition: Revision of Basics
Machine Learning
The notion of learning is commonly defined as the process by which humans or animals gain knowledge, understanding, or abilities through study, guidance, or experience in a field. In machine learning, because of the parallels to the way living beings learn, researchers attempt to adapt the knowledge gained about animal learning behavior to machines. Interestingly enough, the parallels between the two types of learning also allow conclusions in the other direction, i.e. we can gain insights from machine learning that broaden our knowledge of biological learning. A machine learns whenever it improves its expected future performance based on inputs and outputs, data, structure, or program. Accordingly, text recognition is well suited as a field of application for machine learning.
---
Algorithmic Approaches
In machine learning, there are several practical implementations of algorithmic approaches to learning. In the following, five possible and partly interrelated algorithmic approaches will be briefly presented and explained by means of examples.
Supervised learning
In supervised learning, the goal is to learn a concept based on classified examples. An example would be pictures of means of transport with wheels: a car and a bicycle. After the learner has seen a number of classified examples, it should be able to independently formulate hypotheses about further examples presented to it and approximate the target concept as closely as possible. A possible association in this example would be the presence of wheels that have direct ground contact. The learner could then state that a motorcycle is also a means of transport, while ruling out that, for example, a watermill is one. Transferring this type of learning to a machine-processable algorithm, a “teacher” likewise provides the machine with several inputs from a topic area together with the correct function values, i.e. in our example labeled pictures: car, bicycle = means of transport with wheels. After this learning phase with some examples, the algorithm should be able to independently formulate hypotheses about newly submitted examples. Supervised learning is therefore well suited to improve the handwriting recognition of algorithms: while machine-set block letters require only the font and the uppercase and lowercase letters to be learned, a human’s handwriting follows the same principles but is highly individual in form.
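The car/bicycle example can be sketched in a few lines of Python. This is a minimal, illustrative sketch, not a real learner: the “teacher” supplies labeled attribute sets, the hypothesis is simply the set of attributes unique to the positive examples, and the attribute names (`wheels`, `engine`, and so on) are assumptions made up for the example.

```python
# Toy supervised learning: induce a hypothesis ("attributes unique to the
# concept") from labeled examples, then classify unseen objects.

def train(examples):
    """Collect attribute values that appear only in positive examples."""
    positive = {a for features, label in examples if label for a in features}
    negative = {a for features, label in examples if not label for a in features}
    return positive - negative  # hypothesis for the target concept

def classify(hypothesis, features):
    """An object matches the concept if it shows any learned attribute."""
    return bool(hypothesis & set(features))

# The "teacher" provides classified examples: means of transport with wheels?
training = [
    ({"wheels", "engine"}, True),          # car
    ({"wheels", "pedals"}, True),          # bicycle
    ({"water_wheel", "building"}, False),  # watermill
]
hypothesis = train(training)
print(classify(hypothesis, {"wheels", "engine", "two_seats"}))  # motorcycle -> True
print(classify(hypothesis, {"water_wheel", "building"}))        # watermill -> False
```

After the learning phase, the motorcycle is accepted because it shares the learned attributes, while the watermill is ruled out, mirroring the reasoning described above.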
Unsupervised learning
In contrast to supervised learning, unsupervised learning does not provide correct function values or already classified outputs from which to form associations. The algorithm still receives inputs, but no expected outputs to check against. Instead, it is left to the algorithm itself to find patterns in the data it receives and to make probable assumptions in order to build one or more models that classify the input data. All further input data is then searched for the patterns and classifications found, or the existing patterns and classifications are extended. Unsupervised learning can thus be seen as a form of learning that creates a probabilistic model from the data provided. In models in which the timing and order of the data are irrelevant, such an algorithm can be used to monitor data and detect outliers. As an example, think of the readings from the temperature sensors of a data center: the algorithm could learn a range of normal values as a pattern from all previous data and report an anomaly when an outlier, such as an unusually large temperature difference, occurs.
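The data-center example might look like the following sketch, assuming we treat “normal” as a band of mean ± 3 standard deviations learned from unlabeled readings; the sensor values are invented for illustration.

```python
# Toy unsupervised outlier detection: learn a "normal range" from
# unlabeled temperature readings, then flag values outside it.
from statistics import mean, stdev

def fit_normal_range(readings, k=3.0):
    """Derive a normal band (mean +/- k standard deviations) from raw data."""
    m, s = mean(readings), stdev(readings)
    return (m - k * s, m + k * s)

def is_anomaly(normal_range, value):
    low, high = normal_range
    return not (low <= value <= high)

# Unlabeled sensor data from the data center (degrees Celsius)
readings = [21.0, 21.5, 20.8, 21.2, 21.1, 20.9, 21.3, 21.0]
band = fit_normal_range(readings)
print(is_anomaly(band, 21.2))  # within the learned range -> False
print(is_anomaly(band, 35.0))  # sudden temperature spike -> True
```

No one told the algorithm what “too hot” means; the boundary emerges entirely from the structure of the data, which is the essence of the unsupervised approach.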
Naive Bayes
Naive Bayes classifiers are renowned for their efficiency despite their simplicity. Their probabilistic model is based on Bayes’ theorem; “naive” refers in this context to the assumption that the characteristics of an input data set are independent of one another. Although this assumption is often violated in practice, naive Bayes algorithms are nevertheless very successful and often achieve better results than supposedly more powerful algorithms. The result and the success of this approach depend strongly on the field of application. An example of applying a naive Bayes based algorithm would be the classification of an apple. As parameters, the algorithm has a classification (trained, for example, by supervised learning):
- Fruit: apple
- Circumference: ~10 cm
- Color: red
- Shape: round
Given further inputs, the algorithm will be able to produce reasonably correct classifications relatively quickly, which is a clear advantage. Among the results, however, a tomato might also appear, since its parameters likewise correspond to those of the learned classification, which in turn is a disadvantage.
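A minimal naive Bayes classifier over categorical attributes can illustrate both the strength and the tomato problem. This is a sketch under simplifying assumptions: the attributes and their values are invented for the example, and add-one (Laplace) smoothing is used to avoid zero probabilities.

```python
# Toy categorical naive Bayes: per-class counts plus Laplace smoothing.
from collections import Counter, defaultdict

class NaiveBayes:
    def __init__(self):
        self.class_counts = Counter()
        self.feature_counts = defaultdict(Counter)  # (class, attr) -> value counts

    def train(self, samples):
        for features, label in samples:
            self.class_counts[label] += 1
            for attr, value in features.items():
                self.feature_counts[(label, attr)][value] += 1

    def classify(self, features):
        best, best_p = None, 0.0
        total = sum(self.class_counts.values())
        for label, count in self.class_counts.items():
            p = count / total  # class prior
            for attr, value in features.items():
                counts = self.feature_counts[(label, attr)]
                # "naive" step: attributes treated as independent, smoothed
                p *= (counts[value] + 1) / (sum(counts.values()) + 2)
            if p > best_p:
                best, best_p = label, p
        return best

nb = NaiveBayes()
nb.train([
    ({"color": "red", "shape": "round", "size": "small"}, "apple"),
    ({"color": "green", "shape": "round", "size": "small"}, "apple"),
    ({"color": "yellow", "shape": "curved", "size": "small"}, "banana"),
])
print(nb.classify({"color": "red", "shape": "round", "size": "small"}))  # apple
```

Note that a tomato described as red, round, and small would also come out as “apple” here, since its parameters match the learned classification; that is exactly the disadvantage described above.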
Nearest Neighbor Algorithm
The k-nearest neighbor algorithm (k-NN) uses training examples classified through supervised learning. If a new object with an unknown classification is entered, the algorithm searches its nearest neighbors for the stored objects with the highest similarity and classifies the new object according to the attributes of those known objects. This approach can be extended by not only counting the matching attributes of the nearest neighbors but also weighting them. Thus, neighbors that are “further away”, i.e. share fewer attributes but have higher-weighted ones, can also serve as classification examples.
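The weighted variant can be sketched as follows, assuming numeric feature vectors, Euclidean distance, and inverse-distance weighting (closer neighbors get a stronger vote); the training points and labels are made up for illustration.

```python
# Toy distance-weighted k-NN over numeric feature vectors.
import math
from collections import defaultdict

def knn_classify(training, query, k=3):
    """training: list of (feature_vector, label); vote weight = 1/distance."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in training
    )
    votes = defaultdict(float)
    for d, label in dists[:k]:
        votes[label] += 1.0 / (d + 1e-9)  # closer neighbors weigh more
    return max(votes, key=votes.get)

training = [
    ((1.0, 1.0), "A"), ((1.2, 0.9), "A"),
    ((5.0, 5.0), "B"), ((5.1, 4.8), "B"),
]
print(knn_classify(training, (1.1, 1.0)))  # surrounded by "A" examples
```

With k=3 the query always picks up at least one neighbor from the other class, but the inverse-distance weighting keeps that distant vote from overruling the close ones.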
Instance-Based Learning
In instance-based learning (IBL), the term “instance” is used for what other methods refer to as examples, patterns, or cases, in order to avoid confusion in terminology. IBL has great similarities to supervised learning and derives from the nearest neighbor concept. Here, too, the source for the classification of an instance is an external input. Unlike supervised learning, however, IBL algorithms do not expect a specific input pair but passively use external processes or data that already exist. The instances are described by attribute-value pairs. These attribute-value pairs and their similarities or differences are stored as learned relationships and can be used for future predictions (classifications), or be extended by them. Compared to other approaches, this one stands out for its general simplicity and its straightforward way of analyzing attribute pairs. A consequent disadvantage can be missing relationships between attribute pairs that actually belong together, which has a negative effect on the precision of the output or prediction. An application example for IBL can be found in handwriting recognition.
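A minimal IBL sketch for the handwriting case might look like this: instances arrive passively as attribute-value pairs and are simply stored; a new input is classified by the stored instance with the most matching pairs. The stroke attributes (`strokes`, `closed_loop`, `vertical`) are hypothetical features invented for the example, not a real handwriting feature set.

```python
# Toy instance-based learner: store attribute-value instances as they
# arrive, classify new inputs by the most similar stored instance.

class IBLearner:
    def __init__(self):
        self.instances = []  # list of (attribute-value dict, label)

    def observe(self, features, label):
        """Passively store instances supplied by an external process."""
        self.instances.append((features, label))

    def classify(self, features):
        def similarity(stored):
            # similarity = number of matching attribute-value pairs
            return sum(1 for a, v in features.items() if stored.get(a) == v)
        stored, label = max(self.instances, key=lambda inst: similarity(inst[0]))
        return label

learner = IBLearner()
learner.observe({"strokes": 1, "closed_loop": False, "vertical": True}, "l")
learner.observe({"strokes": 1, "closed_loop": True, "vertical": False}, "o")
print(learner.classify({"strokes": 1, "closed_loop": True, "vertical": False}))  # "o"
```

Because only raw pair matches are counted, relationships between attributes that belong together (say, loop shape and stroke direction) are invisible to the learner, which is the precision limitation noted above.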
Conclusion of Part I
This revision of the basics should be helpful for newcomers without a deep technical background. We have covered the basic classification of ways of learning, which is essentially modeled on human learning and can be converted into logical steps, i.e. into programming. Newcomers may also want to read our separate series on deep learning. In the next part of this series, we will discuss the types of text recognition and the applications of text recognition.