ARTIFICIAL INTELLIGENCE

TOOLKIT

The Application of AI made Easy!

Build and Apply Machine Learning Without Any Programming!
Supervised, Unsupervised & Reinforcement Learning.

MS Windows & Open Source Software
FREE FOR NON-COMMERCIAL USE!



Part of The Intelligent Enterprise Group


AI Toolkit







Decision AI Professional
Software toolkit for building and using state-of-the-art Machine Learning models (easy Training, Testing and Inference) and for building Intelligent Systems (several AI models working together). Supervised Learning + Unsupervised Learning + Reinforcement Learning. Several built-in tools and apps for editing and transforming audio, images and large text files, and for Face Recognition, Speaker Recognition, Fingerprint Recognition, etc.
VoiceData
VoiceData can be used to generate data for training Automatic Speech Recognition (ASR) models in many languages. The generated data includes both the transcription files and the synchronized audio (the input text is read by a machine-trained, very human-sounding synthesized voice; male or female). + Text Normalization + Text Recognition.

DocumentSummary
Can be used to create a short summary of any text document, such as plain text, PDF or HTML files, on your computer or on the internet. Uses Artificial Intelligence (AI) powered language models. Able to take into account specialized words specific to your discipline (law, medicine, chemistry, etc.).


VectorML
Bitmap-to-vector (SVG) conversion driven by machine learning, and a fast SVG viewer with presentation mode. Combined GPU and CPU acceleration.




FacilityNetworkML
Process design aided by machine learning. Define your connected facilities (departments, work cells, service stations, etc.) and the software will guide you in sizing your network (number of servers/employees, waiting time, queues, etc.).



Open Source Software


VoiceBridge
VoiceBridge is an open source, state-of-the-art Speech Recognition C++ Toolkit.







Knowledge




Learn about the application of Artificial Intelligence and Machine Learning from the book "The Application of Artificial Intelligence | Step-by-Step Guide from Beginner to Expert", Springer 2020 (~400 pages) (ISBN 978-3-030-60031-0). Unique, understandable view of machine learning using many practical examples. Introduces AI-TOOLKIT, freely available software that allows the reader to test and study the examples in the book. No programming or scripting skills needed! Suitable for self-study by professionals, also useful as a supplementary resource for advanced undergraduate and graduate courses on AI. More information can be found at the Springer website: Springer book: The Application of Artificial Intelligence.


@book{Somogyi_2021, doi = {10.1007/978-3-030-60032-7}, url = {https://doi.org/10.1007%2F978-3-030-60032-7}, year = 2021, publisher = {Springer International Publishing}, author = {Zolt{\'{a}}n Somogyi}, title = {The Application of Artificial Intelligence} }
The Application of Artificial Intelligence | Step-by-Step Guide from Beginner to Expert

Learn About Machine Learning and AI



It is not always clear to people, especially if they are new to the subject, what we mean by machine learning and when and why we need it. A lot of people are aware of artificial intelligence (AI) from science fiction but they may not really understand the reality and the connection to machine learning. This article will explain in clear lay terms what machine learning and AI are, and it will also introduce the three major forms of machine learning: supervised, unsupervised and reinforcement learning. The aim is that after reading this article you will understand what, exactly, machine learning is and why we need it.

Machine learning is a process in which computers learn and improve in a specific task by using input data and some kind of rules provided to them. Special algorithms, based on mathematical optimization and computational statistics, are combined together in a complex system to make this possible. Artificial intelligence is the combination of several machine learning algorithms which learn and improve in several connected or independent tasks at the same time. At present, we are able to develop parts of a real artificial intelligence but we cannot yet combine these parts to form a general artificial intelligence which could replace humans entirely.

We could also say that learning in this context is the process of converting past experience, represented by the input data, into knowledge.

There are several important questions that arise: To which kind of tasks should we apply machine learning? What is the necessary input data? How can the learning be automated? How can we evaluate the success of the learning? Why don’t we just directly program the computer with this knowledge instead of providing the input data?

Let us start with answering the last question first. There are three main reasons why we need machine learning instead of just using computer programming:
  1. After a computer program is made it is difficult to change it every time the task changes. Machine learning adapts automatically to changes in the input data/task. As an example, after software has been programmed to filter out spam e-mails, it cannot handle new types of spam without re-programming. A machine learning system will adapt automatically to the new spam e-mails.
  2. If the input is too complex, e.g. with unknown patterns and/or too many data points, it is not possible to write a computer program to handle the task.
  3. Learning without programming may often be very useful.
Besides the above, it is of course also a human desire to try to make an artificial intelligence, towards which we are evolving.

In order to be able to answer the other questions let us first look at a typical machine learning process.

First we need to decide which task to teach to a machine learning model, considering the three reasons mentioned above. Next we need to decide which data and rules to feed to our machine learning model. Then we need to choose a machine learning model, train the model (this is when the learning takes place) and test the model to see if the learning is correct. Collecting the data, choosing the model, training and testing are all recursive tasks, because if the model cannot be adequately trained then we often need to change the input data, add more data or choose another machine learning model.

Machine Learning tasks can be classified into three main categories:
  1. Supervised Learning
  2. Unsupervised Learning
  3. Reinforcement Learning
We speak about supervised learning when the input to the machine learning model contains extra knowledge (supervision) about the modeled task in the form of a label (identification). For example, in the case of an e-mail spam filter the extra knowledge could be a label indicating whether each e-mail is spam or not. The machine learning algorithm then receives a collection of e-mails labeled spam or not spam, and through this we supervise the learning algorithm. In the case of a machine learning based speech recognition system, the label is a sequence of words (transcribed sentences). Another example is the labeling of a collection of images of animals for an animal identification task. With the extra knowledge of which picture contains which animal, the learning algorithm is supervised.
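
To make the idea of a label concrete, here is a minimal sketch in Python (using the scikit-learn library, which is separate from the AI-TOOLKIT; the e-mails and labels are invented for illustration). The learning algorithm receives each example together with its spam/not-spam label:

    # Minimal supervised learning sketch: spam filtering with labeled examples.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    emails = [
        "win a free prize now",        # spam
        "meeting agenda for monday",   # not spam
        "cheap pills free offer",      # spam
        "project report attached",     # not spam
    ]
    labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam: this is the 'supervision'

    vectorizer = CountVectorizer().fit(emails)
    model = MultinomialNB().fit(vectorizer.transform(emails), labels)

    # A new, unseen e-mail is classified using what was learned from the labels.
    print(model.predict(vectorizer.transform(["free prize offer"])))  # -> [1] (spam)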

There are already many real-world supervised learning applications and many more will be added in the future. Some of the existing applications are as follows:
  • E-mail spam detection based on a collection of messages labeled spam and not-spam.
  • Voice recognition based on a collection of labeled voice recordings. The labels identify the person who speaks.
  • Speech recognition (part of comprehension) based on a collection of labeled voice recordings where the labels are the transcription of sentences.
  • Automatic image classification based on a collection of labeled images.
  • Face recognition based on a collection of labeled photos. The labels identify which photo belongs to which person.
  • Determining whether a patient has a disease or not based on a collection of personal data (temperature, blood pressure, blood composition, x-ray photo, etc.).
  • Predicting whether a machine (auto, airplane, manufacturing, etc.) will break down (and when it will break down – for predictive maintenance) based on a collection of labeled data from past experience.
Remember that we speak about supervised learning when the input to the machine learning model contains extra knowledge (supervision) about the modeled task in the form of a label. When we do not have this extra knowledge or label, we speak about unsupervised learning. The aim of unsupervised learning is the identification of this extra knowledge or label. In other words, the goal of unsupervised learning is to find hidden patterns in the data, classify or label unlabeled data, and use this to group similar items (with similar properties and/or features) together, putting dissimilar items into different groups. Another name for unsupervised learning is clustering (grouping).
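
As a small illustration, the following Python sketch (scikit-learn; the data points are invented) lets k-means, one common clustering algorithm, discover two groups in unlabeled data:

    # Minimal unsupervised learning sketch: k-means clustering of unlabeled points.
    import numpy as np
    from sklearn.cluster import KMeans

    # Two clearly separated groups of 2-D points (e.g., shoppers described by
    # two numerical features). Note that no labels are provided.
    data = np.array([[1.0, 1.1], [0.9, 1.0], [1.2, 0.8],
                     [8.0, 8.2], [7.9, 8.1], [8.3, 7.9]])

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(data)
    print(kmeans.labels_)  # e.g. [0 0 0 1 1 1]: the discovered grouping (labels)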

There are already many real-world unsupervised learning applications and many more may be added in the future. Some of the existing applications are as follows:
  • Grouping shoppers together based on past purchases and other personal properties; for example, as part of a recommendation system.
  • Market segmentation based on chosen properties, e.g., for marketing applications.
  • Segmentation of a social network or a group of people, e.g., for connecting people together (as on a dating site).
  • Detecting fraud or abuse (by applying unsupervised learning to better understand complex patterns in the data).
  • Grouping songs together based on different properties of the music, e.g., on streaming platforms.
  • Grouping news articles together depending on the contents or keywords, e.g., as part of a news recommendation application.
We could define reinforcement learning as a general purpose decision making machine learning framework used for learning to control a system. There are several important keywords in this definition which need some explanation. General purpose means that reinforcement learning can be applied to an unlimited number of different fields and problems; from very complex problems such as driving an autonomous vehicle to less complex problems such as business process automation, logistics, etc. Decision making means carrying out any kind of decision/action depending on the specific problem, for example, accelerating a car, taking a step forward, initiating an action, buying stocks, etc. Controlling a system means taking actions in order to reach a specific goal, where the specific goal depends on the problem (e.g., reaching a destination, making a profit, staying in balance, etc.).
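
The following minimal tabular Q-learning sketch in Python illustrates these keywords on a made-up task: the 'system' is a walk along five positions, the 'action' is a step left or right, and the 'goal' is reaching the last position (everything here, including the parameters, is an illustrative assumption):

    # Minimal reinforcement learning sketch: tabular Q-learning on a toy task.
    import random

    n_states, actions = 5, [-1, +1]        # positions 0..4; step left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration

    for episode in range(200):
        s = 0
        while s != n_states - 1:
            # Explore sometimes, otherwise take the currently best known action.
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda a: Q[(s, a)])
            s_next = min(max(s + a, 0), n_states - 1)
            reward = 1.0 if s_next == n_states - 1 else 0.0  # reward at the goal
            # Q-learning update: move Q(s, a) towards reward + discounted future value.
            best_next = max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
            s = s_next

    # The learned policy: the best action in each non-goal state (expected: all +1).
    print([max(actions, key=lambda a: Q[(s, a)]) for s in range(n_states - 1)])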

There are currently many real-world reinforcement learning applications and no doubt more will be developed in the future. Some of the existing applications are as follows:
  • Self-driving cars. A control system based on reinforcement learning is used to adjust acceleration, braking and steering.
  • Automated financial trading. The reward is based on the profit or loss for each trade. The reinforcement learning Environment is built using historical stock prices.
  • Recommendation systems. The reward is given when, for example, a user clicks on an item. The model is improved with real-time learning, or the recommendation system is trained on historical data.
  • Traffic light control. 
  • Logistics and supply chain optimization.
  • Control and industrial applications, e.g., for optimizing energy consumption, efficient equipment tuning, etc.
  • Optimizing treatment policies or medication dosage in healthcare.
  • Advertising optimization.
  • Various types of automation.
  • Robotics.
  • Automated game play.
This article is a slightly modified excerpt from the book “The Application of Artificial Intelligence”. If you are interested in the subject then it is strongly recommended to read the book, which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for info about pre-ordering the book.



The Future of Artificial Intelligence

There are several important questions that arise if we think about Artificial Intelligence (AI) today. What is Artificial Intelligence? Will AI replace humans in their jobs? Is it dangerous? What about killer robots, ethical issues, etc.? There is a lot of news about AI lately, but the information is often misleading or even wrong. The aim of this article is to answer all of these questions based on the experience and future vision of the author.

Google, one of the leading technology giants in the field of AI, recently explained that they are able to make the brain of a mouse today, but I think that even that is an exaggeration. It will still take many years to build an AI which closely mimics the human brain and body. You will understand why after reading this article.

But the answer to the question of whether the current state of AI is useful is of course yes, very useful. We can use the results of many years of AI research in endless useful applications in all business sectors. The current state of AI helps humans to do their jobs better and makes things possible which otherwise would not be possible, such as developing new vaccines for deadly diseases, improving business processes, helping humans with disabilities, discovering new things, gaining new insights from all kinds of data, and so on, in all business sectors. Current AI algorithms can interpret all kinds of data such as numbers, text, images, video, audio, etc. A whole new range of applications is possible and only your imagination is the limit for finding useful applications in your business sector.

In the next sections I will explain in simple terms the current state and my vision of the most important building blocks of a real human-like AI system: the Memory, CPU, Sensors and the whole interconnected System, and also how it will Learn and use the learned information.

We will use the term AI throughout this article because the aim is to predict the future; when we speak about today’s capabilities we use the term ‘machine learning’, since we do not yet have an AI, but rather several types of machine learning models, the building blocks of an AI. Read the article ‘Learn About Machine Learning and AI’ first if you are new to the subject!

Memory

First of all, let me begin with a simplified explanation of how the human body and brain work and why it will be extremely difficult to make an AI (robot) in the future which can replace a human even partly. One of the mistakes AI researchers make today is that they do not take into account the enormous amount of different types of interconnected memory (and stored information) a human body and brain have. Even brain research is still guessing about many things, but there are more and more signs (proven by experiments) that we store different types of things in our brain (memories). We store images, smells, sounds, etc. There is an enormous amount of information in a well developed human brain and body. Yes, the body too, because our whole body has different types of interconnected memory; just think of, for example, what we call muscle memory. One of the reasons why we are unable to make even an attempt to replicate a human is the lack of flexible, very fast memory in huge amounts. We are not talking about megabytes or gigabytes of memory but terabytes of interconnected memory. The technology needed for this kind of memory is still very far away. It is very likely that this type of memory will not be like the memory modules we use in our computers today, but some kind of chemical and biological substance (like the human body and brain have). There is active research going on in this field as well.


Sensors

The second reason why it is extremely difficult to replicate a human today with an AI is the many types of interconnected sensors the human body has. Just think about the eyes (vision), the ears (hearing), the nose (smell), touch (all over the human body), temperature, etc.

We are at a very good level with vision capabilities, but even vision is much more complex in reality, because a human eye (and the connected brain) can sense in several dimensions. High resolution is again a very important element, because a single 3D image the human eye sees consumes a huge amount of memory, and there is an endless stream of such images (a kind of video). We can mimic the 3D capabilities of human vision with two cameras today, but a single human eye can already see in 3D (or more), and the image is used by our brain to estimate depth, distances, etc. Does one human eye contain several ‘cameras’? Most probably yes! Most of today’s AI algorithms are limited to 2D grayscale images because of the computing power and memory needed to analyze images! For example, most AI algorithms that can detect an object in an image or video first convert the images to grayscale before processing.
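
As a small illustration of that preprocessing step, here is one common way to reduce an RGB image to a grayscale one in Python (a sketch using the standard BT.601 luminance weights; this is a generic example, not the method of any specific detector):

    # Convert an RGB image (an H x W x 3 array) to grayscale (an H x W array).
    import numpy as np

    def to_grayscale(rgb):
        # 0.299/0.587/0.114 are the classic ITU-R BT.601 luminance weights.
        return rgb[..., 0] * 0.299 + rgb[..., 1] * 0.587 + rgb[..., 2] * 0.114

    image = np.random.randint(0, 256, size=(4, 4, 3))  # dummy 4x4 RGB image
    print(to_grayscale(image).shape)  # -> (4, 4): one channel instead of three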

All sensors in a human body are active in parallel all the time (we see, we hear, we smell, etc.), and they are interconnected by a very clever system containing the nerves in our body (yes, the whole body and not only the brain). Just think of an object you are approaching and how you decide whether you like that object. You take several things into account at the same time (the visuals, the smells, the sounds, etc.) and you decide automatically, in an instant, whether you like that object or not.

The human body and brain also store much of the information they receive, which takes up even more memory and logic. For example, how is a smell stored and recalled in your brain? I do not think that there is anybody who can answer this question yet. The human brain must have some kind of very efficient ‘internal language’ which is used to describe these things.

CPU & GPU

We have the capability today to use computers with several CPUs containing separate computing units (cores). But how many ‘CPU units’ does the human body and brain have, how fast are they, and how do they work seamlessly in parallel? You can imagine that this question is very similar to the memory issue above. The human body and brain process a huge amount of information in parallel (from many sensors in the human body, from memory, etc.) and all of this information is used instantly to make decisions. The information needs to be found in the different types of human memory, and it needs to be processed and combined. We are very far away from the speed, storage capacity, parallel processing, logic, etc. needed to replicate the human brain and body. The ‘wonders’ of nature and the human body still hold a lot of secrets for us, even after we have studied them for thousands of years!

One of the positive developments today is the recent appearance of extra computing power and extra AI capabilities in video cards, led by NVIDIA Corporation. The special ‘CPU’ in a video card is called a graphics processing unit (GPU). This is a completely logical evolution, and I expect the appearance of the same capabilities in, e.g., a separate audio processing unit (and other sensor systems). Computer systems will first grow in this way, and only in the far future will they be integrated more closely and become much smaller in size. I also expect that in the very far future many things will be replaced by some kind of biological and chemical substances with human-like capabilities.

But for the time being there is an urgent need for standardization of current and future developments in order to make these systems (CPU, GPU, memory, etc.) much better coordinated and easier to use (software development).

One last note about GPUs. The introduction of GPU accelerated calculations (especially for image processing) is of course a very welcome and great evolution! I think, however, that in the far future CPUs and GPUs will evolve towards each other and finally become one unit which can handle all calculations. In the near future we may still see the introduction of separate specialized processing units for different tasks.

The Whole System

A human body and brain contain an interconnected system which senses, processes, stores and recalls all of the information a human experiences. Many types of information are used at the same time to make decisions, such as that a glass can be used to drink, that you can open a door, what to do if someone is calling you, what to say if someone is asking a question, etc. The basic building blocks of this interconnected system are very simple, but the whole system together is extremely complex. Today’s AI systems do not take this into account because they either over-complicate the basic building blocks and/or incorrectly build the system as a whole, e.g., not taking into account the distributed different types of memory and the huge amount of stored information, the stimuli and input from several sensors, etc.

This is of course also because we do not yet have the technology and knowledge necessary for building a real AI system, and because AI research is trying to simulate human-like AI patterns.

So how will an AI similar to the human body and brain function and work? I think that there will be a very simple logic which connects the different types of memory, sensors, CPU, etc., and the complexity will come not from the basic building blocks (which will be very simple) but from the very cleverly interconnected and coordinated whole system. The AI will store all necessary information, like the human brain and body (and possibly even much more), and will be able to recall that information (even much of it in parallel) according to some kind of stimuli.

There are a lot of AI algorithms out there, and many people ask which algorithm is the best, which one they should use and which algorithm is the future. I have already given the answer to this question with the statement above that ‘there will be a very simple logic which connects the different types of memory, sensors, CPU, etc.’. Most of the complexity in current algorithms, and the diversity of the algorithms, is needed because of the limited memory, limited CPU and our limited knowledge about the whole system. The most efficient way to work today is to select the appropriate algorithm for a specific task. One of the mistakes often made by AI experts and researchers today is that they try to use complex neural network based AI algorithms for every task, despite the fact that for many tasks there are other, much simpler and faster algorithms which may even give better accuracy.

Learning and Ethics

We already know a lot about how learning happens. We can learn from existing knowledge, which we may call supervised learning (e.g. someone tells us or shows us that we can drink from a glass). We can learn unknown things, which we may call unsupervised or reinforcement learning (we learn by trial and error). Both types of learning are very important, and we cannot have a real AI system which cannot combine both of these learning strategies. An AI could of course learn with a trial and error strategy only, but that would be very inefficient and even dangerous because of the unknown direction in which the AI would evolve. Learning in reality is of course more complex than just these two learning strategies, and it involves a range of strategies which are some kind of combination of the two.

A real human-like AI must thus combine several learning strategies and automatically choose and use the one which is most appropriate in the given circumstances.

There have been a lot of failures in the automated learning/training of AI systems recently. Several AI giants announced, launched and then very soon recalled such AI systems. The reasons for this are simple and twofold: current AI algorithms are not intelligent enough, and just as a child can learn bad things, an AI algorithm can also learn bad things. For example, put a child in the wrong environment and she/he will learn how and what people speak in that environment; put a natural language processing AI system on the internet, where it can be accessed and influenced by anyone, and it will learn unexpected things.

And with this last thought we have also arrived at the topic of Ethics. Ethics is of course something that was invented by humans, and it must be learned! The good news is that good ethics can be learned, as we can learn anything else, but the bad news is that, like anything else in this world, all good things can be used with bad intentions. An AI system can be used to save or improve lives, but it can also be used to destroy them. This is the same question as asking whether to sell guns or computers to bad people. Asking whether we will be able to prevent bad people from accessing highly intelligent AI systems in the future is the same as asking whether we can prevent bad people from accessing guns or the internet. The balance of the world we live in depends completely on us, and this is as important as evolution itself. There cannot be evolution without a good balance. If evolution went much faster than our ability to hold the right balance, the world would be destroyed by the technology invented by humans.

Conclusion

The answer to the question of whether the current AI algorithms are useful is of course yes, very useful, but we are very far from an all-in-one real AI system which can replace a human. We can of course use the results of many years of AI research in endless useful applications in all business sectors. The current state of AI helps humans to do their jobs better and makes things possible which would not be possible otherwise, such as developing new vaccines for deadly diseases, improving business processes, helping humans with disabilities, discovering new things, gaining new insights from all kinds of data, and so on, in all business sectors. Current AI algorithms can interpret all kinds of data such as numbers, text, images, video, audio, etc. A whole new range of applications is possible and only your imagination is the limit for finding useful applications in your business sector. AI systems will start to speed up evolution very soon! It has already started!


Would you like to learn more about machine learning and AI? If you are interested in the subject then it is strongly recommended to read the book “The Application of Artificial Intelligence”, which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for info about pre-ordering the book.



AI in Root Cause Analysis

Are you still using techniques like the 5 Whys or the fishbone diagram, or even guessing, for root cause analysis? Would you like to identify the root cause instantly and with high accuracy? If yes, then read on!

Detecting anomalies, and finding the root cause of the anomaly, is an important application in the field of machine learning in nearly all sectors and disciplines. An anomaly may mean different things in different applications, for example, fraudulent use of credit cards or suspicious transactions in the financial sector, a specific disease or the outbreak of a disease in healthcare, the signs of intrusion in a computer network, a fault in a production system or product in the manufacturing industry, an error in a business process, etc.

Machine learning discovers patterns in the data and therefore it is well suited for discovering unusual patterns which are the signatures of anomalies.

Anomaly detection can be combined with root cause analysis (RCA) in a machine learning (ML) model, or ML anomaly detection may assist traditional RCA techniques in finding the root cause. Machine learning automated RCA may also be useful for the following reasons:
  • Performing complex RCA often requires several domain experts, who may not always be available or may be expensive or difficult to deploy. In this case ML automated RCA can also be considered a knowledge management tool.
  • ML automated RCA may save a lot of time, because traditional RCA projects often take several days.
There are three main types of machine learning models (methods) which can be used for anomaly detection and/or RCA:
  • Supervised learning,
  • Unsupervised learning, and
  • Semi-supervised learning.
For supervised learning we need labeled (classified) data. In the case of anomaly detection, the data must contain data records labeled as ‘normal’ and data records labeled as ‘anomaly’. We can combine anomaly detection with root cause analysis by defining several types of anomaly labels which all identify a specific root cause. For example, in the case of the root cause analysis of a defective product in a production process, we could define root causes as “wrong material”, “handling error”, “machine error”, etc. instead of just using “wrong product”.
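
Here is a minimal sketch of this idea in Python (scikit-learn; the features, values and root-cause labels are invented for illustration): the labels name root causes instead of a generic 'anomaly', so the classifier performs detection and RCA in one step.

    # Sketch: anomaly detection + root cause analysis as multi-class classification.
    from sklearn.ensemble import RandomForestClassifier

    # Each record: [temperature, pressure, vibration]; the label names the root cause.
    X = [[70, 1.0, 0.2], [72, 1.1, 0.3], [95, 1.0, 0.2],
         [71, 2.5, 0.3], [70, 1.1, 1.9], [96, 1.1, 0.2]]
    y = ["normal", "normal", "machine error",
         "wrong material", "handling error", "machine error"]

    model = RandomForestClassifier(random_state=0).fit(X, y)
    print(model.predict([[94, 1.0, 0.25]]))  # -> ['machine error'] (overheating)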

Supervised machine learning based anomaly detection and root cause analysis is a very powerful technique, but collecting and labeling the data is a lot of work and must be done by domain experts. You may also consider using a semi-supervised method (see later) in order to decrease the amount of work needed for labeling the data.

One of the most important parts of anomaly detection and root cause analysis is the data collection and feature selection phase. It is very important to select the right features which can be used to distinguish between normal and anomalous phenomena. In most cases (especially if root cause analysis is integrated) domain experts must design the data collection and feature selection process, and they also must take care of the labeling of the data records if supervised learning models are used.

Usually a traditional structured and systematic approach is used to investigate anomalies and their root causes (when appropriate) and to determine the best set of features to use in the input data for the machine learning model.

It is often possible to simulate a real situation or environment and introduce all forms of potential anomalies and record the attributes (responses) of the system. It is much easier and faster to study the signatures of anomalies in this way than to monitor real systems. Try to think about a solution for modeling your real situation in some way and try to simulate potential anomalies.
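
A hypothetical sketch of that idea in Python (all values invented): simulate normal behavior and an injected anomaly, and record labeled responses that can later train a supervised model.

    # Sketch: simulating a system to generate labeled anomaly data.
    import numpy as np

    rng = np.random.default_rng(0)
    normal = rng.normal(loc=70.0, scale=2.0, size=(1000, 1))  # normal temperature
    anomaly = rng.normal(loc=95.0, scale=3.0, size=(50, 1))   # injected overheating

    X = np.vstack([normal, anomaly])
    y = np.array([0] * 1000 + [1] * 50)  # 0 = normal, 1 = anomaly
    # X and y can now be fed to a supervised anomaly detection model.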


This article is a slightly modified excerpt from the book “The Application of Artificial Intelligence”. If you are interested in the subject then it is strongly recommended to read the book which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples step-by-step by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for information about pre-ordering the book.



AI in Human Resources

In this article, aimed at beginners in machine learning, you will learn how to use the machine learning (ML) tools in the AI-TOOLKIT to make difficult HR decisions automatically. In this simple example we will train an ML model which can be used to predict whether an employee will leave the company. We could use the same principles to predict the reason for leaving, or whether it is worthwhile to offer a promotion to an employee. The article will also explain and compare some of the ML models available in the AI-TOOLKIT.

You can apply the same principles to any other sector or business case; for example, you could predict whether a client will leave, why they will leave, or whether it is worthwhile to offer a discount, etc.

The Dataset

The dataset contains 15,000 rows (records) and 10 columns (variables or features). You can download the data at the end of this article. If you are doing this in your company, you should first study which variables most influence the specific business case (in this example an HR problem) and select the variables accordingly. We call this step Feature Engineering. Selecting too few or too many variables or features (too little knowledge, or unneeded noise) will result in a less useful or less accurate ML model. The accuracy of the trained ML model depends mainly on the input data (quantity and quality) and also on the parameters of the models.

The 10 columns (features) are as follows:
  • Satisfaction Level (0-1)
  • Last evaluation (0-1)
  • Number of projects (integer)
  • Average monthly hours (integer)
  • Time spent at the company (integer)
  • Whether they have had a work accident (0-no, 1-yes)
  • Whether they have had a promotion in the last 5 years (0-no, 1-yes)
  • Department name (text)
  • Salary (text: low, medium, high)
  • Whether the employee has left (0-no, 1-yes)
Depending on which variable (column) you choose as decision variable you can train a model for different purposes, for example, to predict whether the employee will leave in the future, whether it is worthwhile to offer a promotion, etc.

In this example we will choose the ‘Left’ (whether the employee has left) column as decision variable in order to predict if an employee will leave or not.

satisfaction level | last evaluation | number of projects | average monthly hours | time spent at company | work accident | left | promotion last 5 years | sales | salary
0.38 | 0.53 | 2 | 157 | 3 | 0 | 1 | 0 | 7 | 1
0.8 | 0.86 | 5 | 262 | 6 | 0 | 1 | 0 | 7 | 2
0.11 | 0.88 | 7 | 272 | 4 | 0 | 1 | 0 | 7 | 2
0.72 | 0.87 | 5 | 223 | 5 | 0 | 1 | 0 | 7 | 1
... | ... | ... | ... | ... | ... | ... | ... | ... | ...

Training the AI Model

There are different types of machine learning models available in the AI-TOOLKIT. Each model/algorithm has its advantages and disadvantages. Some algorithms are well suited to one type of data but not to another. Neural network based models can be tuned so that they can be applied to all kinds of problems, but at the cost of complexity (often many layers of different types, each with many nodes) and processing speed (more layers and nodes mean more processing time and more computer resources). Furthermore, neural networks also need much more data than other types of machine learning models. Therefore, it is worthwhile to choose the machine learning model you want to use in a clever way!

Let us choose the SVM model for this example.

Support Vector Machine (SVM) model

You can easily import your numerical delimited data into the AI-TOOLKIT. The SVM model has several parameters, which can be automatically optimized by the built-in parameter optimization module.

Follow the next steps in order to train the ML model:
  1. Create a new AI-TOOLKIT project (Open AI-TOOLKIT Editor + New Project).
  2. Insert the SVM model template (Insert ML Template + choose Supervised Learning + Support Vector Machine).
  3. Save the project.
  4. Download the data (at the end of the article) and change the extension to ‘.tsv’. Import the data into a new AI-TOOLKIT database (on the DATABASE tab: Import Data Into Database + follow the instructions on the screen; it is important that you indicate the correct number of header rows (non-numerical) and the zero-based index of the decision column (6 in this example)). Use ‘hr_data’ as the table name.
  5. Save the database into the same folder as the project is saved. Use the name ‘hr.sl3’.
  6. Run the SVM parameter optimization module to find the optimal parameters (SVM Parameter Optimizer on the AI-TOOLKIT tab). You may stop the optimization earlier if you see a high enough accuracy or just skip the optimization and use the values shown below.
  7. Adjust the SVM model template as shown below (some of the unneeded parameters and comments are not shown). The optimal parameters are filled in.
model:
    id: 'ID-EFnMmvBNWr'
    type: SVM
    path: 'hr.sl3'                # the database saved in step 5
    params:
        - svm_type: C_SVC         # C-support vector classification
        - kernel_type: RBF        # radial basis function kernel
        - gamma: 15.0             # kernel parameter from the optimization (step 6)
        - C: 281.8                # regularization parameter from the optimization (step 6)
    training:
        - data_id: 'hr_data'      # table used for training
        - dec_id: 'decision'
    test:
        - data_id: 'hr_data'      # table used for testing
        - dec_id: 'decision'
    input:
        - data_id: 'input_data'   # table read during inference
        - dec_id: 'decision'
    output:
        - data_id: 'output_data'  # predictions are written here
        - col_id: 'decision'
  8. Save the project.
  9. Train the AI model (AI-TOOLKIT tab).
After the training has finished you will see the performance evaluation results:

Performance Evaluation Results

Confusion Matrix [predicted x original] (number of classes: 2):

              (0)     (1)
    (0)     11427       0
    (1)         1    3571

Accuracy: 99.99%
Error: 0.01%
Cohen's Kappa: 99.98%

                  (0)        (1)
    Precision   100.00%     99.97%
    Recall       99.99%    100.00%
    FNR           0.01%      0.00%
    F1          100.00%     99.99%
    TNR         100.00%     99.99%
    FPR           0.00%      0.01%
The accuracy of the trained model is very good (nearly 100%): it makes only one mistake in about 15,000 cases. In this simple example we will not go into more detail about all the performance measures, nor discuss the so-called generalization error (testing with unknown data), because this is not the aim of the example.

DeepAI Educational Neural Network Model

The deep neural network model in DeepAI Educational is based on a semi-automatic multi-layer, multi-node neural network implementation. The software designs the neural network semi-automatically; you only need to define the number of layers and the nodes per layer (you can of course adjust some more parameters, but most of the time this is not necessary). DeepAI Educational does not use complex state-of-the-art neural network architectures or extensive model performance evaluation, but it often provides a good result. For real world problems use the machine learning models and tools in AI-TOOLKIT Professional.

DeepAI uses the SSV data file format (delimited text file). Adjust the settings in the ‘Settings/AI’ tab according to the following if needed:
  • Number of iterations: 10
  • Learning rate: 0.01
  • Regularization rate: 0.001
  • Batch size: 10
  • Activation Function: TANH
  • Regularization Function: NONE
  • Test data %: 10
  • Treat data as X-Y Classification / Regression
Download the data (at the end of the article). You can load an external training data file with the 'Load Data File (SSV)' command. The data must be in the AI-TOOLKIT SSV data file format (.ssv), which is tab delimited, has no header row, contains only numbers, and has the decision variable (classes in the case of classification, continuous numbers in the case of regression) in the first column.

Since the decision variable must be in the first column for DeepAI, we need to open the data file in MS Excel and move the decision column (‘Left’) to the first position. We must also remove the first header row! When you are ready, save the file in tab delimited format with the ‘.ssv’ extension.
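
If you prefer scripting to manual editing, the same preparation can be done with a short pandas script (a sketch; it assumes the downloaded file name from the end of this article and a decision column named 'left'):

    # Sketch: prepare the AI-TOOLKIT SSV file (decision column first, no header,
    # tab delimited). Reading the .xls file requires the xlrd package.
    import pandas as pd

    df = pd.read_excel("HR_COMMA_SEP_U.XLS")
    cols = ["left"] + [c for c in df.columns if c != "left"]  # decision column first
    df[cols].to_csv("hr_data.ssv", sep="\t", header=False, index=False)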

Use the ‘Load Data File’ command and load the data file prepared above. DeepAI will automatically design a neural network for the data file. This neural network will provide good results, but let us add an extra layer (4 layers in total) and set the number of nodes to 24 on the second layer and 10 on the third layer. The first and last layers have a fixed number of nodes determined by the input features and the output (1).

Change the number of iterations to 480 and start the training process with the Run command. After a while (2-3 minutes) the results will appear, indicating 98.3% accuracy on the training data. You can still fine-tune the model and obtain higher accuracy, but this is not a fast and simple process. Fine-tuning a neural network is a tedious and often lengthy process (adjusting the number of layers, the number of nodes per layer, the learning rate, the activation function, etc.). It is also not certain that more layers and nodes will provide better results; you will need to find the optimal solution, which also depends on the other parameters.

You can use the trained ML model for making automatic and precise decisions about this HR problem.

References

  • The Application of Artificial Intelligence, Zoltan Somogyi.
  • HR Analytics Dataset: Attribution-Share Alike 4.0 International (CC BY-SA 4.0) license, Source: https://www.kaggle.com/ludobenistant/hr-analytics.
    You can download the dataset in MS Excel format here: HR_COMMA_SEP_U.XLS


AI in Predictive Maintenance

In many industries the reliability of machines is very important. In aerospace, transportation, manufacturing, utilities, etc. complex machines containing many components undergo periodic inspection and repair (preventive maintenance). The main challenge is to schedule preventive maintenance and component replacement in an optimal way such that the machines can work reliably and the components are not replaced too early. Reliability, high asset utilization and operational cost reduction are, in short, the aims of each company in these industries.

By using machine learning and historical data we can train a model which can predict when the next failure will occur and thus when preventive maintenance should be scheduled. We call these kinds of machine learning models predictive maintenance machine learning (PMML) models. There are two main types of PMML models:
  • Regression models predict the remaining useful lifetime (RUL) of the machine or components.
  • Classification models predict the failure within a pre-defined time period (time window).
In order to build a useful PMML model we need to go through some important steps, which are summarized below:
  • Data collection
  • Feature Engineering
  • Data labeling
  • Defining the training and test datasets
  • Handling imbalance in the data
The input data may come from different sources and usually contains failure history, maintenance history, machine operating conditions and usage, machine properties and operator properties.

After we have collected all necessary data we must combine them into one synchronized dataset which can be fed into the machine learning model. We call this step feature engineering because we are building a dataset from features fabricated from the collected data. This is often a complex process in the case of a predictive maintenance model and the performance of the model will entirely depend on it.
The method for combining the collected data into the final dataset is usually very similar across projects, but it is of course dependent on the business case and the data. Remember that the aim is to predict when the next failure of the machine will occur by using historical data.

There are two types of data, time series and static data. Static data can usually be simply combined with the other data by grouping them per machine ID. For example, if the maintenance history is defined with “time | machine ID | component”, and the machine properties are defined with “machine ID | property 1 | property 2…”, then we can simply add the static machine properties per machine ID as follows: “time | machine ID | component | property 1 | property 2…”
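
A minimal pandas sketch of such a join (the column names and values are illustrative):

    # Sketch: join static machine properties onto the maintenance history by machine ID.
    import pandas as pd

    maintenance = pd.DataFrame({
        "time": ["2020-01-01", "2020-01-02"],
        "machine_id": [1, 2],
        "component": ["pump", "valve"],
    })
    properties = pd.DataFrame({
        "machine_id": [1, 2],
        "model": ["A", "B"],
        "age_years": [5, 2],
    })

    combined = maintenance.merge(properties, on="machine_id")
    print(combined)  # time | machine_id | component | model | age_years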

In the case of time series data we need to aggregate the data according to some pre-defined rules based on the business case.

We usually want to predict machine failures in a future time period (time window) based on a historical time period. The data may be collected with a frequency of seconds, minutes, hours, etc. and we need to aggregate it into a pre-defined time period based on the business case. The evolution of the features in the time window is captured by the aggregated values. The machine learning model will learn which aggregated values result in a failure in the next time window. For example, if we want to predict whether a machine will fail in the next 24 hour period, then we can use a time window of 24 hours and label the aggregated records which fall into the 24 hour window just before a failure occurs as FAILURE and all other records as NORMAL. How long the time window should be is of course business case dependent. Sometimes 24 hours is appropriate but sometimes we need to use a longer period, for example, to allow for a longer supply period for repair parts. If it takes one week to get repair parts then we need to predict a failure much earlier in time.
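
The 24 hour example can be sketched in pandas as follows (the records, the single failure event and the column names are all invented; a real pipeline would handle multiple failures per machine):

    # Sketch: label aggregated records falling within 24 hours before a failure.
    import pandas as pd

    records = pd.DataFrame({
        "machine_id": [1, 1, 1],
        "time": pd.to_datetime(["2020-01-01 00:00", "2020-01-02 00:00",
                                "2020-01-03 00:00"]),
    })
    failures = pd.DataFrame({
        "machine_id": [1],
        "failure_time": pd.to_datetime(["2020-01-02 12:00"]),
    })

    window = pd.Timedelta(hours=24)
    merged = records.merge(failures, on="machine_id")
    in_window = (merged["failure_time"] - merged["time"]).between(pd.Timedelta(0), window)
    records["label"] = in_window.map({True: "FAILURE", False: "NORMAL"}).values
    print(records)  # only the record 12 hours before the failure is labeled FAILURE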


This article is a slightly modified excerpt from the book “The Application of Artificial Intelligence”. If you are interested in the subject then it is strongly recommended to read the book which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples step-by-step by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for info about pre-ordering the book.



AI in Making Recommendations

Making a recommendation means that we recommend one or more items to potential users. The recommended items may be many different things, for example, physical products which are being sold (e.g., cars, smart phones, etc.) or articles, web pages, documents, etc. There are many recommender systems in use today, for example, for recommending books, movies, clothing, holiday destinations, etc., and usually they help to increase revenue and/or help users to find the most relevant, interesting and/or important information or product.

Recommendation machine learning models work with explicit and/or implicit feedback data collected from users while they are interacting with items (products, documents, etc.). 

Explicit feedback is when the user provides some kind of rating or like/dislike of the items he/she is interacting with. There are many types of rating scales, for example, the five-star rating in which one star means low appreciation and five stars mean a very highly appreciated product. All of these ratings can be expressed on a numerical scale (e.g., 1,2,3,4 and 5; or 0 and 1 for dislike and like).

Implicit feedback is when the user does not directly provide some kind of rating but we collect information about user actions, for example, buying a product, viewing a document or web page, etc.

The basic principle behind how recommender machine learning models work is that correlation exists between how different users appreciate similar items, how different items are appreciated by similar users and the combination of the two (joint correlation). The user’s appreciation is expressed with explicit or implicit feedback. These correlations or behaviors can be learned by a machine learning model based on the collected explicit and/or implicit feedback data from the users (user + item + feedback).

There are two main types of state-of-the-art machine learning recommendation models:
  • Collaborative filtering (CF),
  • Content-based (CB).
Collaborative filtering models may use explicit and/or implicit feedback data in the form of a triplet consisting of a user ID, an item ID and the feedback value. These triplets span a three-dimensional space (user, item, feedback) and can be represented in a matrix or table.

Collaborative filtering models use the collected feedback data from all users (hence ‘collaborative’), but they usually do not use content information (descriptions of the items). Once the predicted explicit (ratings) or implicit (user actions) feedback is known, a top-k list of recommendations can be made for any user.
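
One common way to realize collaborative filtering is matrix factorization: the user-item feedback matrix is approximated by a low-rank product, and the reconstruction supplies the predicted feedback for the unknown entries. A minimal NumPy sketch with invented ratings (a generic illustration, not the AI-TOOLKIT's specific algorithm; real systems optimize over the known entries only rather than treating missing ones as zero):

    # Sketch: collaborative filtering by low-rank matrix factorization.
    import numpy as np

    # Rows = users, columns = items, values = ratings (0 = unknown).
    R = np.array([[5, 4, 0, 1],
                  [4, 5, 1, 0],
                  [0, 1, 5, 4],
                  [1, 0, 4, 5]], dtype=float)

    # Rank-2 approximation via truncated SVD.
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    R_hat = (U[:, :2] * s[:2]) @ Vt[:2, :]

    # Predicted feedback for user 0 on item 2, which was unknown in R.
    print(round(R_hat[0, 2], 2))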


This article is a slightly modified excerpt from the book “The Application of Artificial Intelligence”. If you are interested in the subject then it is strongly recommended to read the book which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples step-by-step by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for info about pre-ordering the book.



AI in Biometrics Recognition

The Merriam-Webster dictionary describes biometrics as follows: “the measurement and analysis of unique physical or behavioral characteristics (such as fingerprint or voice patterns) especially as a means of verifying personal identity.”

There are many types of biometrics used today, for example, DNA matching, the shape of the ear, eye matching (iris, retina), facial features, fingerprinting, hand geometry, voice, signature, etc. Verifying personal identity may be very important in many applications for law enforcement, security and access control, and even in smart offices and homes where person dependent services may improve processes and everyday life for people.

Most biometrics identification systems work in a very similar manner and involve two main steps, feature extraction and feature (or pattern) matching. Feature extraction means that we analyze the chosen biometrics (a human face in this case) and extract a collection of features which are necessary to distinguish between different people. The aim is, of course, to limit the extracted information to the minimum amount necessary in order to optimize the machine learning training and prediction phases. Too much information would not only make everything much slower but it would also confuse the machine learning model, which should focus on the features that are really important for distinguishing different people. Feature matching is the process in which we use the extracted features in order to determine the identity of a person. We usually compare extracted features in a reference database to the input features for recognition.

The main steps of building and using a face recognition machine learning system can be divided into two major tasks:
  • Training a machine learning model for feature extraction, and
  • Performing face recognition with the help of the trained machine learning model.
The two major tasks explained above are further divided into several sub-tasks. First we need to train a machine learning model, based on a huge number of input images (an image database), for feature extraction. The training of such a model may take several days or even weeks and may involve millions of images. The aim is that the ML model (a large scale convolutional neural network (CNN)) learns how to distinguish between the faces of different people. Deep inside the system, the CNN learns which face patterns are important in order to distinguish between different people.

As usual, both ML model training and testing are important in order to arrive at a good final ML model.

The face recognition branch of the whole process involves the detection of face(s) in the input image, normalization of the extracted face image (we will see later how and why), feature extraction using the previously trained ML model and, finally, effective face recognition based on the extracted features.

After we have trained our CNN model we are ready to assemble a professional face recognition system.

As a first step we need to find automatically all of the faces in the input image and their exact location in order to extract the face images. Face detection is a complex problem because of the many possible face poses, rotations, scales, facial expressions, occlusions, etc.

Before we can perform face recognition we need to build a reference face recognition database with high quality frontal face images of the people we want to recognize. The size of the face images should be similar to the size of the images we used during the training of the ML model for feature extraction. Face recognition systems (such as the AI-TOOLKIT) usually extract and scale the face images automatically from selected input images.

The trained residual convolutional neural network (RCNN) can now be used to extract the feature vector from each detected and normalized face in an input image for recognition. Next we need to extract the feature vector from each image in the reference database. When we have all of the above feature vectors we can simply use a clustering algorithm in order to group (cluster) all feature vectors. If the detected image corresponds to one of the reference images, then both images will be grouped into the same cluster because the feature vectors are close to each other in the Euclidean space learned by the ML model if they are both face images of the same person. If the detected face (represented by its feature vector) is assigned to a cluster without any other face, then the face is an unknown face (it does not exist in the reference database).
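
In the simplest case the clustering step reduces to a nearest-neighbour check with a distance threshold. A minimal sketch (the feature vectors and the threshold are invented; real face embeddings have 128 or more dimensions):

    # Sketch: match a detected face embedding against a reference database
    # by Euclidean distance.
    import numpy as np

    reference = {                              # name -> feature vector from the CNN
        "alice": np.array([0.1, 0.9, 0.3]),
        "bob":   np.array([0.8, 0.2, 0.5]),
    }
    detected = np.array([0.12, 0.88, 0.31])    # embedding of the detected face
    threshold = 0.6                            # maximum distance for a match

    name, dist = min(((n, np.linalg.norm(v - detected)) for n, v in reference.items()),
                     key=lambda t: t[1])
    print(name if dist < threshold else "unknown")  # -> alice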

Speaker recognition is similar to face recognition (feature extraction and identification) and based on some acoustic patterns (features) in human speech which are unique between individuals. The uniqueness of these acoustic patterns is due to the unique anatomy of humans (the shape and size of organs in the mouth called the vocal tract) and due to learned speech patterns and style.

The AI-TOOLKIT has built-in Apps which can be used for professional automatic face, speaker and fingerprint recognition.


This article is a slightly modified excerpt from the book “The Application of Artificial Intelligence”. If you are interested in the subject then it is strongly recommended to read the book which contains many more details and real world case studies for several sectors and disciplines! The book explains several examples step-by-step by using the AI-TOOLKIT. The book is going through the publishing process at the time of writing this article. You may use the contact form for info about pre-ordering the book.



Contact

Have a general inquiry?

Contact our team.
