ARTIFICIAL INTELLIGENCE

TOOLKIT

The Application of AI made Easy!

MS Windows & Google Cloud Computing Software
FREE FOR NON-COMMERCIAL USE!


Join the LinkedIn group The Application Of Artificial Intelligence
to discuss these and other related subjects with your network!



AI Toolkit







Decision AI Professional
Artificial Intelligence (AI) software toolkit for easy training, testing and inference of state-of-the-art machine learning models, and for creating a machine learning flow (several AI models working seamlessly together).


VoiceData
Automatic Speech Recognition (ASR) data generator toolkit with text normalization (natural language processing), AI text recognition and an audio editor: transform sample rate and channels, suppress noise, cancel echo, change tempo, rate and pitch frequency, and remove audio without a human voice.
DocumentSummary
Creates a short summary from any text document (plain text, PDF files, HTML files, etc.) on your computer or on the internet. It uses Artificial Intelligence (AI) powered language models and is able to take into account specialized words specific to your discipline (law, medicine, chemistry, etc.).
(C) 2016-present Zoltan Somogyi, AI-TOOLKIT, VoiceData, VoiceBridge, DocumentSummary are Copyright Zoltan Somogyi, All Rights Reserved.


Open Source Software


VoiceBridge
VoiceBridge is an open-source, state-of-the-art speech recognition C++ toolkit.







Knowledge

The Application of AI in Human Resources

You will learn how to use the AI tools included in the AI-TOOLKIT to make HR decisions easily. In this case we will train an AI model which can be used to predict whether an employee will leave, why they will leave, or even whether it is worthwhile to offer an employee a promotion.
The article will also explain and compare the different algorithm/tool options available in the AI-TOOLKIT.

You can apply the same principles to any other sector or business case: for example, you could predict whether a client will leave, why they will leave, or whether it is worthwhile to offer a discount, etc.

The Dataset

The dataset contains 15,000 rows (records) and 10 columns (variables). You can download the data at the end of this article. If you are doing this in your own company, you should first study which variables most influence the specific business case, in this example an HR problem, and select the variables accordingly. Selecting too few or too many variables (too little knowledge, or unneeded noise) will result in a less useful or less accurate AI model.

The 10 columns are the following:
  • Satisfaction Level (0-1)
  • Last evaluation (0-1)
  • Number of projects (integer)
  • Average monthly hours (integer)
  • Time spent at the company (integer)
  • Whether they have had a work accident (0-no, 1-yes)
  • Whether they have had a promotion in the last 5 years  (0-no, 1-yes)
  • Department name (text)
  • Salary (text: low, medium, high)
  • Whether the employee has left (0-no, 1-yes)

Depending on which variable (column) you choose as the decision variable, you can train a model for different purposes: for example, to predict whether an employee will leave in the future, whether it is worthwhile to offer a promotion, etc.

In this example we will choose the ‘Left’ column (whether the employee has left) as the decision variable, so that the trained model can predict whether an employee will leave.
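As a hedged illustration of how one record of this dataset might be turned into a feature vector plus the ‘Left’ decision variable, here is a minimal Python sketch. The column names and the category encodings (salary levels, department list) are illustrative assumptions, not the AI-TOOLKIT's actual import format:

```python
# Minimal sketch: one HR record -> numeric feature vector + 'Left' label.
# Encodings below are illustrative assumptions, not the AI-TOOLKIT's format.

SALARY_LEVELS = {"low": 0, "medium": 1, "high": 2}
DEPARTMENTS = ["sales", "technical", "support", "IT", "hr"]  # example subset

def encode_record(record):
    """Map a raw HR record (dict) to (features, label)."""
    features = [
        record["satisfaction_level"],             # 0-1
        record["last_evaluation"],                # 0-1
        record["number_of_projects"],             # integer
        record["average_monthly_hours"],          # integer
        record["time_at_company"],                # integer (years)
        record["work_accident"],                  # 0 = no, 1 = yes
        record["promotion_last_5_years"],         # 0 = no, 1 = yes
        DEPARTMENTS.index(record["department"]),  # text -> category index
        SALARY_LEVELS[record["salary"]],          # text -> ordinal value
    ]
    label = record["left"]  # decision variable: 0 = stayed, 1 = left
    return features, label

example = {
    "satisfaction_level": 0.38, "last_evaluation": 0.53,
    "number_of_projects": 2, "average_monthly_hours": 157,
    "time_at_company": 3, "work_accident": 0,
    "promotion_last_5_years": 0, "department": "sales",
    "salary": "low", "left": 1,
}
features, label = encode_record(example)
# features has 9 entries; label is the 'Left' decision variable (1 here)
```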


Training the AI Model

The different tools in the AI-TOOLKIT use different types of AI algorithms. Each algorithm has its advantages and disadvantages: some algorithms are well suited to one type of data but not to another. Neural-network-based AI algorithms can be tuned in such a way that they can be applied to all kinds of problems, but at the cost of complexity (several layers, often with many nodes and even different types of layers) and processing speed (more layers and nodes mean more processing time). Therefore it is worthwhile to choose your tool cleverly!

Support Vector Machine (SVM) model

You can easily import your numerical delimited data into the AI-TOOLKIT. The SVM model has several parameters, which can be automatically optimized by the built-in parameter optimization module.
The optimal parameters for this problem are C = 100, kernel type = RBF and gamma = 10.
The accuracy of the trained model is above 99.9%.
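The AI-TOOLKIT handles the training internally, but the same kind of model can be sketched with scikit-learn, using the parameters found by the optimizer above. This is purely an illustration under that assumption (scikit-learn is not what the AI-TOOLKIT uses, and the toy data below is not the HR dataset):

```python
# Sketch: an RBF-kernel SVM with the parameters found above
# (C = 100, gamma = 10). scikit-learn stands in for the AI-TOOLKIT here,
# and the tiny data set is a synthetic stand-in for the HR data.
from sklearn.svm import SVC

# Toy features: [satisfaction level, monthly hours / 300]
X = [[0.1, 0.5], [0.2, 0.9], [0.15, 0.85],
     [0.8, 0.5], [0.9, 0.6], [0.7, 0.4]]
y = [1, 1, 1, 0, 0, 0]   # 1 = left, 0 = stayed

model = SVC(C=100, kernel="rbf", gamma=10)
model.fit(X, y)

train_accuracy = model.score(X, y)  # accuracy on the training points
```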

DeepAI Educational Neural Network Model

The deep neural network model in DeepAI is based on a semi-automatic multilayer and multimode neural network implementation. The software designs the neural network semi-automatically; you only need to define the number of layers and the number of nodes per layer (you can of course adjust more parameters, but this is usually not necessary). DeepAI Educational does not use very complex state-of-the-art neural network architectures, but it provides good results in many cases.

DeepAI uses the SSV data file format (delimited text file). Adjust the settings in the ‘Settings/AI’ tab according to the image below if they differ. Use the ‘Load Data File’ command and load the data file. DeepAI will automatically design a neural network for the data file. This neural network would already provide good results, but let us add an extra layer (4 layers in total) and set the number of nodes to 24 on the second layer and 10 on the third layer. The first and last layers have a fixed number of nodes, determined by the input features and the output (1).
Change the number of iterations to 480 and start the training process with the Run command. After a while (2-3 minutes) the results below will appear, indicating 98% accuracy on the training data. You can still fine-tune the model and obtain a higher accuracy, but this is not a fast or simple process. Fine-tuning a neural network is a tedious and often long process (adjusting the number of layers, the number of nodes per layer, the learning rate, the activation function, etc.). It is also not guaranteed that more layers and nodes will give better results; you will need to find the optimal configuration, which also depends on the other parameters.
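As a rough analogue of the 4-layer network described above (hidden layers of 24 and 10 nodes, 480 iterations), one could sketch it with scikit-learn's MLPClassifier. This is an illustrative assumption only, not DeepAI's actual implementation, and the tiny data set is synthetic:

```python
# Sketch of a 4-layer network: input layer, hidden layers of 24 and 10
# nodes, one output, trained for up to 480 iterations. scikit-learn's
# MLPClassifier is an illustrative stand-in for DeepAI.
from sklearn.neural_network import MLPClassifier

X = [[0.1, 0.5], [0.2, 0.9], [0.15, 0.85],
     [0.8, 0.5], [0.9, 0.6], [0.7, 0.4]]
y = [1, 1, 1, 0, 0, 0]   # 1 = left, 0 = stayed

net = MLPClassifier(hidden_layer_sizes=(24, 10), max_iter=480,
                    random_state=0)
net.fit(X, y)
preds = net.predict(X)   # class predictions (0 or 1) for each record
```

Note how little code changes between this and the SVM variant: most of the real effort goes into the fine-tuning described above, not the model definition.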


Comparison of the results

Below you will find a comparison of the accuracy of the models trained with the different tools in the AI-TOOLKIT, together with results from other sources. You can use the trained models to predict whether an employee will leave.

Model                                            Accuracy
---------------------------------------------------------
AI-TOOLKIT:
  Numerical model (SVM based)                    99.9 %
DeepAI Educational:
  Neural Network (simple, not optimized)         98.0 %
Results from other sources:
  Gradient Boosting                              99.2 %
  Random Forest                                  98.8 %
  Support Vector Machine (SVM)                   97.4 %
  Decision Tree                                  97.2 %

Source of the other results: https://www.kaggle.com/ahujam/using-machine-learning-to-predict-attrition-in-hr

Using the trained AI models

You can use the trained AI models for making automatic and precise decisions about this HR problem.

Conclusion

The different tools in the AI-TOOLKIT use different types of AI algorithms (SVM, Random Forest, Neural Network, etc.). Each algorithm has its advantages and disadvantages depending on the input data. Training neural-network-based AI models is more work than training the other types of AI models, so choose your AI model wisely! The accuracy of the trained AI model depends mainly on the input data, and also on the model parameters. The quality of the data is very important, so always check the data before training your AI model. If the accuracy of the trained AI model is very low, it is very likely that something is wrong with the data!
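Checking the data before training, as the conclusion advises, can start with something as simple as scanning for missing values and checking the balance of the decision variable. A minimal sketch in plain Python (the record layout is an illustrative assumption):

```python
# Quick data-quality scan before training: count missing values per
# column and tally the decision variable. Many missing values or a
# heavily skewed decision column is a warning sign.
from collections import Counter

def check_data(rows, decision_index):
    """rows: list of lists; decision_index: column of the decision variable."""
    n_cols = len(rows[0])
    missing = [0] * n_cols
    labels = Counter()
    for row in rows:
        for i, value in enumerate(row):
            if value is None or value == "":
                missing[i] += 1
        labels[row[decision_index]] += 1
    return missing, labels

# Illustrative records: [satisfaction, monthly hours, salary, left]
rows = [
    [0.38, 157, "low", 1],
    [0.80, 262, "medium", 0],
    [None, 140, "low", 1],
    [0.72, 210, "high", 0],
]
missing, labels = check_data(rows, decision_index=3)
# missing -> [1, 0, 0, 0]; labels -> Counter({1: 2, 0: 2})
```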

Download the Dataset

You can download the dataset in MS Excel format here: HR_comma_sep_u.xls

References
  • HR Analytics Dataset, Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) license. Source: https://www.kaggle.com/ludobenistant/hr-analytics






The Future of Artificial Intelligence

There is a lot of news about Artificial Intelligence (AI) lately, but the information is often misleading or wrong. Many articles spread fear, claiming that AI will replace you in your job, that AI is dangerous because it is cleverer than humans, that killer robots and ethical catastrophes are imminent, etc. In this article I would like to set out my vision of the future of AI, explain why most of these articles are wrong, and show why it will still take hundreds of years to build an AI which mimics the human brain and body well.

Google, which is one of the leading technology giants in the field of AI, recently claimed to be able to build the brain of a mouse today, but I think even that is hugely exaggerated. You will understand why after reading this article.

But the answer to the question of whether the current state of AI is useful is of course yes, very useful. We can use the results of many years of AI research in countless applications in all business sectors. The current state of AI helps humans do their jobs better and makes things possible which would otherwise not be possible: for example, developing new vaccines for deadly diseases, improving business processes, helping humans with disabilities, discovering new things, and gaining new insights from all kinds of data in all business sectors. Current AI algorithms can interpret all kinds of data, for example numbers, text, images, video, audio, etc. A whole new range of applications is possible, and only your imagination limits the useful applications in your business sector.

In the next sections I will explain in simple terms the current state of, and my vision for, the most important building blocks of a real human-like AI system: the memory, the CPU, the sensors, and the whole interconnected system, as well as how it will learn and use the learned information.

Memory

First of all, let me begin with a simplified explanation of how the human body and brain work, and why it will be extremely difficult to build an AI (robot) in the future which can replace a human even partly. One of the mistakes AI researchers make today is that they do not take into account the enormous amount of different types of interconnected memory (and stored information) the human body and brain have. Even brain research is still guessing about many things, but there are more and more signs (supported by experiments) that we store different types of things in our brain (memories): images, smells, sounds, etc. There is an enormous amount of information in a well-developed human brain and body. Yes, body, because our whole body has different types of interconnected memory; just think, for example, of what we call muscle memory. One of the reasons we cannot even attempt to replicate a human is the lack of flexible, very fast memory in huge quantities. We are not talking about megabytes or gigabytes of memory but thousands of terabytes of interconnected memory. The technology for this kind of memory is still very far away. It is very likely that this type of memory will not be like the memory modules we use in our computers today, but some kind of chemical and biological substance (like the human body and brain have). There is active research going on in this field as well.

So how will an AI similar to the human body and brain function and work? I think that there will be a very simple logic which connects the different types of memory and sensors (see later) and the complexity will not come from the basic building blocks (which will be very simple) but from the very cleverly interconnected and coordinated whole system. The AI will store all necessary information like the human brain and body (and possibly even much more) and will be able to recall that information (even many of them in parallel) according to some kind of stimuli.

Sensors

The second reason it is extremely difficult to replicate a human today with an AI is the many types of interconnected sensors the human body has. Just think of the eyes (vision), the ears (hearing), the nose (smell), touch (all over the body), temperature, etc.

We are at a very good level with vision capabilities, but even vision is much more complex in reality, because the human eye (and the connected brain) can sense in several dimensions. High resolution is again very important, because a single 3D image the human eye sees consumes a huge amount of memory, and there is an endless stream of such images (a kind of video). We can mimic the human eye's 3D capabilities with two cameras today, but a single human eye can already see in 3D (or more), and the image is used by our brain to estimate depth, distances, etc. Does one human eye contain several ‘cameras’? Most probably yes! Many of today's AI algorithms are limited to 2D grayscale images because of the computing power and memory needed to analyze them; e.g. AI algorithms which detect an object in an image or video often first convert the images to grayscale before processing.

All sensors in the human body are active in parallel all the time: we see, we hear, we smell, etc., and they are interconnected by a very clever system which also includes the nerves in our body (yes, the whole body, not only the brain). Just think of an object you are approaching and how you decide whether you like it. You take several things into account at the same time (the visuals, the smells, the sounds, etc.) and you decide automatically, in an instant, whether you like that object or not.

The human body and brain also store much of the information they receive, which takes up even more memory and logic. For example, how is a smell stored and recalled in your brain? I do not think anybody can answer this question yet. The human brain must have some kind of very efficient ‘internal language’ which it uses to describe these things.

CPU

Today we can use computers with several CPUs, each containing separate computing units (cores). But how many ‘CPU units’ does the human body and brain have, how fast are they, and how do they work seamlessly in parallel? You can imagine that this question is very similar to the memory issue above. The human body and brain process a huge amount of information in parallel (from the many sensors in the body, from memory, etc.), and all of this information is used instantly to make decisions. The information must be found in the different types of human memory, then processed and combined. We are very far from the speed, storage capacity, parallel processing, logic, etc. needed to replicate the human brain and body. The ‘wonders’ of nature and the human body still hold many secrets, even after we have studied them for thousands of years!

One positive development is the recent appearance of extra computing power and AI capabilities in video cards, led by NVIDIA Corporation. This is a completely logical evolution, and I expect the same capabilities to appear e.g. in separate audio processing units (and other sensor systems). Computer systems will first grow in this way, and only in the far future will they be integrated more closely and become much smaller. I also expect that in the very far future many components will be replaced by some kind of biological and chemical substances with human-like capabilities.

But for the time being there is an urgent need for the standardization of current and future developments, in order to make these systems (video card, CPU, memory, etc.) much better coordinated and easier to use (for software development).

The Whole System

The human body and brain form an interconnected system which senses, processes, stores and recalls all of the information a human experiences. Many types of information are used at the same time to make decisions: for example, that a glass can be used to drink, that you can open a door, what to do if someone is calling you, what to say if someone asks a question, etc. The basic building blocks of this interconnected system are very simple, but the whole system together is extremely complex. Today's AI systems do not take this into account, because they either overcomplicate the basic building blocks and/or incorrectly build the system as a whole, e.g. by not taking into account the distributed types of memory, the huge amount of stored information, and the stimuli and input from several sensors.
This is of course also because we are not yet at the technological level, and do not yet have the knowledge, necessary to build a real AI system, so AI research tries to simulate human-like AI patterns instead.


There are a lot of AI algorithms out there, and many people ask which algorithm is the best, which they should use, and which algorithm is the future. I have already given the answer with the statement above that ‘there will be a very simple logic which connects the different types of memory and sensors’. Most of the complexity and diversity of current algorithms is needed today because of limited memory, limited CPU power and our limited knowledge of the whole system. The most efficient way to work today is to select the appropriate algorithm for the specific task. One of the errors made by AI experts and researchers today is trying to use neural-network-based AI algorithms for every task, despite the fact that for many tasks there are much simpler and faster algorithms which in many cases even give better accuracy.

Learning and Ethics

We already know a lot about how learning is done. We can learn from existing knowledge, which we may call supervised learning (e.g. someone tells us or shows us that we can drink from a glass). We can learn unknown things, which we may call unsupervised or reinforcement learning (we find things out by trial and error). Both types of learning are very important, and we cannot have a real AI system which cannot combine both of these learning strategies. An AI could of course learn with a trial-and-error strategy only, but that would be very inefficient and even dangerous, because of the unknown direction in which the AI would evolve. Learning in reality is of course more complex than just these two strategies; it involves a range of strategies which are combinations of the two. Current AI algorithms cannot yet combine these strategies effectively.
A real human-like AI must therefore combine several learning strategies and automatically choose and use the one most appropriate to the specific circumstances.

There have been a lot of recent failures in the automated learning/training of AI systems. Several AI giants announced, launched and then very soon withdrew such systems. The reasons are simple and twofold: current AI algorithms are not intelligent enough, and just as a child can learn bad things, an AI algorithm can also learn bad things. Put a child in the wrong environment and he or she will learn how and what people speak in that environment; put a natural language processing AI system on the internet, where it can be accessed and influenced by anyone, and it will learn unexpected things.

And with this last thought we have arrived at the topic of ethics. Ethics is of course something invented by humans, and it must be learned! The good news is that good ethics can be learned like anything else; the bad news is that, like anything else in this world, all good things can be used with bad intentions. An AI system can be used to save or improve lives, but it can also be used to destroy them. This is the same question as asking whether to sell guns or computers to bad people. Asking whether we will be able to prevent bad people from accessing highly intelligent AI systems in the future is the same as asking whether we can prevent bad people from accessing guns or the internet. The balance of the world we live in depends completely on us, and this is as important as evolution itself. There can be no evolution without a good balance. If evolution went much faster than we could keep up with while holding the right balance, the world would be destroyed by the technology invented by humans.

Conclusion

The answer to the question of whether current AI algorithms are useful is of course yes, very useful, but we are very far from an all-in-one real AI system which can replace a human. We can of course use the results of many years of AI research in countless applications in all business sectors. The current state of AI helps humans do their jobs better and makes things possible which would otherwise not be possible: for example, developing new vaccines for deadly diseases, improving business processes, helping humans with disabilities, discovering new things, and gaining new insights from all kinds of data in all business sectors. Current AI algorithms can interpret all kinds of data, for example numbers, text, images, video, audio, etc. A whole new range of applications is possible, and only your imagination limits the useful applications in your business sector. AI systems will start to speed up evolution very soon; it has already started!

AI automated Root Cause Analysis


Are you still using techniques like 5 Whys or fishbone diagrams, or even guessing, for root cause analysis? Would you like to get a root cause instantly, with high accuracy? If yes, then read on!

By using Artificial Intelligence (AI) software you can teach the AI all possible root causes as a function of complex input data, and then ask the AI for the root cause any time you need it! You can do this in any sector and in any discipline. You can even automate the process and predict whether a problem will occur in the future, by continuously monitoring the input parameters and feeding them to the AI for prediction. The only things you need are AI software and data.


Download the AI-TOOLKIT for free (fully functional version for non-commercial purposes – no registration is needed!) and make your own AI automated root cause analysis!

How to start your AI Root Cause Analysis?

  1. Download the AI-TOOLKIT for free (fully functional version for non-commercial purposes – no registration is needed!).
  2. Install the software.
  3. Decide on the process or phenomenon for which you want to make a root cause analysis.
  4. Collect the necessary data. The data should contain a sufficient number of records. Each data record should contain a number of parameters and a root cause (decision variable) resulting from the specific state of the parameters.
  5. Choose an AI-TOOLKIT algorithm (SVM, Random Forest, Neural Network, etc.) and feed the data to the AI e.g. in a delimited text file format.
  6. Train the AI. Check the accuracy of the AI model. If the accuracy is not acceptable, you may need to add more training data, introduce extra parameters or use another built-in algorithm.
  7. If you are satisfied with the accuracy of the AI model save the model and use it for the prediction of future root causes. You can also automate the prediction process with the AI-TOOLKIT.

You can repeat this procedure for any number of processes or phenomena, and in this way develop several AI models and use them for automatic root cause analysis throughout your company!
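The steps above can be sketched in a few lines of plain Python. A deliberately simple nearest-match rule stands in here for the AI-TOOLKIT's real algorithms (SVM, Random Forest, Neural Network, etc.), and the parameter names and root causes are illustrative assumptions:

```python
# Sketch of steps 4-7: learn (parameters -> root cause) records and
# predict the cause for a new parameter state. A simple nearest-match
# rule stands in for the AI-TOOLKIT's real algorithms.

# Training records: (process parameters, observed root cause).
# Parameter names and causes are illustrative assumptions.
records = [
    ({"temp": "high", "vibration": "high", "load": "normal"}, "bearing wear"),
    ({"temp": "high", "vibration": "low",  "load": "high"},   "overload"),
    ({"temp": "normal", "vibration": "low", "load": "normal"}, "no fault"),
    ({"temp": "high", "vibration": "high", "load": "high"},   "bearing wear"),
]

def predict_root_cause(state):
    """Return the cause of the training record most similar to `state`
    (similarity = number of matching parameter values)."""
    def similarity(params):
        return sum(1 for k, v in state.items() if params.get(k) == v)
    best_params, best_cause = max(records, key=lambda r: similarity(r[0]))
    return best_cause

cause = predict_root_cause({"temp": "high", "vibration": "high",
                            "load": "normal"})
# cause -> "bearing wear" (exact match with the first record)
```

In practice you would replace the nearest-match rule with a trained model and the toy records with real monitoring data, but the input/output shape of the prediction step is the same.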

More info


 AI-TOOLKIT Download

 Software Feedback Form



Learn about Artificial Intelligence (AI)

There is a lot of hype about AI (Artificial Intelligence) today but what is true from all of these stories and myths? I have collected a number of very interesting articles and real applications of AI which I would like to share with you. I will regularly update this list so keep coming back for more interesting stories and facts!

  1. This is a very high level explanation of AI and many of its applications. It also tries to tell the truth about what AI is and what it is not. It is not a comprehensive list of AI and/or machine learning because it only focuses on neural networks (often called deep learning). Most of the scientists today think that deep learning is the future of AI.
      
  2. Introduction to the application of artificial intelligence (AI)
    This is a more in depth explanation of AI and machine learning.
      
  3. Behind the scenes at Google’s AI Team
    A few weeks ago The New York Times published a long (at least an hour to read) and intimate article about how Google evolved over the last five years into one of the leading companies in Artificial Intelligence. Big things in life are made not by luck but by hard work and curiosity!
      
  4. Case study: artificial intelligence in healthcare business process improvement
    The main aim of this case study is to demonstrate the different applications of Artificial Intelligence (machine learning) in business process improvement specific to the Healthcare sector.
      
  5. An artificial intelligence platform for the multihospital collaborative management of congenital cataracts
    A very interesting article appeared recently about AI (using deep learning on images) for diagnostics, risk stratification and treatment suggestions; it accurately diagnoses diseases and provides treatment decisions (often better than a specialist)!
     
  6. How Deep Learning is Reinventing Hearing Aids
    This article is from Nvidia. Nvidia is a manufacturer of graphics hardware accelerating AI applications in several fields. Mercedes (the car manufacturer) has just signed a cooperation agreement with Nvidia for accelerating AI applications. "AI deep learning to separate speech from noise. How deep learning hearing aid technology could also improve speech recognition on cellphones, help workers on noisy factory floors or equip soldiers so they can hear each other amid the cacophony of battle."
     
  7. Case study: artificial intelligence in the financial sector
    The Application of Artificial Intelligence (AI) in the Credit Screening of Clients in a Financial Institution.
     
  8. How Deep Learning Changes Market for Solar-Powered Homes
    "AI deep learning-based analysis of a household’s likelihood to embrace solar, and its prospects for getting good solar production. So far, the company has trained two networks, both of which rely on analysis of satellite data: One determines whether a house already has solar panels; a second determines whether vegetation is crowding the roof and could get in the way of an installation."
     
  9. Can AI End Checkout Lines?
    "AI Lets Shoppers Avoid Long Waits at Checkout  - two artificial intelligence startups aim to make checking out of grocery stores and company cafeterias a walk in the park."
     
  10. KLM Provides Faster Customer Service with AI Tool
    “We’re unlocking the intelligence value of historical data while helping customer service agents deliver a faster and more accurate experience for their consumers,”
     
  11. Case study: THE APPLICATION OF AI IN HUMAN RESOURCES
    You will learn how to use the AI tools included in the AI-TOOLKIT for making HR decisions easily. In this case we will train an AI model which can be used to decide/predict if an employee will leave or why he/she will leave or even if it is worthwhile to offer a promotion to an employee.
     
  12. THE FUTURE OF ARTIFICIAL INTELLIGENCE
    The current state and my vision about the most important building blocks of a real human like AI system.


Image partly courtesy of drpnncpptak and ratch0013  at FreeDigitalPhotos.net

Case Study: Artificial Intelligence in the Financial Sector

The Application of Artificial Intelligence (AI) in the Credit Screening of Clients in a Financial Institution
The need for credit screening can arise in several circumstances in a financial institution (banking, insurance, investment banking, etc.): for example, when a private person wants to borrow money, when a business wants extra credit, or as part of a recruitment process. Credit screening means that the financial institution performs a background check on the applicant in order to decide whether to approve or reject e.g. the credit request. Such credit screening involves collecting a number of attributes relevant to the decision. Depending on the values of these attributes, the financial institution can decide whether to approve or reject the application. Such attributes include e.g. the annual income of the applicant, owned cash and property, existing loans, former application history, etc.
This case study will show you how to use an AI model to make credit screening decisions fast and accurately for credit card applications.

The dataset used in this case study contains data collected in a Japanese bank for 653 credit card applications [1]. Each record in the dataset corresponds to an APPROVE or REJECT decision for a credit card applicant. Part of the dataset can be seen in the image below.


Please note that the names and some of the values of the attributes are changed to symbols in order to protect the confidentiality of the bank.

The type and values of the different attributes:
  • A1 - Text data type with values: A, B.
  • A2 - Number data type with values in the range of: 13.75 – 76.75
  • A3 - Number data type with values in the range of: 0 - 28
  • A4 - Text data type with values: U, Y, L, T.
  • A5 - Text data type with values: G, P, GG.
  • A6 - Text data type with values: C, D, CC, I, J, K, M, R, Q, W, X, E, AA, FF.
  • A7 - Text data type with values: V, H, BB, J, N, Z, DD, FF, O.
  • A8 - Number data type with values in the range of: 0 - 28.5
  • A9 - Text data type with values: T, F.
  • A10 - Text data type with values: T, F.
  • A11 - Number data type with values in the range of: 0 - 67
  • A12 - Text data type with values: T, F.
  • A13 - Text data type with values: G, P, S.
  • A14 - Number data type with values in the range of: 0 - 2000
  • A15 - Number data type with values in the range of: 0 - 100000
  • A16 - This is the decision variable or class (Text data type) with values: APPROVE, REJECT
You can download the dataset here: Japanese Credit Screening dataset.

After collecting the data, training the AI model is a very simple process. By feeding this data to the AI, the Japanese bank could train a model which makes the credit card application approval very fast, reliable and simple.
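As a hedged sketch of how such mixed text/number attributes could be encoded into a numeric vector before training (the encoding scheme and the attribute subset are assumptions for illustration; the AI-TOOLKIT handles this internally):

```python
# Sketch: encoding one credit-screening record with mixed text and
# number attributes into a numeric vector plus the APPROVE/REJECT label.
# Only a subset of A1-A16 is used, and the category orderings are
# illustrative assumptions.

TEXT_VALUES = {
    "A1": ["A", "B"],
    "A4": ["U", "Y", "L", "T"],
    "A9": ["T", "F"],
}

def encode(record):
    """Map each attribute to a number: text -> category index, numbers as-is."""
    features = []
    for name, value in record.items():
        if name == "A16":
            continue  # decision variable, handled separately below
        if name in TEXT_VALUES:
            features.append(TEXT_VALUES[name].index(value))
        else:
            features.append(float(value))
    label = 1 if record["A16"] == "APPROVE" else 0
    return features, label

record = {"A1": "B", "A2": 30.83, "A4": "U", "A9": "T", "A16": "APPROVE"}
features, label = encode(record)
# features -> [1, 30.83, 0, 0]; label -> 1
```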

There are of course many other applications for AI models in the financial sector: for example, decision-making processes similar to the credit card application process, other types of risk analysis, buy/sell decisions on the financial markets, etc.

References
1. Japanese Credit Screening dataset, Chiharu Sano


In case you are also interested in some more BUSINESS PROCESS IMPROVEMENT Cloud computing tools:
 Time Study BPI Timesheet Tools, Google Sheets add-on BPI Cloud Computing Tools, Google Sheets add-on

Case Study: Artificial Intelligence in Healthcare Business Process Improvement

The main aim of this case study is to demonstrate the different applications of Artificial Intelligence (machine learning) in business process improvement specific to the Healthcare sector. However, many of the principles and ideas applied in this case study are also applicable in many other sectors!
If you would like to try the AI modeled in this case study, or apply the principles learned here in your own project, you can do so with the free Decision AI Google Sheets Add-on (part of the BPI Tools package; used in the Post-Operative Patient Care Process case). The Decision AI Google Sheets Add-on is developed in cooperation with Google.

If you want to read more about how the AI works, see: INTRODUCTION TO ARTIFICIAL INTELLIGENCE MODELING.

For the second case study (Breast Cancer Diagnosis Process) you need the AI-TOOLKIT Professional software package, an MS Windows desktop application. The AI-TOOLKIT can handle much more data, is much faster than the Google Sheets Add-on, has no AI training time limit, detects model parameters automatically and offers more AI modeling options (needed, for example, for the second case).

Case 1: Post-Operative Patient Care Process

Introduction

The aim of this case study is to improve the post-operative patient care process in a hospital. Under the current process, patients must be examined by a medical doctor after an operation to determine where they should be sent from the post-operative recovery area. The possibilities are the following:
  • The patient may go home,
  • The patient needs to go to the general care hospital floor (GC),
  • The patient needs to be transferred to intensive care (IC).
To improve this process (make it much faster and more reliable), the hospital needs to collect, for many patients, all the data needed to make this decision, and use this data to train an AI model. Once the AI model is successfully trained, a hospital employee (e.g. a nurse) can simply feed the specific patient's data to the AI model, which will instantly indicate what should happen with the patient. This is much faster because the waiting time for the medical doctor is eliminated; in many cases the decision is more reliable because the AI does not get tired or confused by external factors; the medical doctor or specialist is freed for other important work; and, last but not least, the patient is more satisfied with the faster process. Several important reasons to implement such a process improvement!

The AI training data

A subset of the data chosen to train the AI about the post-operative patient care process can be seen in the table below. The data is real patient data collected in a hospital (see reference [1] at the end). The different attributes/parameters are explained below the table.

Post-Operative Patient Care Process Decision AI data set

Most of the numerical attributes, such as the temperature, are grouped and converted into textual classes. This is one of the tricks that can be used while training an AI model. You may of course also use numerical values, but this decision may influence the choice of the AI model! In many situations it is sufficient to group the data into well-chosen textual 'classes'. Decision AI can handle both textual and numerical attributes, and we could therefore call it a mixed-attribute AI.
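As an example of such a grouping, the L-CORE internal-temperature thresholds listed below can be implemented as a small conversion function (a sketch; the actual preprocessing used for the data set is not shown in this article):

```python
def l_core_class(temp_c):
    """Group the patient's internal temperature (L-CORE, in degrees C)
    into the textual classes used in the data set: high / mid / low."""
    if temp_c > 37.0:
        return "high"
    if temp_c >= 36.0:
        return "mid"
    return "low"

print(l_core_class(37.4))  # -> high
print(l_core_class(36.2))  # -> mid
print(l_core_class(35.1))  # -> low
```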

The collected attributes and groupings are the following:

1. L-CORE: the patient's internal temperature:
  • high > 37°,
  • mid >= 36° and <= 37°,
  • low < 36°
2. L-SURF: the patient's surface temperature:
  • high > 36.5°,
  • mid >= 35° and <= 36.5°,
  • low < 35°
3. L-O2: the oxygen saturation:
  • excellent >= 98%,
  • good >= 90% and < 98%,
  • fair >= 80% and < 90%,
  • poor < 80%
4. L-BP: the last measurement of blood pressure:
  • high > 130/90,
  • mid <= 130/90 and >= 90/70,
  • low < 90/70
5. SURF-STBL: the stability of patient's surface temperature:
  • stable,
  • mod-stable,
  • unstable
6. CORE-STBL: the stability of patient's core temperature:
  • stable,
  • mod-stable,
  • unstable
7. BP-STBL: the stability of patient's blood pressure:
  • stable,
  • mod-stable,
  • unstable
8. COMFORT: the patient's perceived comfort at discharge, measured as an integer between 0 and 20.
9. DECISION: the discharge decision:
  • Home: the patient needs to be prepared to go home,
  • GC: the patient must be sent to the General Care hospital floor,
  • IC: the patient must be sent to Intensive Care.
The data can be downloaded here: Post-Operative Patient Care Process Decision AI data set. The data file has a simple tab-separated format which can also be read by Google Sheets or any other spreadsheet software.
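Because the file is plain tab-separated text, it can also be read directly in any programming language. A sketch using Python's standard `csv` module on a small inline sample (the column subset and values below are illustrative):

```python
import csv
import io

# A small inline sample in the same tab-separated shape as the data file
# (only three of the columns, with made-up values).
sample = "CORE-STBL\tCOMFORT\tDECISION\nstable\t15\tHome\nunstable\t10\tIC\n"

rows = list(csv.DictReader(io.StringIO(sample), delimiter="\t"))
print(rows[0]["DECISION"])  # -> Home
print(len(rows))            # -> 2
```

For the real file, replace the `io.StringIO(sample)` wrapper with `open("post-operative.tsv")` (the actual file name is whatever you saved the download as).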

Please read the Decision AI Google Sheets Add-on webpage for more info about how to use the software.
You can try training the AI yourself with the free Decision AI Google Sheets Add-on! The Add-on contains a fully functional AI, but the AI model training time is limited to a maximum of 6 minutes (enough for processing several thousand input records).

After feeding the data to Decision AI (in a simple Google spreadsheet) and training the model, the AI reports a model accuracy of around 93%. This means that the model learned by the AI fits 93% of the input data; in other words, the AI will make a good decision in 93% of the cases. This is quite good accuracy, especially considering the relatively small number of records, but it could still be improved by adding more data or even more attributes!

Decision AI Google Sheets Add-on AI model
source: Decision AI Google Sheets Add-on

A subset of the AI model visualized by Decision AI (Google Sheets Add-on) can be seen above. How the AI develops this model is explained here: INTRODUCTION TO ARTIFICIAL INTELLIGENCE MODELING.

The reason for building such a hierarchical tree model is that the AI can then find the requested answer very quickly by traversing the tree from top to bottom.
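This top-down lookup can be illustrated with a toy tree. The structure, split attributes and the simplified high/low COMFORT values below are hypothetical, not the actual Decision AI model:

```python
# A hypothetical decision tree stored as nested dicts: internal nodes
# hold an attribute name and one subtree per attribute value; leaves
# hold the final discharge decision.
tree = {
    "attr": "CORE-STBL",
    "branches": {
        "unstable": "IC",
        "stable": {
            "attr": "COMFORT",
            "branches": {"high": "Home", "low": "GC"},
        },
    },
}

def ask(node, patient):
    """Traverse the tree from top to bottom until a leaf is reached."""
    while isinstance(node, dict):
        node = node["branches"][patient[node["attr"]]]
    return node

print(ask(tree, {"CORE-STBL": "stable", "COMFORT": "high"}))  # -> Home
print(ask(tree, {"CORE-STBL": "unstable"}))                   # -> IC
```

Each question only requires as many comparisons as the tree is deep, which is why the lookup is fast even for large models.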

A simple visual form could be presented to a nurse, who enters the appropriate attributes and pushes the Ask AI button to get the answer from the AI. The AI in the Decision AI Google Sheets Add-on can answer several questions (each presented as an attribute set) entered into a Google spreadsheet at once (this option is also available in the desktop version).

As you can see, training the AI and asking it a question is very simple. Anybody with some basic computer knowledge can operate the AI! The complex machine learning algorithm is hidden and most of the parameters are selected automatically. Another important advantage of Decision AI is that it can learn any type of problem from any discipline or business sector; nothing needs to be changed.

The importance of the collected data

The capabilities of the AI, such as the accuracy of its decisions, depend entirely on the input data used to train it! The careful selection of appropriate attributes and data records is therefore very important.

If you look at the data you will see that only one case is included where the patient has to be transferred to intensive care (IC). This could of course cause a problem. You should make sure that each decision is represented by a representative number of occurrences in the data set! Remember that the AI needs to learn a specific problem or phenomenon, and learning something well requires enough information. What is 'enough' is sometimes obvious (all possible combinations covered), but for more complicated models it must be determined by testing the AI model. Training and then testing the AI model are both very important steps in the AI learning process!
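Checking the class balance before training is a one-liner in most environments. A sketch (the label counts below are made up for illustration; the text above only states that the real data contains a single IC record):

```python
from collections import Counter

# Illustrative decision labels as they might appear in the training data
decisions = ["Home"] * 60 + ["GC"] * 29 + ["IC"] * 1

counts = Counter(decisions)
print(counts)  # -> Counter({'Home': 60, 'GC': 29, 'IC': 1})

# Flag classes that are too rare to be learned reliably
# (the threshold 5 is an arbitrary illustrative cut-off)
rare = [c for c, n in counts.items() if n < 5]
print(rare)  # -> ['IC']
```

Any class that shows up in the `rare` list needs more data records before the trained model can be trusted for that decision.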

Case 2: Breast Cancer Diagnosis Process

Introduction 

The second case is more complicated. What we will improve here is the process of determining whether a patient has breast cancer. The patient goes through several process steps, one of which produces a digitized image of a breast mass; the image is analyzed by computer and the so-called cell nucleus characteristics are measured and recorded. By studying and comparing the cell nucleus characteristics of many patients, with and without cancer, and feeding the collected data to an AI model, the AI can learn which characteristics indicate cancer. The necessary AI training data attributes are chosen by specialists and computed from a digitized image of a fine needle aspirate (FNA) of a breast mass.

Building and using an AI model in the decision process not only decreases process time significantly, it also makes the process more reliable because of the complicated attributes/measures used in the decision making. Another advantage is that the input data could be fed to the AI automatically, eliminating a very time-consuming manual process step.

The AI training data

A subset of the collected data can be seen in the table below, together with two digitized images showing the cell nuclei.
Each record consists of a series of attributes and the final diagnosis: whether the patient with these attributes has cancer (a malignant tumor) or not. The aim is to collect all possible combinations of the attributes so that the AI can be trained well and can then decide very accurately whether a patient has breast cancer.

Breast Cancer Diagnosis data set

The different attributes in the data are described below:
  • Column 1
    Diagnosis: Malignant = 1, Benign = 2
  • Columns 2-31
    Ten real-valued features are computed for each cell nucleus:
    a) Radius (mean of distances from center to points on the perimeter)
    b) Texture (standard deviation of gray-scale values)
    c) Perimeter
    d) Area
    e) Smoothness (local variation in radius lengths)
    f) Compactness (perimeter^2 / area - 1.0)
    g) Concavity (severity of concave portions of the contour)
    h) Concave points (number of concave portions of the contour)
    i) Symmetry
    j) Fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features.
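The three-way aggregation described above (10 features × 3 statistics = 30 columns) can be sketched for a single feature as follows; the five radius values are made up for illustration:

```python
import statistics

def summarize_feature(values):
    """Aggregate one per-cell-nucleus feature into the three per-image
    numbers described above: mean, standard error, and 'worst'
    (the mean of the three largest values)."""
    mean = statistics.fmean(values)
    std_err = statistics.stdev(values) / len(values) ** 0.5
    worst = statistics.fmean(sorted(values, reverse=True)[:3])
    return mean, std_err, worst

# E.g. the radius measured for five nuclei in one image (made-up values)
mean, se, worst = summarize_feature([10.0, 12.0, 11.0, 14.0, 13.0])
print(mean, worst)  # -> 12.0 13.0
```

Applying this to each of the ten per-nucleus features yields the 30 numeric columns of the data set.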

Digitized images with the cell nucleus present, source [2]

The data can be downloaded here: Breast Cancer Diagnosis data set. The data file has a simple tab-separated format which can also be read by Google Sheets or any other spreadsheet software.

Because of the large amount of continuous numerical data, this case is better modeled with the numerical AI model available in the AI-TOOLKIT but not in the Google Sheets Add-on. You can, however, also train an AI model with the Decision AI Google Sheets Add-on by defining the numerical attributes as 'Number'. Read the Decision AI web page for more info about how to do this.

In order to use the fully numerical AI model, all attributes need to be converted to numerical values. In our case there is only one non-numerical attribute: the decision variable, i.e. the diagnosis of whether the patient has breast cancer. The two possible options can simply be converted to Malignant = 1, Benign = 2.
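This conversion is a simple dictionary lookup over the diagnosis column (the three sample labels below are illustrative):

```python
# Convert the textual diagnosis column to the numeric coding used here
# (Malignant = 1, Benign = 2); all other columns are already numeric.
CODING = {"Malignant": 1, "Benign": 2}

diagnoses = ["Benign", "Malignant", "Benign"]
encoded = [CODING[d] for d in diagnoses]
print(encoded)  # -> [2, 1, 2]
```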

After preparing the input data in the appropriate format (tab-separated values), the AI model parameters need to be optimized. This can be done automatically by the software by executing the Optimize SVM Parameters command from the menu, which first reads the input data prepared in the previous step. The AI-TOOLKIT then reports the best parameter combination for the input data and the type of AI model.
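The internals of the Optimize SVM Parameters command are not documented here, but conceptually such an optimization is a search over candidate parameter combinations, each scored by validation accuracy. A pure-Python sketch with a stand-in scoring function (a real implementation would train and validate an SVM for each (C, gamma) pair; the grid values and the peak at (10, 0.1) are made up):

```python
import itertools

def validation_accuracy(C, gamma):
    """Stand-in for 'train an SVM with (C, gamma) and measure accuracy
    on held-out data'; a made-up function peaking at C=10, gamma=0.1."""
    return 1.0 - abs(C - 10) * 0.01 - abs(gamma - 0.1)

grid_C = [0.1, 1, 10, 100]
grid_gamma = [0.001, 0.01, 0.1, 1]

# Try every combination and keep the best-scoring one
best = max(itertools.product(grid_C, grid_gamma),
           key=lambda p: validation_accuracy(*p))
print(best)  # -> (10, 0.1)
```

The reported `best` pair is what you would then enter into the model settings before training.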

After entering the optimal model parameters in the settings, the AI model can be trained. When the AI has finished learning the problem, it reports the accuracy of the model (around 100% here); the AI will be able to predict correctly whether the patient has breast cancer in most cases. This is very good accuracy, but do not forget that the model still needs to be tested with an appropriate number of attribute sets/data records in order to make sure that the AI has learned enough about the phenomenon!

The trained AI model can be used manually for making decisions or, as mentioned earlier, the input data could be fed to the AI model automatically and the results collected automatically. The AI algorithm could even be integrated into different digital devices to make a fully automatic analysis possible.

References:

1. Post-Operative Patient Data Set: Sharon Summers, School of Nursing, University of Kansas Medical Center, Kansas City, KS 66160; Linda Woolery, School of Nursing, University of Missouri, Columbia, MO 65211.

2. Breast Cancer Wisconsin (Diagnostic) Data Set: Dr. William H. Wolberg, General Surgery Dept., University of Wisconsin, Clinical Sciences Center, Madison, WI 53792.

Conclusion

As we have seen in the two cases, an AI (machine learning) software model can be very useful in the improvement of business processes. The techniques explained in this paper can be used not only in the healthcare sector but in many other sectors too! There are two important considerations while using an AI model:
  1. The attributes and the data records (attribute sets) used to train the AI model are very important. The capabilities of the AI will depend on the data it gets for learning a specific phenomenon. You can of course always add more data and/or attributes and re-train the AI.
  2. Extensively testing the AI is also very important in order to make sure that the AI is trained well in all aspects of the studied phenomenon.
If you want to read more about how the AI works, see: INTRODUCTION TO ARTIFICIAL INTELLIGENCE MODELING.
The AI-TOOLKIT: https://ai-toolkit.blogspot.com


