In the area of heart disease, researchers at Asan Medical Center in Seoul presented an AI that can accurately predict ventricular tachycardia one hour before it occurs.
This improved patients' chances of surviving cardiac arrest, but it required manual human effort to configure an artificial neural network around hand-selected features.
Researchers at Vuno went further and developed DeepEWS, a deep learning system that uses a recurrent neural network (RNN) to learn seven types of information from each individual patient.
An RNN was chosen because the researchers wanted to predict cardiac events up to 24 hours in advance while capturing variation in the data over time and reducing false alarms.
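The published work does not include model code, but the idea of an RNN scanning a patient's vital-sign sequence and emitting a risk score can be sketched as follows. Everything here is illustrative: the network size, the input features, and the randomly initialized weights are placeholders, not those of DeepEWS.

```python
import math
import random

random.seed(0)

def rnn_risk_score(sequence, hidden_size=8):
    """Minimal RNN: scan a time series of vital-sign vectors and
    emit a risk score in (0, 1). Weights are random placeholders;
    a real system would learn them from labeled patient records."""
    input_size = len(sequence[0])
    # Randomly initialized weights (illustrative only).
    W_in = [[random.uniform(-0.5, 0.5) for _ in range(input_size)]
            for _ in range(hidden_size)]
    W_h = [[random.uniform(-0.5, 0.5) for _ in range(hidden_size)]
           for _ in range(hidden_size)]
    w_out = [random.uniform(-0.5, 0.5) for _ in range(hidden_size)]

    h = [0.0] * hidden_size
    for x in sequence:  # one step per measurement time
        h = [math.tanh(sum(W_in[i][j] * x[j] for j in range(input_size)) +
                       sum(W_h[i][j] * h[j] for j in range(hidden_size)))
             for i in range(hidden_size)]
    # A sigmoid over the final hidden state gives a probability-like score.
    z = sum(w_out[i] * h[i] for i in range(hidden_size))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical hourly readings: [heart rate, systolic BP, respiratory rate]
vitals = [[88, 120, 16], [95, 115, 18], [110, 100, 22]]
score = rnn_risk_score(vitals)
```

The key property shown here is that the hidden state `h` carries information forward across time steps, which is what lets an RNN react to trends rather than to a single snapshot of vitals.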
Similarly, Cardiogram applied deep learning to the heart-rate sensor in wearable devices to identify atrial fibrillation and atrial flutter, since the distinctive rhythm of these arrhythmias makes an irregular pulse recognizable.
The wearable device's ability to detect irregular pulses was evaluated with two criteria: Sequence F1, which measures whether arrhythmia episodes are identified at the correct times, and Set F1, which measures whether arrhythmias are detected anywhere in the full recording.
As a result, the deep learning model showed markedly higher performance on both Sequence F1 and Set F1 than the cardiologists it was compared against.
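F1 combines precision and recall into a single number, which is why the study could summarize episode-level and recording-level detection each as one score. A minimal computation, using made-up detection counts rather than the study's figures:

```python
def f1_score(true_positives, false_positives, false_negatives):
    """F1 is the harmonic mean of precision and recall."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return 2 * precision * recall / (precision + recall)

# Example: 80 correctly flagged arrhythmia episodes,
# 10 false alarms, 20 missed episodes (hypothetical counts).
score = f1_score(80, 10, 20)
print(round(score, 3))  # 0.842
```

Because F1 punishes both false alarms and missed episodes, it fits the study's twin goals of timely detection and few false warnings.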
This research also showed that the AI could readily distinguish two types of atrioventricular block that even most professional cardiologists have difficulty telling apart.
If data collection (electrocardiograms, heart rate, and so on), analysis, and classification can be performed continuously through wearable devices such as the Apple Watch, highly accurate prediction of heart disease will become feasible.
Today, breathing, temperature, heart rate, blood sugar, and other bodily signals can be analyzed by AI to indicate or predict our health status and diseases.
Sepsis, a common cause of death among hospital patients, is a prime example: it can be recognized accurately from changes in temperature, a drop in blood pressure, reduced responsiveness, and an elevated breathing rate.
According to researchers at the University of Ontario Institute of Technology, premature infants who are about to develop sepsis show reduced heart rate variability.
For intensive care patients, TREWScore (Targeted Real-Time Early Warning Score) was developed from EHR big data and could predict septic shock about 28.2 hours in advance.
Additionally, Medtronic and IBM Watson introduced Sugar IQ, an AI that lets diabetic patients continuously manage and predict their blood sugar levels.
Sugar IQ offers insights for controlling blood sugar after analyzing all monitored glucose readings and variations in insulin levels.
Once it has identified the patterns in this information, it helps patients build better habits by providing guidance tailored to the individual.
Although medical data is complex, AI can analyze it and surface new insights, which in turn can predict future disease onset, reduce readmission rates, and lower medical expenses.
The ACC/AHA guideline is not a perfect model for predicting disease, since it does not account for an individual's lifestyle, habits, or personal risk factors.
Because AI can surface disease risk factors without prejudice or preconception, it can deliver more accurate results than the existing standard guideline.
Researchers at the University of Nottingham in England found that the AI-driven analysis identified ethnicity, mental disorders, and oral corticosteroid use as critical risk factors, whereas the previous model emphasized diabetes.
When AI, big data, and deep learning are combined, they can produce more accurate results than humans, who are limited in how precisely they can analyze data.
In the past, some researchers expected AI to replace professional physicians, since it works from big data and can derive more accurate results than humans, without bias.
Machine learning, which learns from data through mathematical models and rule-setting much as a human does, is now commonly used for filtering spam and for face and voice recognition.
Since AI continues to expand into new areas and performs critical tasks faster and more precisely than humans, unemployment is becoming a serious concern.
Deep learning, which has advanced rapidly since around 2010, is built on artificial neural networks; like the human brain, it stacks several layers and algorithms so that learning can proceed in depth.
As the amount of high-quality data in medicine has grown and accumulated, deep learning has become well suited to the medical field.
Artificial intelligence can be classified into three types (narrow, general, and super), all of which focus on solving problems that humans find difficult.
Compared with narrow AI, general AI can think about and analyze information or data the way a human can.
Super AI has not yet been realized, but it would be able to improve itself beyond what humans can achieve with their own minds, which is why it is controversial in our society.
In today's medical community, physicians and researchers focus on developing AI's abilities in pattern recognition, memorization, and knowledge classification.
The best way to develop AI is to combine it with human abilities and derive new value; if people can regulate AI and apply rules to its use in medicine, AI applications can work effectively.
Google has demonstrated AI technology that predicts individual patients' treatment outcomes by analyzing their electronic health records (EHRs) with a deep learning system.
EHRs were difficult to analyze because they contain many complex kinds of data: large numbers of patients, different types of diseases, prescriptions, and surgical operations.
Moreover, because some data is missing from EHRs, limitations remain in applying deep learning to them.
Until deep learning matured, physicians had to address this by hand-selecting critical cases from the full EHR database to raise the prediction model's specificity.
Once deep learning could learn from the entire EHR and identify the important signals on its own, it let physicians predict a patient's status during or after treatment, and it could even suggest the diagnosis at discharge.
Because deep learning could recognize the important signals in the full EHR data, the researchers found that the AI treated inpatient mortality as the single most critical factor.
It also made the prediction model easier to "scale up", since deep learning could omit steps such as preparing and adding supplemental curated data.
The AI technology Google presented is highly effective for predicting and preventing patient risk, since deep learning can analyze complex medical data and provide forward-looking results.
Alongside the study that evaluated diabetic retinopathy by applying deep learning to retinal fundus images, predicting cardiovascular risk factors has also become a focus in medicine.
Since retinal fundus images reveal distinctive features of the optic disc and blood vessels, and there is a correlation between heart disease and retinal disease, deep learning can use them to predict cardiovascular risk factors.
Other parameters such as age, gender, blood pressure, body mass index, glucose, and cholesterol levels strongly influence the phenotype of retinal images and can provide additional risk signals.
All of these additional signals can be derived quickly and cheaply from retinal images, which makes the approach very efficient.
To identify and predict the risk factors, the researchers highlighted the anatomical regions of the retina that the model relied on for each prediction.
Blood vessels were highlighted when predicting risk factors such as age, smoking status, and systolic blood pressure (SBP).
For predicting HbA1c, the perivascular surroundings were highlighted, and for gender, primarily the optic disc.
For other predictions, such as diastolic blood pressure and BMI, the circular border of the image was highlighted, suggesting that those signals are distributed more diffusely across the image.
Before deep learning arrived in ophthalmology, diabetic retinopathy had to be evaluated and treated entirely through human effort.
IDx-DR uses an algorithm with two operating points (one tuned for high specificity, one for high sensitivity) to identify diabetic retinopathy and assess a patient's overall status.
Referable and non-referable diabetic retinopathy were evaluated against two clinical validation sets (EYEPACS-1 and MESSIDOR-2), with ophthalmologists confirming whether the model performed correctly on them.
Achieving high sensitivity and specificity for referable diabetic retinopathy (RDR) requires multiple grading passes by convolutional neural networks together with large data sets.
Although overall sensitivity was lower than expected, FDA approval of the system still makes it possible to help more patients suffering from the disease.
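The two operating points described above amount to two thresholds applied to the same model output. A hedged sketch of this idea follows; the threshold values and the probability are illustrative inventions, not IDx-DR's actual settings.

```python
def classify(probability, threshold):
    """Flag an image as referable diabetic retinopathy when the
    model's probability exceeds the chosen operating threshold."""
    return probability >= threshold

# Illustrative operating points: a low threshold catches more disease
# (high sensitivity), while a high threshold makes positive calls more
# trustworthy (high specificity). Neither value comes from the device.
HIGH_SENSITIVITY = 0.30
HIGH_SPECIFICITY = 0.70

model_output = 0.55  # hypothetical probability for one fundus image
print(classify(model_output, HIGH_SENSITIVITY))  # True  -> refer for review
print(classify(model_output, HIGH_SPECIFICITY))  # False -> no referral
```

Moving the threshold trades sensitivity against specificity without retraining the model, which is why a single algorithm can ship with both operating modes.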
The reference standard used for this study came from ophthalmologist graders, so the model cannot find subtle traits that most ophthalmologists would themselves fail to identify.
The most common trend in medical AI today is the use of deep learning to analyze medical image data.
Deep learning is the process of training a neural network so that the algorithm in effect programs itself, learning from a large set of examples of the desired behavior.
It is an important resource for our company, as we are developing products that detect cardiovascular risk factors through retinal fundus image analysis.
With deep learning, physicians can save the time and effort of identifying relationships between one disease and another, and can predict outcomes after diagnosing a patient.
Convolutional neural networks (CNNs) and recurrent neural networks (RNNs) are the most mature forms of artificial neural network, specialized for analyzing images and time-varying data, respectively.
The concept of deep learning comes from the way signals pass between neurons in the brain: the neurons together form a connected neural network that can store large amounts of data.
In an artificial neural network, each connection is embodied as a weight, and learning occurs as the weights are systematically adjusted until they form a model that explains the data.
After input data passes through the activation functions and yields an output, that output is compared with the correct answer.
If there is an error between the derived output and the correct answer, the weights are adjusted to produce a more accurate result.
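The loop just described (forward pass, compare with the answer, adjust the weights) can be sketched for a single sigmoid neuron. This is plain gradient descent on a toy OR-like task, not any particular medical model:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Toy data: learn OR-like behavior from two binary inputs.
samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 1.0

for _ in range(1000):
    for x, answer in samples:
        # Forward pass: weighted sum through the activation function.
        output = sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias)
        # Error between the derived result and the correct answer.
        error = output - answer
        # Adjust weights in the direction that reduces the error.
        weights[0] -= learning_rate * error * x[0]
        weights[1] -= learning_rate * error * x[1]
        bias -= learning_rate * error

predictions = [round(sigmoid(weights[0] * x[0] + weights[1] * x[1] + bias))
               for x, _ in samples]
print(predictions)  # [0, 1, 1, 1]
```

A deep network repeats this same error-driven weight adjustment across many layers of neurons, which is where the "depth" of deep learning comes from.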
The most significant point about deep learning is that humans do not have to specify features beforehand, because the network can learn the features of the data by itself.
In image classification, AlexNet advanced optical recognition through a deep CNN and achieved the lowest error rate (16.4%) in its competition.
In medicine, deep learning, which is mostly based on CNNs, often produces more efficient interpretation results than professional physicians.
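The core operation behind a CNN is convolution: sliding a small weight kernel over the image and summing elementwise products to detect local features such as edges. A minimal 2D convolution in plain Python (the kernel here is a generic vertical-edge detector, not a learned weight from any medical model):

```python
def convolve2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image
    and sum elementwise products at each position."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + u][j + v] * kernel[u][v]
                 for u in range(kh) for v in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]

# A tiny "image" with a bright right half, and a vertical-edge kernel.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[-1, 1],
          [-1, 1]]

print(convolve2d(image, kernel))  # [[0, 2, 0], [0, 2, 0]]
```

The output is large only where the brightness changes, which is how stacked convolutional layers build up from edges to the complex patterns a diagnostic model relies on.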
The most important factors when researchers judge the value of a deep learning application are the accuracy of its test results, convenience to patients, and reduction of medical costs.
A representative example of applied deep learning is the mammography product from Zebra Medical Vision, which evaluates breast cancer by interpreting X-ray images.
When researchers examined and compared the mammography results, the AUC reached a high 0.922, and specificity and sensitivity were similar to professional radiologists' evaluations.
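AUC (area under the ROC curve) measures how well a model's scores rank positive cases above negative ones, which is what a figure like 0.922 summarizes. A minimal computation from scores and labels; the values below are made up, not the study's data:

```python
def auc(scores, labels):
    """AUC equals the probability that a randomly chosen positive
    case is scored higher than a randomly chosen negative case
    (ties count as half)."""
    positives = [s for s, y in zip(scores, labels) if y == 1]
    negatives = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n)
               for p in positives for n in negatives)
    return wins / (len(positives) * len(negatives))

# Hypothetical model scores for six mammograms (1 = cancer present).
scores = [0.9, 0.8, 0.35, 0.6, 0.3, 0.1]
labels = [1,   1,   1,    0,   0,   0]
print(round(auc(scores, labels), 3))  # 0.889
```

An AUC of 1.0 would mean every diseased case outranks every healthy one, while 0.5 is chance level, so 0.922 indicates strong ranking ability.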
This research showed that some tasks in medicine could be handled by AI-based machines rather than remaining solely with professional physicians.