Artificial Intelligence: 2017 A Health Odyssey
It seems as if artificial intelligence (AI) has been imminent for the last 30 years: science fiction led us to believe it would be advanced and ubiquitous. The era of HAL from 2001: A Space Odyssey has been and gone, yet Amazon's Alexa isn't in any danger of refusing to open the pod bay doors anytime soon.
This image of the future of AI was dreamt up at Dartmouth College in the 1950s and is what we would now call general AI: complex machines that would become companions in our everyday lives. It may have spawned some fantastic characters for TV and film, but in reality AI hasn't developed into anything like that.
Since 2012 we have witnessed an AI renaissance that has captivated a portion of Silicon Valley. One of the huge breakthroughs of 2016 was Google's AlphaGo and its stunning defeat of South Korea's Lee Sedol at the board game Go. So what? Deep Blue beat chess grandmaster Garry Kasparov in 1997, and that didn't revolutionise AI. How does a computer that beats people at a 2,500-year-old Chinese board game improve healthcare in the developing world?
Simple deep learning
A mysterious player named 'Master' then started beating the world's best Go players online, including the world number one, Ke Jie, twice; it was later revealed to be an updated AlphaGo. Effectively, AlphaGo coached itself and self-improved by playing others. The same logic can be applied to millions of images: an image can be split up into tiny pieces and analysed just like any other piece of data, and the AI can assign a probability to the likely content of that image.
For example, take my colleague's cat, Tabs. Based on its analysis of an image of Tabs, the AI can assign a 'probability vector': cat at 75 per cent, dog at 10 per cent, other at five per cent. The network architecture tells the AI that it is in fact right and Tabs is a cat, and the AI has therefore improved its ability to assess whether an image contains a cat. Google's X lab used millions of YouTube videos to improve this kind of system by having an AI correctly describe images of cats, effectively adding the big data element and putting the 'deep' in deep learning. This is why your image searches through Google are now scarily accurate.
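To make the 'probability vector' idea concrete, here is a toy sketch of how raw scores from a network are turned into probabilities. This is not DeepMind's or Google's actual code; the labels and scores are invented for illustration, and a real network would compute the scores from the image's pixels.

```python
import math

# Hypothetical class labels for one image.
LABELS = ["cat", "dog", "other"]

def probability_vector(logits):
    """Turn raw network scores into probabilities that sum to 1 (softmax)."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up raw scores: the network is most confident this image is a cat.
scores = [2.5, 0.5, -0.2]
probs = probability_vector(scores)
best = LABELS[probs.index(max(probs))]
print(best)  # the most probable label for the image
```

Training then nudges the scores so that, over millions of labelled examples, the probability assigned to the correct label keeps rising; that feedback loop is the 'self-improvement' described above.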
The same principles are being applied to medical diagnosis. Google's DeepMind is already being put to the task, looking at millions of images of eyes to help diagnose early symptoms that may lead to blindness. The approach can be applied to any image: MRIs, CT scans, X-rays or a photo from a camera phone.
You have a diagnostician that doesn't sleep, doesn't need paying and has at times proven more accurate than its human counterpart. Machine learning has the potential to handle a huge proportion of diagnostics in an affordable and scalable way. Best of all, with images there are no language barriers, as the approach eliminates the need to translate medical information into every local language.
This principle is already being applied, with encouraging initial success. Overburdened healthcare infrastructure is a key challenge for developing countries. In India, where the doctor-to-patient ratio is 1:1,681, Manipal Hospital has recruited the help of IBM's Watson to support its oncology department in diagnosis.
Another key challenge in developing countries is simply access in remote areas. Excelscope in Uganda is using a smartphone app combined with a few 3D-printable mechanical parts to create a tool that lets remote clinics diagnose malaria effectively. This moves beyond triage to actually diagnosing patients, providing cheap and reliable healthcare through AI. Empowering remote clinics and healthcare professionals to deliver top-line diagnostics with a smartphone is truly incredible.
However, with the technology in its current state it is important to be aware of its shortcomings. Everything I've described so far plays to AI's strengths: narrow, repetitive tasks that deal with the same kind of information over and over again. Where it currently struggles is with broader-scope diagnosis.
The health professional-patient relationship hasn’t changed much since Hippocrates. You go to the nurse or doctor, point at where it hurts and they tell you what to do about it. Any healthcare professional will tell you a patient’s own description of their symptoms can range from the ridiculous to the sublime. AI struggles with the range of things we can say about the subject matter.
Spend five minutes with Siri or OK Google and it's the first thing you notice: it may be able to do what you want, but unless you ask within its own parameters it's going to struggle. The same applies to healthcare diagnosis. Simply telling a chatbot your medical problems as a lay person is even trickier than asking Siri to play your running playlist. Combine this with the 20 local languages spoken in a typical developing-country context and IBM's Watson doesn't stand a chance.
AI isn't set to replace general diagnosis quite yet and may never replace the emotional element. It currently has the capability to cover two problems: access to information and triage. Organisations like Your.MD are developing chatbots that do exactly that: you enter your symptoms, it asks you some further questions, and it gives you the most likely condition and recommendations for treatment. Of course there is a danger in self-diagnosis, but these days most people google their symptoms before seeing a professional. Chatbots, developed correctly, can give people advice on how to alleviate their symptoms as well as on which type of healthcare professional they need. In isolated communities this could be lifesaving triage, relieving pressure on oversubscribed infrastructure.
The development cost has come down massively, with off-the-shelf natural language processing tools (AI that understands the variety of human speech) helping you build your own chatbot. The interfaces themselves are cheap and familiar: Skype and Facebook Messenger have native support, and other services link the back end to SMS. You do not need the power of a supercomputer to develop a smart chatbot; you could run it from a relatively simple back end or, through the magic of cloud computing, make it relatively powerful.
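The triage flow described above can be sketched in a few lines. This toy example is purely illustrative: it is not how Your.MD or any real service works, the keyword rules and advice strings are invented, and production systems use far richer natural language processing than simple keyword matching.

```python
# Invented keyword-to-advice rules; real services map free text to conditions
# with trained NLP models, not a lookup table.
TRIAGE_RULES = {
    "fever": "Drink fluids and rest; see a nurse if it lasts more than three days.",
    "chest pain": "Seek urgent care from a doctor as soon as possible.",
    "headache": "Rest and hydrate; speak to a pharmacist if it persists.",
}

def triage(message):
    """Return the first matching piece of advice, or ask for more detail."""
    text = message.lower()
    for keyword, advice in TRIAGE_RULES.items():
        if keyword in text:
            return advice
    return "Could you describe your symptoms in a little more detail?"

print(triage("I've had chest pain since this morning"))
```

Wired to Messenger or SMS as the front end, even a back end this simple could run on modest hardware, which is what makes the approach attractive for oversubscribed health systems.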
The premise of AI may be 60 years old, but the buzz around it feels new and exciting, and the innovation currently surrounding it is huge. With the likes of Google and IBM pouring money into it, it's only a matter of time before it delivers real change to healthcare.