Just a few years ago, the general public’s perception of AI was the stuff of science fiction. Now it’s an everyday reality: from facial recognition on selfies to automated online customer service, we interact with it every day, whether we know it or not.
The use of AI in daily life comes down to data: the sheer amount now available on consumer behavior, the accessibility of computational power, and advances in AI techniques and machine learning algorithms. With the amount and variety of data available to collect growing more rapidly than ever before, AI systems are becoming smarter, more intuitive, and more lifelike.
The healthcare sector has always embraced technology, and it has embraced developments in AI in the same way. From robotic-assisted surgery to wearable tech to early detection, we are seeing the positive impact of AI on patient outcomes. For example, at HCA we have used the mass of data points at our disposal to develop an algorithm for the early detection of sepsis.
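To give a flavor of what an early-detection screen involves, here is a deliberately simplified, rule-based sketch using the published SIRS (systemic inflammatory response syndrome) criteria. This is purely illustrative and is not HCA’s algorithm; real early-warning systems are typically machine-learning models trained on far richer streams of patient data.

```python
# Illustrative only: a simplified rule-based sepsis screen using the
# published SIRS criteria. Not HCA's proprietary algorithm.

def sirs_score(temp_c, heart_rate, resp_rate, wbc_k):
    """Count how many SIRS criteria a patient's observations meet."""
    score = 0
    if temp_c > 38.0 or temp_c < 36.0:   # abnormal body temperature (deg C)
        score += 1
    if heart_rate > 90:                  # tachycardia (beats/min)
        score += 1
    if resp_rate > 20:                   # tachypnea (breaths/min)
        score += 1
    if wbc_k > 12.0 or wbc_k < 4.0:      # abnormal white-cell count (x10^9/L)
        score += 1
    return score

def flag_for_sepsis_review(temp_c, heart_rate, resp_rate, wbc_k):
    """Two or more SIRS criteria triggers a clinical review for possible sepsis."""
    return sirs_score(temp_c, heart_rate, resp_rate, wbc_k) >= 2
```

For example, a patient with a temperature of 38.6 °C, heart rate 104, respiratory rate 22, and white-cell count 13.5 would meet all four criteria and be flagged, while normal observations would not. The point of replacing rules like these with a learned model is that the thresholds and their interactions are inferred from data rather than fixed in advance.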
Ultimately, in healthcare we are in the business of improving care and the quality of human life, and here AI has the potential to be life-changing, particularly as AI leaps from the narrow (weak) AI we see today to general (strong) AI that is self-learning, with the potential to outperform and even replace humans at cognitive tasks.
With that leap, which is happening at a pace far exceeding expectations, comes huge promise but also significant questions. Notably: where are we now in the development and control of this rapidly advancing AI technology? And where does the responsibility lie?
These questions are particularly pertinent to healthcare, where human compassion, judgement and ethics are so central.
It currently falls to individual companies, researchers and developers to ensure that the development of AI is conducted with ethical considerations in mind, placing the duty on them to “raise” this technology responsibly. This is what’s known as Citizen AI.
Why “Citizen” AI? The term reflects the shift in AI development from systems that are programmed to systems that learn. Businesses are being encouraged to view the development of AI as one would the raising of a child. As Accenture puts it, AI is here, ready to work alongside us, and needs to be recognized as a partner to our people in business. As AI’s capabilities grow, so too does its impact on people’s lives, further underlining the need for it to be “educated” in responsibility, fairness and transparency.
Raising responsible AI brings with it many of the same challenges faced in raising a child: teaching it right from wrong, recognizing and avoiding bias, and making autonomous decisions in the context of the input around it. Which, as we have just seen, is a more than complicated task. As with parenting, there is no rule book, and the way a child is raised, and the experiences and opportunities it is exposed to, will define its character, actions and abilities later in life. With all the best intentions, telling people they should raise their child to observe and understand certain human virtues does not mean they have the ability to do so.
This leads to a big question: how will we as a society set the appropriate level of responsibility for the businesses that build, evolve and manage systems with such massive potential impact on our lives? Especially as we look to leverage them further in key areas such as healthcare research and delivery.
That, of course, remains to be seen, but as the development of AI charges ahead, it’s clear this will become an increasingly important point of debate.