A Post-Pandemic Outlook on the Future of AI Research

The year 2020 will most likely play an instrumental role in how Artificial Intelligence (AI) evolves in the years to come. This concerns not only how the world perceives AI, for good or for bad, but also how the AI research community itself will change. The world is currently in a state of crisis, the COVID-19 pandemic, and we have yet to see the end of it or the lasting repercussions that may follow.

Where Are We?

A myriad of reports and articles discussing the socio-economic implications of the COVID-19 pandemic are published daily, for example [1]. The pandemic has also brought forth the rise of AI in ways that are both astonishing and disconcerting. Since the outbreak of COVID-19, AI has contributed significantly to the healthcare industry [2] through applications such as predicting new cases, drug discovery and more. But the pandemic has also inadvertently fuelled the world of surveillance [3] we live in today. Such rapid development and controversial use of AI raises concerns about our privacy and security in the future. Yuval Noah Harari, an Israeli historian and the author of Sapiens (2011), discusses [4] the future of humanity and poses some daunting questions about how we should respond to the expected impact of COVID-19. He argues that governments are using the pandemic as an excuse to abuse the state of emergency, imposing "under-the-skin" surveillance and, in essence, laying the groundwork for totalitarian regimes.

It is quite natural to wonder how this grim description of a post-pandemic world connects to the future of AI research. Yet it should be considered vital, even instrumental, in shaping that future.

We find ourselves in a thicket of strategic complexity, surrounded by a dense mist of uncertainty

– Nick Bostrom, Superintelligence: Paths, Dangers, Strategies

What Was and What Is?

Every industry evolves continuously, but certain trends and historical patterns have enabled us to forecast its future, at least to an extent. The AI industry does not stand alone; it is closely tied to a multitude of other industries, such as healthcare, business, agriculture and more. Adoption of AI by such industries has risen significantly in just the last decade, increasing the need to forecast AI's future impact and how it will inevitably shape our society. Books like Superintelligence (Nick Bostrom, 2014) and Human Compatible (Stuart J. Russell, 2019) have discussed this and presented their predictions. Stakeholders who fund the advancement of AI make decisions based on such predictions, and this in turn drives the AI research community.

But with the crisis we face today, the status quo is going to change. Looking at how AI research has been conducted since the start of COVID-19, the focus should be not only on the outcomes of a given study but also on the way such studies are conducted. For the sake of simplicity, this article focuses on AI in healthcare, but the arguments presented apply to AI research in general.

In Focus: AI in Healthcare

The astounding increase in the number of cases during this pandemic pushed the AI community, both academia and industry, to divert their resources toward providing any form of support that could be essential in the fight against the virus. AI research in medical imaging [5], specific to COVID-19, has enabled researchers as well as doctors to train and deploy predictive models that can contribute to patient diagnosis. There has also been an increase in drug discovery research aimed at identifying candidate treatments and vaccines that can be tested and distributed widely. Compared to conventional clinical trials, AI-assisted drug discovery is favourable because it speeds up the process of developing and testing new drugs in real time [2].
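To make the medical-imaging use case concrete, below is a minimal, illustrative sketch (in Python with PyTorch) of the kind of model involved: a pretrained CNN fine-tuned as a binary image classifier. The random tensors stand in for a real, curated X-ray dataset, and the two-class "findings / no findings" head is hypothetical; this is not a validated clinical model.

```python
# Illustrative sketch only: fine-tuning a pretrained CNN as a binary
# classifier, standing in for the COVID-19 imaging models discussed above.
import torch
import torch.nn as nn
from torchvision import models

# Placeholder data: 16 fake "X-ray" images (3x224x224) with binary labels.
images = torch.randn(16, 3, 224, 224)
labels = torch.randint(0, 2, (16,))

# Start from an ImageNet-pretrained backbone and replace the final layer
# with a 2-class head (hypothetical "findings" vs "no findings").
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few toy epochs on the placeholder batch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={loss.item():.3f}")
```

In practice, such a model would be trained on curated, expert-labelled scans and evaluated far more carefully than this toy loop suggests.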

So What’s the Issue?

Once the crisis is over, companies are expected to invest even more in AI research [6], and government bodies are expected to increase their involvement [7], using AI to plan strategies against future pandemics as well as to empower other industries that could benefit from it. Dana Gardner [8] discusses in his podcast how the data collected over the course of the pandemic will be a key factor in how AI shapes the post-pandemic world.

Despite the extensive amount of AI research produced in such a short time, much of it shares an inherent flaw: it is the product of black-box AI systems. Such systems spit out numbers and leave us humans to derive meaning from them. A majority of AI research focuses purely on final results (such as performance on a benchmark dataset) rather than on how the model arrived at them, especially the research conducted by AI start-ups. Even though this attitude was amplified by the urgency of the crisis, it has been around for quite some time. In applications such as generic image recognition, numbers alone may be enough to make sense of a model, but in human-centric applications such as healthcare, mere numbers are simply not enough. A high-accuracy model is not guaranteed to attain high efficacy in such areas, and we need to ask ourselves: will this style of research, a pursuit to train the highest-accuracy model in the shortest amount of time possible, continue in a post-pandemic world?
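A small, hypothetical example illustrates why raw accuracy can be a poor proxy for clinical efficacy. With a disease prevalence of 5%, a model that flags almost no one as positive still reports high accuracy while missing most of the patients who actually need care; the numbers below are invented purely for illustration.

```python
# Sketch of why "accuracy" alone can be misleading in healthcare.
# Hypothetical cohort: 1000 patients, 50 of whom are actually positive.
from sklearn.metrics import accuracy_score, recall_score

y_true = [1] * 50 + [0] * 950            # 5% prevalence
y_pred = [1] * 5 + [0] * 45 + [0] * 950  # model catches only 5 of 50 cases

print("accuracy:   ", accuracy_score(y_true, y_pred))  # 0.955
print("sensitivity:", recall_score(y_true, y_pred))    # 0.10
```

The model looks excellent by accuracy (95.5%) yet detects only 10% of the truly positive patients, which is exactly the gap between benchmark numbers and real-world efficacy.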

Explainable AI, commonly referred to as XAI, is research into developing interpretable systems that can explain their decision making in any given scenario. This is a step ahead of the current "black-box" AI systems.

This argument brings us to Explainable AI, an area of research that is still in its infancy. What has already been done cannot be changed, but we can and should learn from these past few months. The amount of data in the future, especially in industries such as healthcare, is going to explode, and how we handle this data and design new AI systems with it should be the crux of future AI research. Until this pandemic, the questions posed by most AI critics regarding the black-box nature of high-accuracy models were mostly hypothetical, the trolley problem being a classic example. But the decisions made by AI systems during the pandemic affected very real humans. We are clearly outside the hypothetical debate, and the need to tackle this issue is of utmost importance.
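As one concrete, hedged illustration of what an "explanation" can look like in practice, the sketch below applies permutation feature importance (via scikit-learn) to a black-box classifier trained on synthetic data. The feature names are hypothetical stand-ins for clinical variables, not real clinical findings, and permutation importance is only one of many XAI techniques.

```python
# Sketch: permutation feature importance as a simple, model-agnostic
# explanation for a black-box classifier on synthetic "patient" data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

feature_names = ["age", "temperature", "oxygen_saturation", "crp_level"]
X, y = make_classification(n_samples=500, n_features=4, n_informative=3,
                           n_redundant=0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffling one feature at a time and measuring the drop in test score
# indicates how much the model's predictions rely on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for name, mean in zip(feature_names, result.importances_mean):
    print(f"{name:>18}: {mean:.3f}")
```

Even this crude ranking gives clinicians something to interrogate ("why does the model lean so heavily on this variable?"), which is precisely what bare accuracy numbers cannot offer.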

How Do We Move Ahead?

Beyond making AI systems more explainable, at least to the stakeholders involved, we need new policies and legislation that dictate the plan of action for AI research during a crisis such as COVID-19. Such policies would introduce standards and guidelines for AI research, similar to those in other areas like healthcare and environmental sustainability, so that problems are tackled not merely ad hoc but with the societal and ethical implications of the research in question taken into account.

References:
[1] Nicola, M., Alsafi, Z., Sohrabi, C., Kerwan, A., Al-Jabir, A., Iosifidis, C., … & Agha, R. (2020). The socio-economic implications of the coronavirus pandemic (COVID-19): A review. International journal of surgery (London, England), 78, 185.

[2] Vaishya, R., Javaid, M., Khan, I. H., & Haleem, A. (2020). Artificial Intelligence (AI) applications for COVID-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research & Reviews.

[3] Yuan, S. (2020). How China is using AI and big data to fight the coronavirus. Al Jazeera.

[4] Harari, Y. N. (2020). The world after coronavirus. Financial Times, 20.

[5] Bullock, J., Pham, K. H., Lam, C. S. N., & Luengo-Oroz, M. (2020). Mapping the landscape of artificial intelligence applications against COVID-19. arXiv preprint arXiv:2003.11336.

[6] Global companies will invest more in AI post-pandemic. (2020, September 18). Smart Energy International.

[7] Hajj, J., Abdel Samad, W., Stechel, C., & Cordahi, G. (2020). How AI can empower a post COVID-19 world. Strategy&.

[8] Gardner, D. (2020, July 7). How data and AI will shape the post-pandemic future [Podcast].