What is AI and where did it come from?

Even though the term Artificial Intelligence is part of nearly everybody’s everyday vocabulary, most people would struggle to explain it. And even amongst those who can explain what AI is, many go silent if asked about the history of AI.

If you are one of those people, do not worry and keep reading.

This is the first post in a blog series about the history of AI. Over the next month, we will publish a number of articles dealing with the biggest milestones in the history of AI. This is your chance to learn something about a history that is not taught in school but has implications for our everyday life and the future. We will do our best to make this series understandable for readers with only limited knowledge of Artificial Intelligence, introducing concepts as they appear.

The Definition of Artificial Intelligence

Looking into the most well-known encyclopaedias and dictionaries, Artificial Intelligence is usually defined as the ability of a machine (a computer, a robot etc.) to perform intelligent tasks which simulate human thinking or would require some form of intelligence to be performed.

This in itself is not too surprising, as it is somewhat entailed in the term itself. But what lies behind this definition? How do Machine Learning, Neural Networks, and all the other increasingly popular terms fit into this picture?

To understand where we are now, it might be a good idea to start at the beginning. And in the case of AI, the beginning is surprisingly recent, although its foundations go back to the time of Aristotle.

Dartmouth Workshop Proposal Quote
The Dartmouth Workshop is now commonly referred to as the founding event of AI

The foundations of AI

There are a number of disciplines which are considered to form the foundation of AI. Without a doubt the oldest is Philosophy. In the time of the ancient Greeks, Aristotle’s syllogisms formed the basis of the field now known as Logic. Besides Logic, Philosophy also contributed to Artificial Intelligence through its questions about what knowledge is and where it comes from.

But the list of other disciplines which contribute noticeably to AI is long, spanning the Formal Sciences (Mathematics, Computer Science) as well as the Social Sciences (Psychology, Economics, and Neuroscience, itself an interdisciplinary field). This diversity shows that Artificial Intelligence is a somewhat special discipline: it does not fit entirely into the natural sciences, but it does not fit neatly into any other category either.

The Beginnings

The term AI first appeared in the proposal for a two-month workshop held at Dartmouth College in 1956:

“We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.”


This proposal was preceded by several works which are now considered highly influential for the founding of AI as a discipline in its own right. Examples include the Turing Test (1950) by Alan Turing, as well as his theory of computation (1936); Shannon’s information theory (1948); and Norbert Wiener’s cybernetics (1948). Surprisingly enough, from today’s point of view, the first neural networks also date from this period. McCulloch and Pitts showed how networks (which researchers would later call neural networks) can perform logical functions in their paper “A logical calculus of the ideas immanent in nervous activity” (1943).
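The core idea of a McCulloch-Pitts unit is simple enough to sketch in a few lines: binary inputs, fixed weights, and a firing threshold. Here is a minimal Python sketch of how such units can compute logical functions; the particular weights and thresholds are illustrative choices, not taken from the 1943 paper.

```python
# A minimal sketch of a McCulloch-Pitts unit: the neuron "fires" (outputs 1)
# when the weighted sum of its binary inputs reaches the threshold.

def mp_neuron(inputs, weights, threshold):
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def AND(a, b):
    # Both inputs must be active to reach the threshold of 2.
    return mp_neuron([a, b], weights=[1, 1], threshold=2)

def OR(a, b):
    # A single active input already reaches the threshold of 1.
    return mp_neuron([a, b], weights=[1, 1], threshold=1)

def NOT(a):
    # An inhibitory (negative) weight: an active input prevents firing.
    return mp_neuron([a], weights=[-1], threshold=0)
```

Chaining such units yields arbitrary Boolean circuits, which is essentially what McCulloch and Pitts demonstrated.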

The Dartmouth Workshop

“We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.” This last sentence from the above-quoted proposal would prove to be only the first of many overestimations in the field of AI. Despite the high ambitions, the Dartmouth workshop did not lead to any breakthroughs. However, it is now considered the founding event of Artificial Intelligence, and for this it has earned its place in the history of AI.

After the Summer

Since 1956, AI has seen ups and downs. The initial hype ended with the AI winter, when researchers came to realize that their high expectations could not be met. It went so far that the United Kingdom stopped AI programs at all but two universities. That this winter is over is obvious. Whether the time of overestimating AI’s possibilities is also over, however, remains to be seen.

Next in the Series…

Are you interested in a deeper dive into the history of AI? The next blog posts in this series will deal with the events preceding the Dartmouth workshop. What did Shannon’s information theory entail? What does the Turing Test actually test? And how are Pitts and McCulloch’s networks related to today’s neural networks?
Afterwards, we will continue our journey through the history of AI, picking up the story at the Dartmouth conference and eventually reaching the state of the art.

Follow us on LinkedIn or Facebook to stay up to date with new blog publications!





Russell, S. J., & Norvig, P. (2016). Artificial Intelligence: A Modern Approach. Pearson Education Limited.


Understanding the Importance of Attachment Theory in Social Robotics

A post-apocalyptic world where humans are nearly extinct and a humanoid robot is tasked with the mission of repopulating planet Earth with humans. This is not a figment of my imagination but the plot of a Netflix movie, “I Am Mother”. Discussing this intense thriller would be engaging, but unfortunately a bit tangential to the topic at hand. Instead, it is quite interesting to focus on one aspect of the movie: the relationship between the humanoid robot and the human child.

In the movie, the robot plays the role of a surrogate mother and caregiver to the newly born infant. The intriguing bond shared between them is the crux that will be explored in this article.

Attachment Theory and HRI

One of the defining characteristics of human beings, separating us from the other animals on this planet, is the social interaction amongst humans. A major aspect of survival depends on the social interactions one human has with another. Such interactions were fairly simple back in prehistoric times, but in the modern world they have evolved and taken on a complex form. Understanding social human interaction has accordingly been one of the major fields of neuroscience and psychology.

Attachment Theory is one such study of social interaction, exploring the attachment behaviour displayed by humans. John Bowlby, the psychiatrist responsible for the conception of this theory, shifted the classical account of infant attachment away from a stimulus (for example, food provided by a human caregiver) towards a more emotional connection with a human. This theory was confirmed to a great extent by Harry Harlow in his work with newly born monkeys (McLeod, 2017).

The need to understand human cognitive behaviour gave rise to the fields of Social Robotics and Human-Robot Interaction (HRI). These fields are, in some sense, quite similar to each other: HRI can be considered a subfield of social robotics whose main motivation is understanding human cognition through the interaction of humans with robots. Having emerged around the 1990s, HRI has gained a lot of recognition for its contribution to understanding human cognition by designing and testing robotic systems that interact dynamically with humans.

An arousal-based model controlling the behaviour of a Sony AIBO robot during the exploration of a children’s play mat was designed based on research in developmental robotics and attachment theory in infants. When the robot experienced new perceptions, the resulting increase in arousal triggered calls for attention to its human caregiver. The caregiver could choose either to calm the robot down by providing it with comfort, or to leave the robot to cope with the situation on its own. Once the robot’s arousal had decreased, it moved on to further explore the play mat. The study presented the results of two experiments using this arousal-driven control architecture. The first showed that such an architecture allows the human caregiver to greatly influence the learning outcomes of the exploration episode, with some similarities to a primary caregiver during early childhood. The second tested how human adults behaved in a similar setup with two different robots: one needy, often demanding attention, and one more independent, requesting far less care or assistance.
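The control loop described above can be sketched in a few lines. This is a hypothetical simplification, not the architecture from the actual study: the threshold, decay rate, and comfort effect are illustrative numbers chosen to show the idea that novelty raises arousal, comfort lowers it, and the behaviour switches at a threshold.

```python
# A minimal sketch (hypothetical parameters) of an arousal-driven controller:
# new perceptions raise arousal; above a threshold the robot calls for
# attention; caregiver comfort and natural decay bring arousal back down.

class ArousalController:
    def __init__(self, threshold=0.7, decay=0.05):
        self.arousal = 0.0
        self.threshold = threshold  # above this, the robot calls for attention
        self.decay = decay          # arousal slowly fades on its own

    def step(self, novelty, comforted=False):
        self.arousal += novelty     # new perceptions increase arousal
        if comforted:
            self.arousal *= 0.5     # comfort from the caregiver calms the robot
        self.arousal = max(0.0, self.arousal - self.decay)
        if self.arousal > self.threshold:
            return "call_for_attention"
        return "explore"            # low arousal: keep exploring the play mat
```

In this framing, the “needy” robot of the second experiment would simply be one with a low threshold (or slow decay), and the “independent” robot one with a high threshold.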

Long Term Dyadic Robot Relations with Humans

In Attachment Theory, the caregiver-infant relationship (Bowlby, 1958) is widely popular due to the paradigm shift in understanding how infants become attached to their mothers or caregivers and which factors play a role in it. This relationship was explored using a Sony AIBO robot, for which an arousal-based model was created to stimulate responses from human caregivers (Hiolle et al., 2012). The study successfully showed that the robot running the arousal-based model was able to elicit positive caregiving behaviour from the humans, rather than being left to cope with the situation on its own. The arousal-based model essentially made the robot either needy or independent, and the human caregivers’ responses were recorded for each of the behaviours portrayed by the robot.

While the above study dealt mainly with the dyadic relation of human and robot, the effects of long-term HRI and their association with Attachment Theory were studied by exploring factors such as attachment styles, formation, and dynamics (McDorman et al., 2016). That study proposed Attachment Theory as a somewhat generalised framework for understanding long-term HRI.

Influence of Human Attachment Patterns on Social Robotics

As mentioned before, the Sony AIBO experiment (Hiolle et al., 2012) succeeded in stimulating human caregiver responses, but it cast the human as the responding party in the human-robot relation, whereas it is also important to understand how a robot might behave as a response system based on a human’s actions. This aspect was explored as well: EMYS-type robots were set up to spend 10 days with humans with different attachment patterns, and the robots’ operation was assessed based on their responses to the various styles of attachment displayed by the humans (Dziergwa et al., 2018). These two studies in a way represent two sides of a coin, since understanding the behaviour of a social robot playing the “infant” as well as the “caregiver” role might provide more articulate knowledge of Attachment Theory and its association with HRI.

Importance of Attachment Theory in Social Robotics

Another study, involving human interactions with the PARO robot (Collins, 2019), explored Attachment Theory and HRI by drawing parallels with other forms of human interactions and bonds, such as those with other humans, animals, and objects. Although the results weren’t conclusive, it demonstrated how important Attachment Theory can be in understanding and developing HRI methodologies.

The Wrap-Up

In conclusion, multiple studies have shown the value of taking inspiration from Attachment Theory to better understand HRI and to develop cognitive models which follow its norms, such as increased attachment towards emotional stimuli rather than simple, materialistic stimuli (for example, food). Advances in HRI that take Attachment Theory into account show great potential for more successful assistive robots which can display personalised attachment behaviour towards humans.

Although studies similar to Harlow’s have not been attempted on humans, isolating them from other humans and placing them in the care of only robots, it poses an interesting question: might prolonged interaction with, and attachment to, a social robot reduce a human’s ability to create and retain other attachments with humans?


Bowlby, J. (1958). The nature of the child’s tie to his mother. International Journal of Psychoanalysis, 39, 350-371.

Hiolle, A., Cañamero, L., Davila-Ross, M., & Bard, K. A. (2012). Eliciting caregiving behavior in dyadic human-robot attachment-like interactions. ACM Transactions on Interactive Intelligent Systems (TiiS), 2(1), 3.

McLeod, S. A. (2017, Feb 05). Attachment theory. Simply Psychology. https://www.simplypsychology.org/attachment.html

McDorman, B., Clabaugh, C., & Mataric, M. J. (2016). Attachment Theory in Long-Term Human-Robot Interaction.

Dziergwa, M., Kaczmarek, M., Kaczmarek, P., Kędzierski, J., & Wadas-Szydłowska, K. (2018). Long-term cohabitation with a social robot: A case study of the influence of human attachment patterns. International Journal of Social Robotics, 10(1), 163-176.

Collins, E. C. (2019). Drawing parallels in human–other interactions: a trans-disciplinary approach to developing human–robot interaction methodologies. Philosophical Transactions of the Royal Society B, 374(1771), 20180433.

A world of pure imagination

VR Days Europe 2019 Lustrum Edition

“The eyes are the gates to the soul”
– is a commonly heard phrase at VR Days Europe 2019 in Amsterdam. At this 3-day event, people from all over the world – ranging from top CEOs to digital artists – gathered to share ideas about, and become inspired by, Virtual, Augmented, and Mixed Reality (VR, AR, MR). Setting aside the controversial question of whether something like a ‘soul’ truly exists, it is beyond doubt that visitors are indeed touched by the amazing technological advancements currently being made to make our wildest dreams come true. Examples include 5K virtual worlds and augmented cities, but also gamified virtual rehabilitation, where muscle memory is regained by virtually performing what is no longer possible in real life. As a consequence, the line between the biological and digital worlds becomes blurred, which may result in an entirely new reality where humans are no longer restricted by the limits of nature.

Me in VR
Become a bird that soars through the city, or a turtle exploring the deep ocean grounds.


Part 1: the dream

Communicating an idea or feeling can be challenging. Storytelling is a powerful approach to explaining abstract concepts. Stories are fun and can appeal to someone’s humanity so that listeners truly get the core message. Along these lines, the creation of digital realities has brought a whole new dimension to the experience of storylines, transforming passive listeners into active story-travelers. Just like that, the embodied narrative ‘The Line’ by Brazilian filmmaker Ricardo Laganaro would only make sense in VR. In this story about love and fear of change, the user is taken into the world of two miniature dolls – Pedro and Rosa – who are perfect for each other, but reluctant to break boundaries to overcome their limitations and live out their love story. During his talk, Laganaro explained how the experience transforms the user into a child, and then mirrors the emotional curve of the narrative in the user’s body.

As various creations passed by, it became evident how the digital revolution has given rise to a never-ending list of possibilities in storytelling – some using Artificial Intelligence as their shaper. Whereas one designer provides a fully catered experience, letting users closely follow character evolution in a multisensory environment, another lets them give birth to little creatures that have to hear little, personal stories in order to live on (e.g. Lucas Rizzotto’s ‘Where Thoughts Go’). As a consequence, users learn to open up emotionally and experience feelings of intimacy.



Part 2: living it.

At the hall that billed itself as ‘The Church of VR’, all the interactive narratives and games discussed could be tried out by visitors. Unfortunately, The Line and some other works I was keen to try turned out to be extremely popular, and fully booked (!) for the rest of the event. Luckily, there was enough to play with.



In the middle of the Church hall, visitors could watch a top-20 selection of 360-degree videos, which VR Days Europe claimed were made by the best creators this industry has to offer. Based on my previous experience with 360-degree movies, I did not expect the sun, the moon, or the stars. But against expectations, I got somewhat close: the amount of detail was astounding, and the way in which virtual people addressed me felt amazingly real, as if they truly saw me. I explored undiscovered worlds through space expeditions (‘2nd World’) and found peace of mind with ‘Headspace’. The story that stood out most to me was ‘Conscious Existence – a journey within’ by Marc Zimmerman, in which pretty words and dreams are whispered in your ear. I re-experienced feelings of childhood without worries, anger, or prejudice. Everything is special and wants to be discovered, and after breaking through the dust that makes you blind, you visit some of the most breath-taking scenes in the universe.

If I do have something like a soul – it was indeed grasped.

Me in VR
A screenshot of ‘Conscious Existence’ that transformed me into a child. The image does not do it justice – watch it in VR!


Part 3: the future

What makes VR technology special in comparison with regular movies (if done right) is the involvement of the user. The user – or story traveler – may become the main character, who experiences a seemingly tailor-made performance. Every perceived (virtual) action seems to be a logical effect of his or her presence. As a consequence, the brain registers VR as real. The illusion of an altered identity and situation is what makes VR a successful tool for various ends, including rehabilitation, teaching, science, and gaming.

Nevertheless, despite all the advances, there is still much to be done. After some time, VR glasses get heavy on the head, and I am curious about integrating the technology into daily life. Could VR also make our lives easier? For starters, I have already heard rumors about virtual makeup and Smart Mirrors (AR). All things considered, plenty to stay excited about. See you next year at VR Days Europe 2020.

Written by Thirza Dado

AI Synergies ’19 – Bringing Together All AI-Backgrounds

“United in Diversity”. According to Holger Hoos, professor at Leiden University, this is not only the motto of the European Union but can also serve as a guideline for the development of AI. This year’s AI Synergies conference showed that it is possible to actually turn this statement into reality. From the 6th to the 8th of November, AI researchers from the Benelux gathered in Brussels to present and share the latest developments in this ever-changing field.

A quick look at the conference program already shows that ‘diversity’ was more than just an empty promise. For the first time ever, the conference had dedicated tracks for business and academia, reflecting the idea that AI will thrive more if knowledge is shared between the two. Like in the previous year, AI Synergies also combined the ML-centered BeNeLearn with the more general AI conference BNAIC. As a result, the conference covered topics to everyone’s heart’s content: from Knowledge Representation to Deep Learning, from Robotics to Creative AI and Natural Language Processing. Read more about the conference highlights here!

Machine Learning/Deep Learning:

When talking about AI nowadays, there is no way to avoid the topic of Machine Learning; after all, many of the field’s advances have been made here. Multiple sessions during AI Synergies covered research related to Machine Learning. With dedicated tracks for “ML for Bioinformatics & Life Science”, “AI for Health and Medicine” and “Applied ML & ML for Medicine”, many talks were about deploying Machine Learning in the highly societally relevant area of healthcare. The talks about using algorithms to detect HIV, breast and skin cancer, or sepsis highlighted AI’s promising potential as a diagnostic tool.

Explainable AI:

Closely connected to Machine Learning, but still worthy of its own track, was the Explainability session during AI Synergies. Following the Peter Parker principle of “with great power comes great responsibility”, researchers have recognized that more ethical AI can only come with more understandable systems. Many talks were dedicated to making AI more transparent, demonstrated in the field of robotics or on the example of convolutional networks.

Moreover, a keynote talk was dedicated to this particular topic. In the presentation “Explainable AI: explain what to whom”, Silja Renooij warned against treating white-box models like Bayesian networks as a sufficient explanation of AI systems. Not everyone has the background knowledge needed to distinguish correlation from implied causation, so we should be careful about assuming that Bayesian networks are inherently understandable. She therefore argued that we should adjust explanations to users’ general understanding of AI and statistics.

Agents, Multi-Agent Systems and Robotics:

Next to a number of research talks about agents and multi-agent systems, AI Synergies included a keynote on this topic, given by Jeremy Pitt. In “Democracy by Design” he described how a simulated civilisation, when implemented on the principles of Ober’s basic democracy, can generate new rules to reduce the risks of tyranny or autocracy.

Focussing more generally on robotics, Ana Paiva gave another keynote. Predicting that more and more robots will be integrated into our society, she argued that we should strive for harmonious collaboration between humans and machines. Presenting a case study of robots teaming up with humans to play a card game, she showed which factors we need to consider when designing and evaluating Human-Robot Interactions.

Knowledge Representation:

One of the oldest and most traditional approaches to AI is the field of Knowledge Representation. While much more attention currently goes to more modern techniques, Marie-Christine Rousset demonstrated in her keynote talk how we can tackle modern challenges regarding data quality (e.g. data inconsistency) with classic first- or second-order logic rules.

In the track “Knowledge Representation & Hybrid”, multiple speakers elaborated on this idea, showing, for example, how improvements to Knowledge Representation languages open up new possibilities for combining old and new Data Science approaches.

Speakers and listeners gathered in the Halle Vitrée

Of course, this overview is by far not enough to cover all the diverse and engaging talks given during the conference, so make sure to check out the conference (pre-)proceedings for a detailed overview of all research topics. One last thing you might notice there is how student-friendly the AI Synergies conference is: not only Master’s but even Bachelor’s students submitted their abstracts and presented them at the conference. Hopefully, this encourages you to submit your own work to the next AI Synergies conference in Leiden in 2020!

So, to conclude: can AI be truly “United in Diversity”? Looking at the varying expertise levels of the speakers, the range of topics covered, and the combination of business and research, we are happy to say that AI Synergies fulfilled this mission.

Assistive robots in elderly care: social or anti-social?

An article by Maria Tsfasman

The percentage of elderly people in the world is growing, and we need an ever larger workforce in nursing homes. Robots are gradually replacing caretakers in different aspects of their jobs, which raises various ethical concerns about where this can lead. One concern is that robots could deprive the elderly of human love and care by reducing contact with human nurses. In this blog post, I will discuss different aspects of these concerns and possible solutions to this problem.

Concerns about assistive robots

With ageing, people tend to lose social connections, which leads to depression and a growing feeling of loneliness. Not only does social interaction affect the emotional state of senior people, it also decreases the risk of dementia (Saczynski et al., 2006). When there are not enough caregivers in retirement homes to fulfil such needs, what option are we left with? We live in an age when robotic nurses for elderly people no longer seem futuristic. Intelligent machines can serve as medical tools, soothing toys, and even health care assistants themselves (Broekens, Heerink, and Rosendal, 2009). On the one hand, this might create a beautiful world where every elderly person is carefully looked after and provided with a buddy to talk to. Robotic assistants can call 911 in case of emergency, keep track of medication schedules, assist a person in the bathroom, and even initiate and maintain a conversation. On the other hand, robots cannot substitute for human attention and love, at least given the present and near-future state of the art. By delegating elderly care to the hands of intelligent machines, we might deprive the older generation of human care and attention.

In this blog-post I will discuss two concerns about robots in elderly care:

  1. the potential reduction in the amount of human contact
  2. the potential degradation of social skills caused by the first concern

Sharkey and Sharkey (2010) categorize eldercare robots into three groups by the type of care they provide: (i) assistance; (ii) monitoring; and (iii) companionship. For now, I will disregard monitoring robots, as I focus on the social aspects of robots and assume that monitoring robots do not usually include that component.

Why be concerned?

Sharkey and Sharkey (2010) identify the main cause of concern about assistive robots as the ‘objectification of the elderly’ by robot producers and stakeholders. The problem is that assistive robots are sometimes built with the aim of reducing costs and the workload of nurses, rather than improving the quality of patients’ lives. In my opinion, a more severe problem is the very idea of trying to substitute robots for nurses, which would inevitably deprive elderly people of human contact as such. What I suggest, considering the objective of assistive robots, is delegating manual labor to robots in order to free nurses for the emotional support of elderly patients. If we do not substitute, but instead change responsibilities, a robot-supported world can be a nice one to live and age in. The second cause of concern is that if assistive robots are able to take care of the elderly, their relatives would feel less motivated to visit (Sharkey and Sharkey, 2010). In other words, if elderly patients are well taken care of and do not need any help, why visit? However, this concern seems to me less robot-specific than the others. The same problem appears whenever a family puts its elderly members into a retirement home: such a step can give the family a feeling of having delegated their responsibility to professionals, and therefore an illusion that help and attention are unnecessary.

What do numbers say?

Although the concerns seem reasonable, let’s take a look at how robot companions actually affect elderly people and their perception of life. In general, robots show a positive effect on the emotional and social state of elderly patients (Broekens, Heerink, and Rosendal, 2009). Tamura et al. (2004) showed that robot toys help patients with dementia. In Kanamori, Suzuki, and Tanaka (2002), the robot seal Paro made patients of the Suisyoen retirement home less lonely and increased their sociability. The effect might be positive not only because of the robots’ capabilities but also because the patients now had a common topic to talk about. Knowing this, we can assume that robots are more likely to improve elderly patients’ social skills than to cause their degradation. It would also be very interesting to examine the effect of a robot able to maintain a conversation; however, such robots are not yet available for use in assistive care.

Assistive robots: good or bad?

In conclusion, how can robots affect elderly people with respect to social interaction and skills? On the one hand, robots can substantially reduce the amount of human contact for the elderly, especially if all care is performed by robots instead of human caregivers. Some people are also concerned about families being less motivated to visit their elder members, as they would feel less needed. In my opinion, the main flaw in these concerns is the idea of substituting robots for nurses. Instead, we should view it as delegating physical work to robots, freeing up caretakers’ time to emotionally assist elderly people and organize various social activities for them. That would alleviate our concerns about a reduction in human interaction and the degradation of social skills in elderly people. Happily, the research conducted in the area so far gives promising results. Let’s hope that robots will continue bringing joy and happiness to the world rather than making humans anti-social!


Broekens, Joost, Marcel Heerink, and Henk Rosendal (2009). “Assistive social robots in elderly care: A review”. In: Gerontechnology 8, pp. 94–103. doi: 10.4017/gt.2009.

Kanamori, Masao, Mizue Suzuki, and Misao Tanaka (2002). “Maintenance and improvement of quality of life among elderly patients using a pet-type robot.” In: Nihon Ronen Igakkai zasshi. Japanese journal of geriatrics 39-2, pp. 214–8.

Saczynski, Jane S. et al. (2006). “The Effect of Social Engagement on Incident Dementia The Honolulu-Asia Aging Study”. In: American Journal of Epidemiology 163.5, pp. 433–440. doi: 10.1093/aje/kwj061.

Sharkey, Amanda and Noel Sharkey (2010). “Granny and the robots: Ethical issues in robot care for the elderly”. In: Ethics and Information Technology 14, pp. 27–40. doi: 10.1007/s10676-010-9234-6.

Tamura, T. et al. (2004). “Is an entertainment robot useful in the care of elderly people with severe dementia?” In: J Gerontol A Biol Sci Med Sci 59-1, pp. 83–95.

About Maria Tsfasman

Maria Tsfasman is currently pursuing a Master’s degree in Artificial Intelligence at Radboud University, Nijmegen.

WSAI19 – A Brake on the Hype Train

The point of a World AI Summit, you would think, is to show how cool AI is, how far many techniques have come, and how astonishing some of its applications are. This especially holds given that we are currently experiencing an AI revolution, in which more and more AI is integrated into our everyday life and tasks that once seemed impossible (beating humans at Go, mastering many computer vision tasks) are being solved with impressive performance.
However, it is also in times like these that we need a devil’s advocate who is not afraid to look at all this hype from a different perspective. Someone who puts a brake on the current ‘hype train’ and invites us to reflect upon the direction we are heading in. At the AI summit there was not only one, but multiple such advocates. Gary Marcus, author of the book ‘Rebooting AI: Building Artificial Intelligence We Can Trust’, argued that we should not put all our trust in the currently popular deep learning methods. As he put it, “deep learning is not a substitute for deep understanding”: we should start building models with more transparent decision processes that have a more logical way of reasoning. Similarly, the tech company Accenture started its talk with a video about the importance of Explainable and Ethical AI, highlighting that we cannot deploy AI systems if trust in them is lacking. Cassie Kozyrkov, Chief Decision Scientist at Google, warned against using Machine Learning algorithms when we do not fully understand the data they are trained on, or when the data is potentially biased.
By the end of the summit it had become clear to everyone: yes, we need to rethink AI, and we definitely need more ethical and more transparent AI systems. Systems we can understand and trust, different from the black boxes we are using at the moment. One question, however, was left largely unanswered: how do we actually get there? It is hard to answer because there is no clear answer yet; research in this area has only just started to bloom.
Nevertheless, there is no harm in at least scratching the surface of the current literature to understand what a more transparent and understandable (and therefore potentially more ethical) AI system could look like. Et voilà, here are some examples of where the research fields of fair Machine Learning and explainable AI are currently heading:

Feature Importance Values:

Say we have trained a Machine Learning model on a set of résumés together with a decision for each applicant: hired or not. The task of the model is now to predict, for the résumé of a new applicant, whether they will get the job. Given the nature of the task and the potentially sensitive data the model has been trained on (like a person's sex or nationality), it is desirable that the model can not only give an output, but also an explanation that goes along with it.
Techniques that can explain the output for a single input instance are so-called local explanation techniques. One way of providing local explanations is via feature importance values, obtained by algorithms like LIME or SHAP. As the name implies, this method assigns an importance value to each feature of an input, reflecting how important that feature was in the decision-making process. Looking at these values, it should then also be possible to assess whether a certain decision was fair. In the case of a recruitment system, we would probably aim for a model that assigns very low importance values to factors like a person's gender, but high importance values to features relating to a person's education or skill set.
Methods like LIME and SHAP can even be used on image data. The figure below shows a famous example in which the pixels that contribute most to the possible labels of an image are highlighted.

Those interested in more details about feature importance measures can take a look at the papers the methods originated from [1, 2]. Both methods are also implemented in Python tool packages and can easily be played around with1.
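To make the intuition behind feature importance values concrete, here is a small self-contained sketch of permutation importance: a feature matters to the extent that scrambling it changes the model's predictions. Note that this is a simpler cousin of LIME and SHAP, not those methods themselves, and the toy "hiring model", its weights and the dataset below are all invented for illustration — for real use, reach for the packages mentioned above.

```python
import random

# Toy "hiring model": a hand-written scorer standing in for a trained
# classifier. Feature order: years of experience, education level, gender
# code. The gender weight is deliberately zero.
def model(features):
    years_exp, education, gender = features
    score = 0.5 * years_exp + 1.0 * education + 0.0 * gender
    return 1 if score >= 3.0 else 0  # 1 = hired, 0 = rejected

# A small synthetic set of applicants.
data = [[4, 2, 0], [1, 1, 1], [3, 3, 0], [0, 2, 1], [5, 1, 1], [2, 2, 0]]

def permutation_importance(model, data, n_repeats=100, seed=0):
    """Importance of feature i = fraction of predictions that change
    when column i is shuffled across the dataset."""
    rng = random.Random(seed)
    baseline = [model(row) for row in data]
    importances = []
    for i in range(len(data[0])):
        changed = 0
        for _ in range(n_repeats):
            column = [row[i] for row in data]
            rng.shuffle(column)
            for row, value, base in zip(data, column, baseline):
                shuffled_row = list(row)
                shuffled_row[i] = value
                if model(shuffled_row) != base:
                    changed += 1
        importances.append(changed / (n_repeats * len(data)))
    return importances

importances = permutation_importance(model, data)
print(importances)  # the gender feature (weight 0) comes out at exactly 0.0
```

Because the toy model ignores gender entirely, shuffling that column never flips a decision, which is exactly the pattern we would hope to see from a fair recruitment model.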


Counterfactuals:

When trying to make AI systems explain their decision processes, it can be beneficial to look at the way humans explain decisions. An explainable-AI method that aligns well with the way humans reason about their behaviour is the use of so-called counterfactuals. Counterfactuals are another type of local explanation; they show which input features need to be changed in order for the model's output to change as well. For the recruitment system, a counterfactual might show, for example, that an applicant would not have been hired with one year less of work experience. Again, the fairness of the model can be inspected by reasoning about the nature of the counterfactuals: a model is hardly fair if a counterfactual shows that a person would not have been hired had their nationality been Polish rather than Dutch.
To gain more insight into counterfactuals, take a look at e.g. [3]. Again, multiple methods have been implemented in Python and are definitely worth checking out2!
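To illustrate the idea, here is a minimal brute-force counterfactual search over a toy hiring classifier. The classifier, its features and its threshold are invented for illustration, and real counterfactual methods (such as those in the alibi package mentioned below) are far more sophisticated, but the core question is the same: what is the smallest change to the input that flips the decision?

```python
from itertools import product

# Toy hiring classifier (hypothetical, for illustration only).
# Features: years of experience and education level.
def hired(years_exp, education):
    return years_exp + 2 * education >= 6

def find_counterfactuals(applicant, classifier):
    """Brute-force search: which small changes to the input
    flip the classifier's decision? Cheapest changes first."""
    original = classifier(*applicant)
    counterfactuals = []
    # Explore small perturbations of each feature.
    for delta_exp, delta_edu in product(range(-2, 3), repeat=2):
        if (delta_exp, delta_edu) == (0, 0):
            continue
        candidate = (applicant[0] + delta_exp, applicant[1] + delta_edu)
        if classifier(*candidate) != original:
            cost = abs(delta_exp) + abs(delta_edu)  # how big the change is
            counterfactuals.append((cost, candidate))
    # The nearest counterfactuals are the most useful explanations.
    return [c for _, c in sorted(counterfactuals)]

applicant = (3, 1)  # rejected: 3 + 2*1 = 5 < 6
nearest = find_counterfactuals(applicant, hired)[0]
print(nearest)  # (3, 2): one more education level would get them hired
```

Reading off the nearest counterfactual tells the applicant what would have changed the outcome — and, as in the nationality example above, inspecting which features show up in counterfactuals is one way to audit a model's fairness.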

Illusion of Control

So, can feature importance values or counterfactuals solve the problems described during the many World Summit talks? Possibly, but we need to remain cautious. John Danaher, author of ‘Automation and Utopia: Human Flourishing in a World Without Work’, warned during the Summit about the illusion of control. If we blindly trust the explanations provided for ML algorithms, we are not doing any better than we are now: after all, it is again machines doing the work. Do we know for sure that these techniques accurately explain the workings of the model? Are they enough to let us break out of the black box completely? Or are the explanation algorithms just a new black box themselves? And who are the people who inspect the explanations and judge models' fairness based on them?
Again, these questions show that we need to keep the discussion about a new route for AI going. Only if we manage to solve these issues can we lean back a bit and enjoy the hype AI brings.


[1] Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “Why should I trust you?: Explaining the predictions of any classifier.” In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144. ACM, 2016.

[2] Lundberg, Scott M., and Su-In Lee. “A unified approach to interpreting model predictions.” In Advances in Neural Information Processing Systems, pp. 4765–4774. 2017.

[3] Byrne, Ruth M. J. “Counterfactuals in explainable artificial intelligence (XAI): Evidence from human reasoning.” In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI 2019). 2019.


1 For the LIME and SHAP packages take a look at: https://github.com/marcotcr/lime and https://github.com/slundberg/shap
2 For a Python tool-package check out: https://github.com/SeldonIO/alibi

“The AI brains are coming”

And this, of course, included Turning Magazine.

Taets Park Zaandam, 10th of October, shortly before 9 o'clock. A pitch-black event hall; everyone goes silent when the screen at the front starts a countdown from 10. With a grand introduction video, full of movie references to Artificial Intelligence and allusions to responsible AI, the World Summit AI begins. It feels a bit like one might imagine an Apple Keynote: that thin line between hype and cult.

The World Summit AI calls itself ‘The world’s leading AI summit for the entire AI ecosystem, Enterprise, Big Tech, Startups, Investors, Science.’ For two days in October (9th–10th), the world’s AI brains gathered in Zaandam to discuss, network, learn and connect. The selection of speakers, workshops and sponsors was as broad as the field of AI itself. With the mission “to tackle head-on the most burning AI issues and set the global AI agenda”, the summit offered a number of different tracks, each focusing on individual aspects of AI.

Despite some warnings and critical comments during a number of talks, the future of AI seems bright (and not clouded like the sky over Zaandam).


Turning’s highlight track was for sure the Deep Dive Tech Talks, where speakers from a wide range of fields gave deeper insight into their work. For some of our reporters, childhood dreams came a bit closer during a talk that combined AI and space travel. But topical subjects such as the environment and education, and how AI can help us save our planet or improve the way we learn, were discussed as well. You will get more details about our favourite talks in the next couple of weeks. But as you can see from this selection already, there was a talk for every taste.

Next to speakers representing global players such as Amazon, Google and Facebook, other leading AI brains like Gary Marcus and Stuart Russell (you might know him as the author of ‘Artificial Intelligence: A Modern Approach’) inspired their audience and caused queues at their book signings. Still in need of a good AI book as a nice change from your mandatory university readings? Check out ‘Rebooting AI: Building Artificial Intelligence We Can Trust’ by Gary Marcus or ‘Human Compatible: Artificial Intelligence and the Problem of Control’ by Stuart Russell!

Stuart Russell during his interview about his latest book.

But talks and workshops were not all the summit had to offer. During those two days, several companies used the opportunity to present the way they incorporate AI in their business. Did you, for example, know that Huawei has started a deep dive into Machine Learning? Or would you expect a company like Wolfram Alpha to be present at such an event? We were surprised as well! And we were especially happy to see Machine2Learn: believe it or not, the summit and Turning have a sponsor in common. Next to the global players, the summit also allowed a number of startups to present their ideas, ranging from AI applications supporting your company’s finances to an app that helps you sleep better, or yet another example of AI in education.

World Summit AI 2020

Curious what happened during the World Summit AI? Over the next couple of weeks, Turning will provide you with highlights of the summit: the people who inspired us most, the topics that touched us and the talks that made us curious about what else is out there.

Or maybe our small preview sparked your interest and you want to experience the Summit yourself next year? The ticket presale has already started! Mark the 7th and 8th of October 2020 in your calendar and get your ticket at https://worldsummit.ai/tickets/. Student tickets cost 199 euros each. This is a lot of money for a student, we know. But in exchange, you get access to a great number of talks and events as well as a unique opportunity to network. And how often does it happen that all the AI brains gather and you can meet them without leaving the Netherlands? Right, once a year, during the Summit!

AI and arts: a painter and composer?

Here are the last two article previews of the first edition:

Automatically composing music is not as new an idea as many might think: automatically generated songs date back to ancient Greece. Jeremy Börker, Bachelor’s student at Radboud University, writes about the persistent challenges of automatic music generation.

Upgrading a simple black-and-white sketch to a fully-fledged coloured portrait. Find out how it’s possible in Yagmur Güçlütürk’s article. Yagmur is currently an assistant professor at Radboud University.

The first edition is near! Read more about 2 upcoming and very interesting articles

To prove that science and arts do go together, Ngoc Đoàn, Andrius Penkauskas and Ecaterina Grigoriev paired up with a theater group. The AI-students from Tilburg University want to model group dynamics using data about theater performances.

Using ARtInfo in the Valkhof museum Nijmegen. ©Rein Wieringa

The apps RecolourAR and ARtinfo are guaranteed to make any museum trip more fun and interactive. Loes van Bemmel, Master student Artificial Intelligence at Radboud University, will tell you all about them.


Get excited with a sneak preview on 2 amazing articles!

The first edition of Turning Magazine comes out on September 1st! Do you want to know how AI can be utilized in the field of fashion? Then the following 2 articles will catch your interest:

A mirror that can give you fashion advice? Find out more about it in the interview with Alexey Chaplygin, data scientist at PVH Corp.

Read about technology-inspired fashion in the article ‘Cyber Couture’. Anneke Smelik, researcher at Radboud University and author of “Ik cyborg. De mens-machine in populaire cultuur.” (“I, Cyborg. The human-machine in popular culture.”), tells all about the futuristic clothing designs of Iris van Herpen.