The Hollywood movies Blade Runner, Terminator, and I, Robot all feature robots or replicants with human characteristics.
But how true to life are they? Can robots ever truly mimic humans?
A new Cambridgeshire festival will explore artificial intelligence (AI) and its ever-greater impact on how we communicate and interact.
Could artificial intelligence help us reach a more equitable and fair society?
Should chatbots and AI be built to care and have empathy?
If such machines are built, should we consider their moral and legal status?
Or are we giving up too much control to machines that are too stupid to handle the tasks they are already charged with?
These questions and more are set to be debated during a series of fascinating free online events that can be viewed by anyone anywhere in the world during the inaugural Cambridge Festival, which runs from March 26 to April 4.
The Cambridge Festival brings together the hugely popular Cambridge Science Festival and the Cambridge Festival of Ideas to host an extensive programme of over 350 events that tackle many critical global challenges affecting us all.
Coordinated by the University of Cambridge, the new festival features hundreds of prominent figures and experts in the world of science, current affairs and the arts, and has four key themes: health, environment, society and explore.
From the abused replicants of Ridley Scott's Blade Runner, starring Harrison Ford, to the neglected child-robot David in Steven Spielberg's sci-fi drama A.I. Artificial Intelligence, we seem to have little difficulty imagining artificial beings feeling pain and distress.
The kind of advanced AI we see in fiction and in Hollywood movies is, of course, far removed from the capabilities of real-world machines.
However, as the capacities of AI continue to improve, interest has grown concerning the question of whether and when artificial beings may reasonably come to possess or demand some form of moral status.
In Suffer the little robots? The moral and legal status of artificial beings, Dr Henry Shevlin, Research Fellow at the Leverhulme Centre for the Future of Intelligence, suggests that there is good reason for lawyers, politicians, philosophers, and scientists to start grappling with how we could identify suffering in beings radically different from ourselves, and considers how we should respond to the moral concerns of sentient AI as a society.
Dr Shevlin again explores the frontiers of artificial and biological intelligence during another talk, From bees to bots: exploring the space of possible minds.
He asserts that, despite recent advances, the most advanced artificial systems still fall short of many of the capabilities of humans, and even of simpler animals such as honeybees.
Dr Shevlin reviews some of the most striking differences between artificial and biological systems and suggests how these might help inform a broader scientific conversation about the nature and variety of minds, intelligence, and consciousness.
Science fiction has tended to focus on nightmarish scenarios where machines acquire superhuman abilities and wrest power from unsuspecting human beings.
These scenarios distract attention from the real problem, which is not that humans will unwittingly cede control to runaway superintelligence, but that they are already surrendering control to machines that are too stupid to handle the tasks they are charged with.
In A Citizen's Guide to Artificial Intelligence, Dr John Zerilli, Research Fellow at the Leverhulme Centre for the Future of Intelligence, sets out what his book hopes to achieve: an account of the issues that AI and machine learning present to the average citizen, including privacy and data protection, transparency, liability, control, the future of work, and the regulation of AI.
He focuses on one chapter in particular, discussing the nature of human control over AI systems.
Data science and AI have the potential to transform the discovery and development of medicines.
However, there is a lot of hype surrounding AI.
AI: Hype vs reality covers what has been achieved to date and what is realistic to expect from data science and AI in the next five to 10 years.
Chaired by Gareth Mitchell of BBC Digital Planet, speakers include Dr James Weatherall, vice president of data science & artificial intelligence at AstraZeneca R&D, and Professor Mihaela van der Schaar, director of the Cambridge Centre for AI in Medicine.
Over the last few years, smart speakers, virtual personal assistants and other forms of ‘conversational AIs’ have become increasingly popular.
In the context of health and social care, attention has begun to focus on whether an AI system can perform caring duties or offer companionship, particularly for the lonely in care homes.
Chatbots designed to support wellbeing and perform therapeutic functions are already available and widely used.
But, for all their growing skills they remain machines driven by algorithms and data.
In Empathetic Machines: can chatbots be built to care?, Dr Shauna Concannon, of the Giving Voice to Digital Democracies research project at the Centre for Research in the Arts, Social Sciences and Humanities (CRASSH), examines what it means to be empathetic.
Dr Concannon will also explore the ethical implications that arise when positioning AI systems in roles that require them to communicate with empathy.
AI is becoming an increasingly pervasive and invasive part of our everyday lives, from our use of cloud computing, smart phones, and streaming services to news consumption and even household appliances.
Virtually all these technologies rely on data, and that data often carries biases that reflect our own. If left unexamined, these biases are not only reproduced but magnified by technology, with potentially severe effects.
In Artificial Intelligence and Unfair Bias: Addressing Gendered and Racialised Inequalities in AI, Dr Kerry Mackereth and Dr Eleanor Drage, Research Fellows from the Centre for Gender Studies and the Leverhulme Centre for the Future of Intelligence, break down some of the key ethical issues surrounding AI, gender, race, and bias.
They explain how and why biased AI exists, how this results in real-world harms, and offer steps forward with regards to policies that the AI sector could implement to address bias in AI.
Since the onset of the worldwide pandemic, it seems that our lives have become more intertwined with social media than ever before.
In a locked-down world where physical proximity may not even be possible, social media often offers a lifeline to the rest of the world.
How much of an effect do our daily social media habits have on our everyday lives?
The apps that we check throughout the day influence how and where we receive news and information, who we spend time with, what we buy, and ultimately even how we think and feel.
But what if we have more control over these aspects than we might assume?
During Is Social Media Changing Your Life?, Tyler Shores of the Jesus College Intellectual Forum, psychologist Dr Amy Orben, and sociologist Dr Mark Carrigan discuss what role everyday technology should play in our lives, our sense of self, and how we relate to the rest of the world.
In How digital interventions influence our health-related behaviours?, behavioural scientist Dr Katerina Kassavou examines the evidence that digital health interventions, such as text messaging, smartphone apps, wearables, and websites, are effective at changing health-related behaviours.
View the full programme via www.festival.cam.ac.uk from February 22.
Many events require pre-booking, so check the events listings on the festival website.