Technology

Stephen Hawking inaugurates artificial intelligence research center at Cambridge

Professor Stephen Hawking gives the inaugural speech for Cambridge University’s newly-launched Centre for the Future of Intelligence.

The Leverhulme Centre for the Future of Intelligence (LCFI) kicked off yesterday at Cambridge University with opening remarks from professor Stephen Hawking and Artificial Intelligence (AI) pioneer professor Maggie Boden.

The LCFI’s mission is “to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal – to work together to ensure that we humans make the best of the opportunities of artificial intelligence, as it develops over coming decades.”

“Success in creating AI could be the biggest event in the history of our civilization,” remarked Hawking, adding, “Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialization.”

Professor Hawking, who has warned that AI will either be the best or worst thing for humanity, will also be making the opening remarks next month at Microsoft’s “Future Decoded” conference in London.

New projects on the nature and impact of AI being initiated and tackled by the LCFI include:

  • Science, Value, and the Future of Intelligence (examines conceptions of value in the science of [artificial] intelligence)
  • Policy and Responsible Innovation (focuses on the collaborations between technologists, policymakers, and other stakeholders needed for the responsible development of AI)
  • The Value Alignment Problem (seeks to design methods for preventing AI systems from inadvertently acting in ways inimical to human values)
  • Kinds of Intelligence (draws on current work in psychology, neuroscience, philosophy, computer science, and cognitive robotics in order to further develop and critically assess notions of general intelligence used in AI)
  • Autonomous Weapons – Prospects for Regulation (aims to bring an interdisciplinary approach to the question of regulating autonomous weapons systems)
  • AI: Agents and Persons (explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person)
  • Politics and Policy (examines the challenges that the future of AI poses for democratic politics, including questions of political agency, accountability and representation)
  • Trust and Transparency (developing processes to ensure that AI systems are transparent, reliable and trustworthy)
  • Horizon Scanning and Road-Mapping (aims to map the landscape of potential AI breakthroughs and their social consequences)

With its inauguration, the LCFI becomes the latest organization to tackle the global challenge of humans versus machines and to establish best practices for AI.

Research into the future of AI is expanding rapidly: OpenAI, the Partnership on AI, and the White House’s recent report have all independently begun to explore the potential of AI with very similar aims.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

Of the many organizations looking to research the science of AI, most are also turning toward philosophy, politics, and policymaking.

One of the key distinctions to watch in the future is open-source versus privately held research. Whose research will policymakers turn to? Whose projects will governments wish to implement? And who will become the ethical and moral authority among private corporations, government think tanks, and academic institutions?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
