Technology

Stephen Hawking inaugurates artificial intelligence research center at Cambridge

Professor Stephen Hawking gives the inaugural speech for Cambridge University’s newly launched Centre for the Future of Intelligence.

The Leverhulme Centre for the Future of Intelligence (LCFI) kicked off yesterday at Cambridge University with opening remarks from Professor Stephen Hawking and Artificial Intelligence (AI) pioneer Professor Maggie Boden.

The LCFI’s mission is “to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal – to work together to ensure that we humans make the best of the opportunities of artificial intelligence, as it develops over coming decades.”

“Success in creating AI could be the biggest event in the history of our civilization,” remarked Hawking, adding, “Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialization.”

Professor Hawking, who has warned that AI will either be the best or worst thing for humanity, will also be making the opening remarks next month at Microsoft’s “Future Decoded” conference in London.

New projects on the nature and impact of AI being initiated and tackled by the LCFI include:

  • Science, Value, and the Future of Intelligence (examines conceptions of value in the science of [artificial] intelligence)
  • Policy and Responsible Innovation (focuses on the collaborations between technologists, policymakers, and other stakeholders needed for the responsible development of AI)
  • The Value Alignment Problem (seeks to design methods for preventing AI systems from inadvertently acting in ways inimical to human values)
  • Kinds of Intelligence (draws on current work in psychology, neuroscience, philosophy, computer science, and cognitive robotics in order to further develop and critically assess notions of general intelligence used in AI)
  • Autonomous Weapons – Prospects for Regulation (aims to bring an interdisciplinary approach to the question of regulating autonomous weapons systems)
  • AI: Agents and Persons (explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person)
  • Politics and Policy (examines the challenges that the future of AI poses for democratic politics, including questions of political agency, accountability and representation)
  • Trust and Transparency (developing processes to ensure that AI systems are transparent, reliable and trustworthy)
  • Horizon Scanning and Road-Mapping (aims to map the landscape of potential AI breakthroughs and their social consequences)

With its inauguration, the LCFI becomes the latest organization to tackle the global challenge of humans and machines coexisting, and to establish best practices for AI.

Research into the future of AI is exploding! OpenAI, the Partnership on AI, and the White House’s new report have all independently begun to explore the potential of AI with very similar aims.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

Of the many organizations now studying AI, most are looking not only to research the science itself, but also towards philosophy, politics, and policymaking.

One of the key distinctions to watch in the future is the question of open-source versus privately-held research. Whose research will policymakers turn to? Whose projects will governments wish to implement? And who will emerge as the ethical and moral authority among private corporations, government think tanks, and academic institutions?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
