Technology

Stephen Hawking inaugurates artificial intelligence research center at Cambridge

Professor Stephen Hawking gives the inaugural speech for Cambridge University’s newly launched Centre for the Future of Intelligence.

The Leverhulme Centre for the Future of Intelligence (LCFI) kicked off yesterday at Cambridge University with opening remarks from professor Stephen Hawking and Artificial Intelligence (AI) pioneer professor Maggie Boden.

The LCFI’s mission is “to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal – to work together to ensure that we humans make the best of the opportunities of artificial intelligence, as it develops over coming decades.”

“Success in creating AI could be the biggest event in the history of our civilization,” remarked Hawking, adding, “Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one—industrialization.”

Professor Hawking, who has warned that AI will either be the best or worst thing for humanity, will also be making the opening remarks next month at Microsoft’s “Future Decoded” conference in London.

New projects on the nature and impact of AI being initiated and tackled by the LCFI include:

  • Science, Value, and the Future of Intelligence (examines conceptions of value in the science of [artificial] intelligence)
  • Policy and Responsible Innovation (focuses on the collaborations between technologists, policymakers, and other stakeholders needed for the responsible development of AI)
  • The Value Alignment Problem (seeks to design methods for preventing AI systems from inadvertently acting in ways inimical to human values)
  • Kinds of Intelligence (draws on current work in psychology, neuroscience, philosophy, computer science, and cognitive robotics in order to further develop and critically assess notions of general intelligence used in AI)
  • Autonomous Weapons – Prospects for Regulation (aims to bring an interdisciplinary approach to the question of regulating autonomous weapons systems)
  • AI: Agents and Persons (explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person)
  • Politics and Policy (examines the challenges that the future of AI poses for democratic politics, including questions of political agency, accountability and representation)
  • Trust and Transparency (developing processes to ensure that AI systems are transparent, reliable and trustworthy)
  • Horizon Scanning and Road-Mapping (aims to map the landscape of potential AI breakthroughs and their social consequences)

With its inauguration, the LCFI becomes the latest organization to tackle the global challenge of the relationship between humans and machines, and to define best practices for AI.

Research into the future of AI is exploding! OpenAI, the Partnership on AI, and the White House’s new report have all independently begun to explore the potential of AI with very similar aims.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

Many of these organizations are looking not only to research the science of AI, but also to engage with philosophy, politics, and policy making.

One of the key variations to look for in the future is the question of Open Source vs Privately-held research. Whose research will be the one policy makers go to? Whose projects will governments wish to implement? And who will become the ethical and moral authority between private corporations, government think tanks, and academic institutions?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
