
Stephen Hawking inaugurates artificial intelligence research center at Cambridge

Professor Stephen Hawking gives the inaugural speech for Cambridge University’s newly launched Centre for the Future of Intelligence.

The Leverhulme Centre for the Future of Intelligence (LCFI) launched yesterday at Cambridge University with opening remarks from Professor Stephen Hawking and Artificial Intelligence (AI) pioneer Professor Maggie Boden.

The LCFI’s mission is “to build a new interdisciplinary community of researchers, with strong links to technologists and the policy world, and a clear practical goal – to work together to ensure that we humans make the best of the opportunities of artificial intelligence, as it develops over coming decades.”

“Success in creating AI could be the biggest event in the history of our civilization,” remarked Hawking, adding, “Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one: industrialization.”

Professor Hawking, who has warned that AI will be either the best or the worst thing for humanity, will also deliver the opening remarks next month at Microsoft’s “Future Decoded” conference in London.

New projects on the nature and impact of AI being undertaken by the LCFI include:

  • Science, Value, and the Future of Intelligence (examines conceptions of value in the science of [artificial] intelligence)
  • Policy and Responsible Innovation (focuses on the collaborations between technologists, policymakers, and other stakeholders needed for the responsible development of AI)
  • The Value Alignment Problem (seeks to design methods for preventing AI systems from inadvertently acting in ways inimical to human values)
  • Kinds of Intelligence (draws on current work in psychology, neuroscience, philosophy, computer science, and cognitive robotics in order to further develop and critically assess notions of general intelligence used in AI)
  • Autonomous Weapons – Prospects for Regulation (aims to bring an interdisciplinary approach to the question of regulating autonomous weapons systems)
  • AI: Agents and Persons (explores the nature and future of AI agency and personhood, and its impact on our human sense of what it means to be a person)
  • Politics and Policy (examines the challenges that the future of AI poses for democratic politics, including questions of political agency, accountability and representation)
  • Trust and Transparency (develops processes to ensure that AI systems are transparent, reliable, and trustworthy)
  • Horizon Scanning and Road-Mapping (aims to map the landscape of potential AI breakthroughs and their social consequences)

With its inauguration, the LCFI becomes the latest organization to tackle the global challenge of humans vs machines and to establish best practices for AI.

Research into the future of AI is booming. OpenAI, the Partnership on AI, and the White House, with its recent report on the future of AI, have all independently begun to explore the technology’s potential with very similar aims.

Read More: Partnership on AI vs OpenAI: Consolidation of Power vs Open Source

These organizations are looking not only to research the science of AI; most are also turning their attention to philosophy, politics, and policymaking.

One of the key distinctions to watch in the future is the question of open-source versus privately-held research. Whose research will policymakers turn to? Whose projects will governments wish to implement? And who will emerge as the ethical and moral authority among private corporations, government think tanks, and academic institutions?

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
