AI Safety


The challenge

Artificial intelligence is becoming part of our daily lives, as complex algorithms deliver our search results, allocate police resources, and give us advice. While today's task-specific AI systems do pose real concerns (especially when the data they are trained on is biased), the biggest rewards and dangers will come from smarter-than-human general AI (AGI). Also known as 'Superintelligence', such an AI is:

  • Inevitable: Barring a globally enforced permanent ban (which is unrealistic), there is nothing to stop us from building ever more capable AI until it surpasses us at every task. Surveys of AI scientists put the expected arrival date anywhere between 2025 and 2100.
  • Necessary: Beyond the bewildering complexity of modern society, we face a number of existential risks (asteroids, supervolcanoes, bioengineered pandemics, climate change, and combinations thereof) that will be almost impossible for our limited human brains to overcome without the help of AGI.
  • Existentially dangerous: AGI would be a highly complex engineered system, and as such would be prone to programming errors and poorly thought-through instructions. The danger is that, given the capability of such a system (it could outsmart every human and commandeer global resources in pursuit of whatever goal it is given), a mistake could mean extinction. The Future of Life Institute has an excellent summary.

The challenge is twofold:

  1. Technical – how do we safely build AGI?
  2. Governance – who builds it? How is it used? How are its many benefits to be shared?

What is being done

The first serious discussions on AI safety began around 2014, and the focus until now has been on:

  • Technical research: Organisations like OpenAI are working to design safety algorithms and strategies that can be used by AI developers.
  • Policy development: Top think tanks such as the University of Oxford’s Future of Humanity Institute are developing policies to govern AI.
  • Public awareness: Big names like Elon Musk and Stephen Hawking are making regular media appearances to publicise the issue, although more needs to be done.
  • Global coordination: The Partnership on AI, an industry-led initiative created in 2016, brings together companies and NGOs to collaborate on AI safety. However, there have been no public talks between nations or defence establishments.

What is lacking: local and international political action.

What we do

Our goal is to empower people with the knowledge and democratic tools they need to support effective political action. As such, we organise events where people can learn about the issues, and will soon be launching petitions and other advocacy campaigns to create political momentum.