26 letters.
26 experts.
26 lessons in moving AI faster and breaking fewer things.

AI is one of the most important inventions in history.

It’s 2022.


Enterprise adoption of AI is happening at the same time that corporate labs are releasing v1.0 of Artificial General Intelligence (AGI).

The world will soon be full of generalist AI agents with robust skills, able to transfer what they learn from one task to another and to perform well in new circumstances with little or no retraining. We are very close to synthetic thought, and to an accompanying pace of technical and social change measured in seconds, not years.


And that says nothing of the overwhelming tide of change inbound as major companies worldwide adopt AI.


As best we can tell, there are two paths forward for how society and AI interface.

The first path is toward unprecedented languishing.

Today, AI technologies already exacerbate existing structural inequality, including along lines of sex and race.


Systems that control critical decisions in our lives — who lives and dies, who is granted economic inclusion — behave capriciously.

Ethnic cleansings have been exacerbated by algorithmic optimization.


We may see another jobless recovery in 2024 as jobs automate in 2022–2023 in response to market pressure.


There’s an oft-hidden, massive ecological impact of raw materials and energy that goes into building and training AI systems.

Critical infrastructure is increasingly unstable, unusable, and vulnerable to disruption and attack.


We’re losing fundamental rights to privacy, the sanctity of our personal data, and autonomy itself.

All of these harms are real, today.


They hurt the companies building the AI systems that create these outcomes.


They hurt all of us.

The second path is toward unprecedented flourishing.

In Path #2, we align the incentives, ownership, and returns of AI towards dignified and sustainable global development.


AI technologies are designed, tested, and deployed with fairness as an accountable success criterion, not an afterthought.

Critical systems can provide clear, acceptable explanations for their decisions and predictions.


Our algorithms, small and large, are beneficent and well-aligned to our notions of human wants and human rights.


The benefits of labor displacement are weighed critically against their tangible harms, and care is taken to protect the importance and dignity of good work.


AI development is harmonious with our actions reversing the climate catastrophe.


The very infrastructure of our digital world is secure, stable, predictable, and robust to failure or attack.


Our fundamental rights to privacy, the sanctity of our personal data, and human autonomy itself are not only protected, but enhanced by the presence of AI in our lives.

The ABCs of Responsible AI is our crash course on Path #2.

Learn more about how Mission Control accelerates AI success:

The Trust Layer in your AI Stack.

Mission Control is a product from The AI Responsibility Lab Public Benefit Corporation.

© AIRL 2023-2042.