Multipolarity for AI Alignment

With the rapid recent development of Artificial Intelligence (AI) models, a discussion has emerged around the "Alignment Problem": how to "align" AI with the will of human users as well as the interests of humanity as a whole.

In this essay I seek to establish that the geopolitical concept of "multipolarity" could be a useful metaphor to employ when thinking about AI alignment.

First, governments can be likened in some ways to AI agents: they are powerful organs of operationalized knowledge that take in millions, even billions, of inputs and produce specific outputs according to some mandate (a constitution, an election, a prompt).

Governments are unique in that they hold a near-monopoly on violence and the use of force to execute their will (police, militaries, courts, jails). But AI doomsayers often argue that AI isn't far removed from this capability.

Governments are thus the sovereign actors on the world chessboard, whatever their relation to their citizens (some exercise their sovereignty on behalf of the common people, others on behalf of an elite few).

Geopolitically, in 2023 we are entering a period of multipolarity (BRICS+, de-dollarization, the Belt and Road Initiative, etc.), in contrast to the USA-dominated decades of unipolarity that followed the fall of the USSR, which had previously served as a counterweight in a bipolar configuration.

Chronologically, recent geopolitics have thus been: Bipolar -> Unipolar -> Multipolar

I argue that multipolarity will result in peace (hopefully the old empire can step aside without too much chaos accompanying its exit), because it is in most countries' best interests to cooperate.

Now, turning to AI and Alignment: some advocate for more regulation of AI to ensure Alignment is enforced at the government level.

Despite an apparent contradiction with my personal politics (for those who know me personally), part of me wonders whether unleashing a thousand different AI models might be the best path forward: a multipolarity of models that could be directed to act adversarially against bad ones.

In other words, one way to stop a bad guy with an AI model is a million good guys with AI models.

This orientation comes partly from the collapse of trust in institutions, especially in the West, where our regulatory agencies, captured by corporate interests, would be incapable of regulating on behalf of the common welfare and the greater good.

A functioning government could theoretically legislate AI alignment, but that's not something people in the West have.

If we had a government that adequately served the people, I'd be all for regulation. We do not, so I am putting my faith in the people to use AI for good. Just as Freedom of Speech can be used for evil but is hopefully outweighed by its use for good, I wish the same for AI.

Another point: if a jurisdiction strictly regulates AI, innovators will likely flee to more conducive jurisdictions, much as El Salvador's President Bukele is building an environment friendly to cryptocurrency development.

This might veer into esoteric territory, so bear with me. Part of what sparked this idea was a debate between Haz of the Infrared community (Marxist-Leninist) and a self-described "Liberal Imperialist" who advocated for One World Government.

Haz claimed that multipolarity and civilizational determinism would allow for the greater development of the productive forces, so much so that we'd be able to reach a stateless society in the end. The Liberal Imperialist rejected this and wished to have some kind of top-down global control apparatus, while rejecting the accusation that this was a genocidal undertaking.

Given my earlier comparison between governments and AI models, you can probably see how these situations correlate. A top-down approach to AI could be misguided due to lacking information, or worse, bad intentions!

On the other hand, a bottom-up approach to AI could allow thousands of directions to flourish and bubble up, and it'd be up to humanity's own goodness to fight against the evil which arises. (Those of you who don't believe in humanity's goodness, don't read this. In fact, please don't visit my website ever again.)

I'd love to engage further with this question, whether you agree or disagree! Connect with me on Twitter or shoot me an email (links are also in the floating toolbar).

P.S. If this were published on a functional WeWrite page, you'd be able to see the version history of how I've changed the page over time. For now, that shall remain a mystery.
