Stag Hunt Examples in International Relations

To be sustained, a regime of racial oppression requires cooperation. Actor A's preference order: DC > DD > CC > CD; Actor B's preference order: CD > DD > CC > DC. In international relations, countries are the participants in the stag hunt. If either hunts a stag alone, the chance of success is minimal. Before getting to the theory, I will briefly examine the literature on military technology/arms racing and cooperation.[7] Aumann concluded that in this game "agreement has no effect, one way or the other."[21] Moreover, racist algorithms[22] and lethal autonomous weapons systems[23] force us to grapple with difficult ethical questions as we apply AI to more realms of society. The United States is in the hunt, too. The Stag Hunt is probably more useful, since games in life have many equilibria, and it's a question of how you can get to the good ones. In testing the game's effectiveness, I found that students who played the game scored higher on the exam than students who did not play. Another example is the hunting practices of orcas (known as carousel feeding). [1] Kelly Song, "Jack Ma: Artificial intelligence could set off WWIII, but humans will win," CNBC, June 21, 2017, https://www.cnbc.com/2017/06/21/jack-ma-artificial-intelligence-could-set-off-a-third-world-war-but-humans-will-win.html. In this article, we employ a class of symmetric, ordinal 2 × 2 games, including the frequently studied Prisoner's Dilemma, Chicken, and Stag Hunt, to model the stability of the social contract in the face of catastrophic changes in social relations. Examples of states include the United States, Germany, China, India, Bolivia, South Africa, Brazil, Saudi Arabia, and Vietnam. [49] For example, see Glenn H. Snyder, "'Prisoner's Dilemma' and 'Chicken' Models in International Politics," International Studies Quarterly 15, 1 (1971): 66–103, and Downs et al., "Arms Races and Cooperation." [50] Snyder, "'Prisoner's Dilemma' and 'Chicken' Models in International Politics." [51] Snyder, "'Prisoner's Dilemma' and 'Chicken' Models in International Politics." Finally, in the game of Chicken, two sides race toward collision in the hope that the other swerves from the path first. The most important role of the U.S. presence is to keep the Afghan state afloat, and while the negotiations may turn out to be a positive development, U.S. troops must remain in the near term to ensure the possibility of a credible deal. Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090. Link: http://www.socsci.uci.edu/~bskyrms/bio/papers/StagHunt.pdf. Interestingly enough, Stag Hunt theory can be used to describe social contracts within society, with the contract being the agreement to hunt the stag, that is, to pursue mutual benefit. [4] Nick Bostrom, Superintelligence: Paths, Dangers, Strategies (Oxford University Press, 2014). The authors of one study[56] look at three different types of strategies governments can take to reduce the level of arms competition with a rival: (1) a unilateral strategy, where an actor's individual actions impact race dynamics (for example, by focusing on shifting to defensive weapons[57]); (2) a tacit bargaining strategy, which ties defensive expenditures to those of a rival; and (3) a negotiation strategy composed of formal arms talks. Table 4. Payoff matrix for simulated Stag Hunt. If a hunter leaps out and kills the hare, he will eat, but the trap laid for the stag will be wasted and the other hunters will starve.
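The preference orderings given earlier (Actor A: DC > DD > CC > CD; Actor B: CD > DD > CC > DC) can be checked mechanically by converting each ranking into ordinal payoffs and searching for pure-strategy Nash equilibria. A minimal sketch in Python; the rank values 4 through 1 are illustrative stand-ins for the ordinal preferences, not payoffs from the text:

```python
# Ordinal payoffs: rank 4 = most preferred outcome, 1 = least preferred.
# Outcomes are (A's move, B's move) with C = cooperate, D = defect.
# Actor A: DC > DD > CC > CD ; Actor B: CD > DD > CC > DC.
payoff_a = {("D", "C"): 4, ("D", "D"): 3, ("C", "C"): 2, ("C", "D"): 1}
payoff_b = {("C", "D"): 4, ("D", "D"): 3, ("C", "C"): 2, ("D", "C"): 1}

def pure_nash_equilibria(pa, pb):
    """Return outcomes from which neither player gains by deviating alone."""
    moves = ("C", "D")
    eqs = []
    for a in moves:
        for b in moves:
            a_best = all(pa[(a, b)] >= pa[(a2, b)] for a2 in moves)
            b_best = all(pb[(a, b)] >= pb[(a, b2)] for b2 in moves)
            if a_best and b_best:
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria(payoff_a, payoff_b))  # [('D', 'D')]
```

With these orderings, defection is dominant for both actors and mutual defection is the unique equilibrium; since both actors also rank DD above CC, this is the signature of a Deadlock-type game rather than a Stag Hunt.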
Stag Hunt is a game in which the players must cooperate in order to hunt larger game; with higher participation, they are able to get a better dinner. At key moments, the cooperation among Afghan politicians has been maintained with a persuasive nudge from U.S. diplomats. "This is the third technology revolution." "Artificial intelligence is the future, not only for Russia, but for all humankind." For example, it is unlikely that even the actors themselves will be able to effectively quantify their perception of capacity, riskiness, magnitude of risk, or magnitude of benefits. As an advocate of structural realism, Gray[45] questions the role of arms control, as he views the balance of power as a self-sufficient and self-perpetuating system of international security that he finds preferable. As a result, security-seeking actions such as increasing technical capacity (even if this is not explicitly offensive, which is particularly relevant to the wide-encompassing capacity of AI) can be perceived as threatening and met with exacerbated race dynamics. In recent times, more doctrinal exceptions to Article 2(4), such as anticipatory self-defence (especially after the events of 9/11) and humanitarian intervention, have emerged. Civilians and civilian objects are protected under the laws of armed conflict by the principle of distinction. [41] AI, being a dual-use technology, does not lend itself to unambiguously defensive (or otherwise benign) investments. These strategies are not meant to be exhaustive by any means, but they hopefully show how the outlined theory might provide practical use and motivate further research and analysis. At the same time, a growing literature has illuminated the risk that developing AI carries of leading to global catastrophe[4] and has further pointed out the effect that racing dynamics have on exacerbating this risk.
In the long term, environmental regulation in theory protects us all, but even if most countries sign the treaty and regulate, some, like China and the US, will not, for sovereignty reasons or because they are experiencing great economic gain. War is anarchic, and intervening actors can sometimes help to mitigate the chaos. Genocide, crimes against humanity, war crimes, and ethnic cleansing. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. The remainder of this subsection briefly examines each of these models and its relationship with the AI Coordination Problem. Under this principle, parties to an armed conflict must always distinguish between civilians and civilian objects on the one hand, and combatants and military targets on the other. In recent years, artificial intelligence has grown notably in its technical capacity and in its prominence in our society. For example, Jervis highlights the distinguishability of offensive and defensive postures as a factor in stability. The game is a prototype of the social contract. This democratic peace proposition not only challenges the validity of other political systems (i.e., fascism, communism, authoritarianism, totalitarianism), but also the prevailing realist account of international relations, which emphasises balance-of-power calculations and common strategic interests in order to explain the peace and stability that characterise relations between liberal democracies. Robert J. Aumann, "Nash Equilibria are not Self-Enforcing," in Economic Decision Making: Games, Econometrics and Optimisation (Essays in Honor of Jacques Dreze), edited by J. J. Gabszewicz, J.-F. Richard, and L. Wolsey (Amsterdam: Elsevier Science Publishers, 1990). Despite the damage it could cause, the impulse to go it alone has never been far off, given the profound uncertainties that define the politics of any war-torn country.
As such, Chicken scenarios are unlikely to greatly affect AI coordination strategies, but they are still important to consider as a possibility. How does the Just War Tradition position itself in relation to both Realism and Pacifism? This can be facilitated, for example, by a state leader publicly and dramatically expressing an understanding of the danger and a willingness to negotiate with other states to address it. Despite the large number of variables addressed in this paper, this is at its core a simple theory, with the aim of motivating additional analysis and research to branch off from it. Additionally, the defector can expect to receive the additional expected benefit of defecting and covertly pursuing AI development outside of the Coordination Regime. The paper proceeds as follows. Some observers argue that a precipitous American retreat will leave the country, and even the capital, Kabul, vulnerable to an emboldened, undeterred Taliban, given the limited capabilities of Afghanistan's national security forces. In his Game Theory 101 series, William Spaniel shows how to solve the Stag Hunt using pure-strategy Nash equilibrium. An example of norm enforcement provided by Axelrod (1986: 1100) is of a man hit in the face with a bottle for failing to support a lynching in the Jim Crow South. While each actor's greatest preference is to defect while their opponent cooperates, the prospect of both actors defecting is less desirable than both actors cooperating. The real peril of a hasty withdrawal of U.S. troops from Afghanistan, though, can best be understood in political, not military, terms. As of 2017, there were 193 member states of the international system as recognized by the United Nations. The familiar Prisoner's Dilemma is a model that involves two actors who must decide whether or not to cooperate in an agreement.
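The Chicken model referred to above can also be written down ordinally. A small sketch, with ranks 4 through 1 as illustrative stand-ins rather than values from the text ("T" = drive straight, "W" = swerve):

```python
# Illustrative ordinal Chicken ranks (4 = best) for the row player:
# straight-vs-swerve > both swerve > swerve-vs-straight > crash.
chicken = {("T", "W"): 4, ("W", "W"): 3, ("W", "T"): 2, ("T", "T"): 1}

def best_reply(theirs):
    """The move that maximizes the row player's rank, given the opponent's move."""
    return max(("T", "W"), key=lambda mine: chicken[(mine, theirs)])

print(best_reply("W"), best_reply("T"))  # T W
```

Each player's best reply is the opposite of the opponent's move, so there is no dominant strategy: the two asymmetric outcomes (one swerves, the other does not) are the pure-strategy equilibria, and mutual defiance is the worst outcome for both.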
In 2016, the Obama Administration developed two reports on the future of AI. Scholars of civil war have argued, for example, that peacekeepers can preserve lasting cease-fires by enabling warring parties to cooperate with the knowledge that their security will be guaranteed by a third party. But who can we expect to open the box? In so doing, they have maintained a kind of limited access order, drawing material and political benefits from cooperating with one another, most recently as part of the current National Unity Government. Formally, a stag hunt is a game with two pure-strategy Nash equilibria: one that is risk dominant and another that is payoff dominant. David Hume provides a series of examples that are stag hunts. Uneven distribution of AI's benefits could exacerbate inequality, resulting in higher concentrations of wealth within and among nations. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. Is human security a useful approach to security? Both countries lead in AI publications[34] and host the world's most prominent tech/AI companies (US: Facebook, Amazon, Google, and Tesla; China: Tencent and Baidu). A person's choice to bind himself to a social contract depends entirely on his beliefs about the other person's or people's choices. On the other hand, real-life examples of poorly designed compensation structures that create organizational inefficiencies and hinder success are not uncommon. The matrix above provides one example. The Stag Hunt is a story that became a game. Rabbits come in the form of different opportunities for short-term gain by way of graft, electoral fraud, and the threat or use of force.
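The formal claim about the Stag Hunt's two equilibria can be verified numerically. The sketch below uses an illustrative symmetric payoff matrix (the numbers 4, 3, and 0 are assumptions, not from the text) and applies the Harsanyi-Selten product-of-deviation-losses test for risk dominance:

```python
# Illustrative symmetric Stag Hunt payoffs for the row player:
# payoff[(mine, theirs)]; "S" = hunt stag, "H" = hunt hare.
payoff = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 3}
MOVES = ("S", "H")

def pure_nash():
    """All outcomes from which neither player gains by deviating alone."""
    eqs = []
    for r in MOVES:
        for c in MOVES:
            row_ok = all(payoff[(r, c)] >= payoff[(r2, c)] for r2 in MOVES)
            col_ok = all(payoff[(c, r)] >= payoff[(c2, r)] for c2 in MOVES)
            if row_ok and col_ok:
                eqs.append((r, c))
    return eqs

def deviation_loss_product(eq):
    """Harsanyi-Selten test for a symmetric diagonal equilibrium (m, m):
    the equilibrium with the larger product of unilateral deviation
    losses is risk dominant."""
    m = eq[0]
    other = "H" if m == "S" else "S"
    loss = payoff[(m, m)] - payoff[(other, m)]
    return loss * loss

eqs = pure_nash()                                   # two pure equilibria
risk_dominant = max(eqs, key=deviation_loss_product)
payoff_dominant = max(eqs, key=lambda e: payoff[e])
print(eqs, risk_dominant, payoff_dominant)
```

With these numbers, (Stag, Stag) is payoff dominant while (Hare, Hare) is risk dominant, which is precisely the tension the text describes: the safe equilibrium and the best equilibrium are not the same one.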
This table contains an ordinal representation of a payoff matrix for a Prisoner's Dilemma game. Finally, Table 13 outlines an example payoff structure that results in a Stag Hunt. In the context of international relations, this model has been used to describe the preferences of actors when deciding whether to enter an arms treaty. From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break, and the stag will be lost. As new technological developments bring us closer and closer to ASI[27], and as the beneficial returns to AI become more tangible and lucrative, a race-like competition between key players to develop advanced AI will become acute, with potentially severe consequences regarding safety. [2] Tom Simonite, "Artificial Intelligence Fuels New Global Arms Race," Wired, September 8, 2017, https://www.wired.com/story/for-superpowers-artificial-intelligence-fuels-new-global-arms-race/. The first technology revolution caused World War I. [12] Apple Inc., Siri, https://www.apple.com/ios/siri/. An approximation of a Stag Hunt in international relations would be an international treaty such as the Paris Climate Accords, where the protective benefits of environmental regulation against the harms of climate change (in theory) outweigh the benefits of economic gain from defecting. Indeed, this gives an indication of how important the Stag Hunt is to international relations more generally. Table 6. Payoff matrix for AI Coordination scenarios, where P_h(A)_[D,D] > P_h(A)_[D,C] > P_h(AB)_[C,C]. The article states that the only difference between the two scenarios is that the localized group decided to hunt hares more quickly. In order to mitigate or prevent the deleterious effects of arms races, international relations scholars have also studied the dynamics that surround arms control agreements and the conditions under which actors might coordinate with one another.
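An ordinal Prisoner's Dilemma of the kind such a table describes can be written compactly. The ranks 4 through 1 below follow the standard DC > CC > DD > CD ordering and are illustrative values, not the table's own numbers:

```python
# Illustrative ordinal Prisoner's Dilemma ranks (4 = best) for either
# player: DC > CC > DD > CD, i.e. temptation > reward > punishment > sucker.
pd = {("D", "C"): 4, ("C", "C"): 3, ("D", "D"): 2, ("C", "D"): 1}

def pd_best_reply(theirs):
    """Best move against a given opponent move."""
    return max(("C", "D"), key=lambda mine: pd[(mine, theirs)])

print(pd_best_reply("C"), pd_best_reply("D"))  # D D
```

Defection is the best reply to either move, so it strictly dominates and (D, D) is the unique equilibrium even though (C, C) ranks higher for both players. This is exactly why the Prisoner's Dilemma, unlike the Stag Hunt, cannot sustain cooperation without repetition or enforcement.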
In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development. Both nations can benefit by working together and signing the agreement. Furthermore, in June 2017, China unveiled a policy strategy document laying out grand ambitions to become the world leader in AI by 2030. Why do trade agreements even exist? If the regime allows for multilateral development, for example, the actors might agree that whoever reaches AI first receives 60% of the benefit, while the other actor receives 40% of the benefit. The story is briefly told by Rousseau in A Discourse on Inequality: "If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit." In the stag hunt, two hunters must each decide whether to hunt the stag together or hunt rabbits alone. In this section, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. [30] Today, government actors have already expressed great interest in AI as a transformative technology.
[46] Charles Glaser, "Realists as Optimists: Cooperation as Self-Help," International Security 19, 3 (1994): 50–90. It would be much better for each hunter, acting individually, to give up the total autonomy and minimal risk that bring only the small reward of the hare. Also, trade negotiations might be better thought of as an iterated game: the game is played repeatedly, and the nations interact with each other more than once over time. Finally, the paper will consider some of the practical limitations of the theory. These differences create four distinct models of scenarios we can expect to occur: Prisoner's Dilemma, Deadlock, Chicken, and Stag Hunt. We find that individuals under the time-pressure treatment are more likely to play stag (vs. hare) than individuals in the control group: under time constraints, 62.85% of players are stag-hunters. These talks involve a wide range of Afghanistan's political elites, many of whom are often painted as a motley crew of corrupt warlords engaged in tribalized opportunism at the expense of a capable government and their own countrymen. Table 14. Payoff variables for simulated Stag Hunt. [14] IBM, Deep Blue, Icons of Progress, http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/. Payoff variables for simulated Prisoner's Dilemma. For example, can the structure of distribution impact an actor's perception of the game as cooperation- or defection-dominated (and if so, should we focus strategic resources on developing accountability strategies that can effectively enforce distribution)? I also examine the main agenda of this paper: to better understand and begin outlining strategies to maximize coordination in AI development, despite relevant actors' varying and uncertain preferences for coordination. Table 3.
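The point about iterated play can be illustrated with a toy repeated Stag Hunt. In this sketch, the payoff numbers and both strategies are illustrative assumptions: a cautious player opens with hare and then mirrors its partner's previous move, while a trusting player always hunts stag.

```python
# Illustrative Stag Hunt payoffs: payoff[(mine, theirs)] for either player.
payoff = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 3}

def play(rounds=10):
    """Repeated play: 'cautious' opens with hare and then mirrors its
    partner's previous move; 'trusting' always hunts stag."""
    cautious, trusting = "H", "S"
    total_c = total_t = 0
    history = []
    for _ in range(rounds):
        history.append((cautious, trusting))
        total_c += payoff[(cautious, trusting)]
        total_t += payoff[(trusting, cautious)]
        cautious = trusting  # imitate what the partner just did
    return total_c, total_t, history

total_c, total_t, history = play()
print(history[0], history[1])  # ('H', 'S') ('S', 'S')
```

After one round of observation, the cautious player switches to stag and both lock into the payoff-dominant equilibrium for the remaining rounds, which is the intuition behind treating trade negotiations as an iterated game: repetition lets demonstrated trust substitute for enforcement.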
Let us call a stag hunt game where this condition is met a stag hunt dilemma. Each hunter can individually choose to hunt a stag or hunt a hare. Although the development of AI at present has not yet led to a clear and convincing military arms race (although this has been suggested to be the case[43]), the elements of the arms race literature described above suggest that AI's broad and wide-encompassing capacity can lead actors to see AI development as a threatening technological shock worth responding to with reinforcements or augmentations of one's own security, perhaps through bolstering one's own AI development program. For example, Stag Hunts are likely to occur when the perceived harm of developing a harmful AI is significantly greater than the perceived benefit that comes from a beneficial AI. One example payoff structure that results in a Deadlock is outlined in Table 9. This means that it remains in U.S. interests to stay in the hunt for now, because, if the game theorists are right, that may actually be the best path to bringing our troops home for good. Within these levels of analysis, there are different theories that could be considered. [6] See infra at Section 2.2, Relevant Actors. The ultimate resolution of the war in Afghanistan will involve a complex set of interlocking bargains, and the presence of U.S. forces represents a key political instrument in those negotiations. Perhaps most alarming, however, is the global catastrophic risk that the unchecked development of AI presents. If participation is not universal, the hunters cannot surround the stag and it escapes, leaving everyone who hunted stag hungry. The following subsection further examines these relationships and simulates scenarios in which each coordination model would be most likely.
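The conditional claim above, that Stag Hunts arise when perceived harm dwarfs perceived benefit, can be sketched with expected values. Everything numeric here is an illustrative assumption: the benefit and harm magnitudes, the 50% benefit share under a coordination regime, and the premise that coordination halves the probability of harm.

```python
def expected_payoffs(p_harm, benefit=10.0, harm=10.0, share=0.5):
    """Expected payoff of (defecting alone, cooperating in a regime).
    Assumptions: a lone defector keeps the full benefit but bears the
    full probability of harm; a coordination regime splits the benefit
    (share) and is assumed to halve the probability of harm."""
    defect = (1 - p_harm) * benefit - p_harm * harm
    cooperate = (1 - p_harm / 2) * benefit * share - (p_harm / 2) * harm
    return defect, cooperate

# Low perceived risk: going it alone looks better (PD-like incentives).
d_lo, c_lo = expected_payoffs(p_harm=0.1)
# High perceived risk and large harm: cooperation wins (Stag-Hunt-like).
d_hi, c_hi = expected_payoffs(p_harm=0.8, harm=50.0)
```

With low perceived risk, the unilateral path has the higher expected payoff; raising the perceived probability and magnitude of harm reverses the ordering, so mutual cooperation becomes the preferred outcome, as in a Stag Hunt.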
In their paper, the authors suggest that "both the game that underlies an arms race and the conditions under which it is conducted can dramatically affect the success of any strategy designed to end it."[58] This technological shock factor leads actors to increase weapons research and development and to maximize their overall arms capacity to guard against uncertainty. [11] In our everyday lives, we store AI technology as voice assistants in our pockets[12] and as vehicle controllers in our garages. In the event that both actors are in a Stag Hunt, all efforts should be made to pursue negotiations and to persuade rivals of peaceful intent before the window of opportunity closes.
