AI helps researchers model the impact of social norms



How do humans learn to cooperate? It is a question behavioral anthropologists have been studying for decades. Social norms – that is, common understandings or informal rules, like dining etiquette and fashion sense – are thought to play a role, but it is difficult to measure how strongly they shape societies and how they are in turn affected by other factors.

Fortunately, this is where artificial intelligence (AI) comes in.

In a paper recently published on the arXiv.org preprint server (“Understanding the impact of partner choice on cooperation and social norms through multi-agent reinforcement learning”), scientists describe an AI system trained with reinforcement learning – a technique that uses rewards to drive agents toward goals – to understand how interactions within a society affect that society’s collective outcomes.

“We first stud[y] the emergence of norms and then the emergence of cooperation in the presence of norms,” the authors of the paper explain. “[Norms] have been shown to have a significant impact on the collective outcomes and progress of a society, [but] while it has been argued that normative behavior emerges from societal interactions, it is not clear what behavior is likely to emerge given a certain societal configuration.”

The researchers modeled two social dilemmas as games: a cooperation game that exposes tensions between individual goals and the group goal, and a coordination game that examines conformity, with each agent having only a partial observation of its environment. The agents – a group of 50 in total – were tasked with achieving the highest cumulative score while also trying to maximize their individual scores. The emergence of norms was assessed by tracking the number of agents who converged on a particular behavior.
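The setup described above – many agents, pairwise play, and norm emergence measured as convergence on one behavior – can be sketched in a few lines. This is a minimal illustration, not the paper’s implementation: it assumes a two-action coordination game, random pairing, and simple independent epsilon-greedy learners, all of which are simplifications chosen for brevity.

```python
import random
from collections import Counter

# Toy sketch (assumed details, not the paper's code): 50 independent
# learners repeatedly play a 2-action coordination game in random pairs.
# Matching your partner's action pays 1, mismatching pays 0.
N_AGENTS, N_ACTIONS, EPISODES = 50, 2, 5000
ALPHA, EPSILON = 0.1, 0.1

q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]

def act(i):
    # Epsilon-greedy action selection over agent i's value estimates.
    if random.random() < EPSILON:
        return random.randrange(N_ACTIONS)
    return max(range(N_ACTIONS), key=lambda a: q[i][a])

for _ in range(EPISODES):
    agents = list(range(N_AGENTS))
    random.shuffle(agents)
    for i, j in zip(agents[::2], agents[1::2]):  # random pairing
        ai, aj = act(i), act(j)
        r = 1.0 if ai == aj else 0.0             # coordination payoff
        q[i][ai] += ALPHA * (r - q[i][ai])
        q[j][aj] += ALPHA * (r - q[j][aj])

# "Norm emergence" proxy: share of agents whose greedy action agrees
# with the most common greedy action in the population.
greedy = [max(range(N_ACTIONS), key=lambda a: q[i][a]) for i in range(N_AGENTS)]
share = Counter(greedy).most_common(1)[0][1] / N_AGENTS
print(f"largest convention share: {share:.2f}")
```

A share near 1.0 would indicate that the population has settled on a single convention, which is the kind of convergence the researchers tracked.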

In the experiments, individual agents repeatedly interacted with others, either by choice or at random, and learned behavior from their experiences. After 10,000 episodes of the coordination game, agents that could choose their partners were able to maintain norms and resist change in the presence of a new type of agent – “influence” agents – that played a fixed strategy. Meanwhile, around 5,000 episodes of the cooperation game suggested that partner choice promotes collaboration in the presence of norms: when agents were free to choose their partners, they matched almost exclusively with other agents that had been cooperative in the past.
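The partner-choice dynamic the experiments point to – cooperative agents being sought out, defectors being avoided – can also be illustrated with a toy model. The sketch below is an assumption-laden simplification, not the paper’s method: agents have fixed cooperation probabilities, a running reputation estimate stands in for learned trust, and partners are drawn with probability proportional to reputation.

```python
import random

# Illustrative sketch (assumed details, not the paper's code): agents play
# a pairwise donation game and prefer partners with higher reputations.
N, ROUNDS = 50, 2000
COST, BENEFIT = 1.0, 3.0

coop_prob = [random.random() for _ in range(N)]  # fixed strategies, for illustration
reputation = [0.5] * N                           # running cooperativeness estimate
score = [0.0] * N

def choose_partner(i):
    # Partner choice: weight candidates by observed reputation.
    others = [j for j in range(N) if j != i]
    weights = [reputation[j] for j in others]
    return random.choices(others, weights=weights, k=1)[0]

for _ in range(ROUNDS):
    i = random.randrange(N)
    j = choose_partner(i)
    for a, b in ((i, j), (j, i)):
        if random.random() < coop_prob[a]:
            # a cooperates: pays COST so that b gains BENEFIT.
            score[a] -= COST
            score[b] += BENEFIT
            reputation[a] = 0.9 * reputation[a] + 0.1
        else:
            reputation[a] = 0.9 * reputation[a]

most_trusted = max(range(N), key=lambda k: reputation[k])
print(f"most trusted agent cooperates with prob {coop_prob[most_trusted]:.2f}")
```

Because reputation tracks past cooperation and drives partner selection, untrustworthy agents are gradually shunned – the stabilizing mechanism the researchers describe.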

“[I]t becomes more difficult to influence or regulate the behavior of a society through assimilation or supervision when agents are free to choose with whom they interact within the society,” the researchers wrote. “This is the key factor that stabilizes cooperation, because untrustworthy agents are avoided and cooperative behavior can be reinforced as the social norm is reinforced.”

They believe the results could inform the design of future autonomous systems, and perhaps shed light on the emergence of cooperation in human and animal societies.

VentureBeat



About Marjorie C. Hudson
