(Don't) Incentivise Ethical Behaviour
In the ongoing project of rescuing useful thoughts off Xwitter, here's another hot take of mine, reheated:
"Being good for a reward isn’t being good---it’s just optimal play."
The quote comes from Luke Gearing and his excellent post "Against Incentive", to which I had been reacting.
My thread was mainly intended as an enthusiastic nodding along to one of Luke's points. It was posted in 2021, and extended in 2023 after Sidney Icarus posed a question in reply. So it is really two threads.
Here they are, properly paragraphed, hopefully more cleanly expressed:
+++
(Don't) Incentivise Ethical Behaviour
This is my main problem with mechanically rewarding pro-social play: a character's ethical choice is rendered mercenary.
As Luke Gearing puts it:
"Being good for a reward isn’t being good---it’s just optimal play."
Bear in mind that I'm not saying that pro-social play can't have rewarding outcomes for players. Any decision should have consequences in the fiction. It serves the ideal of portraying a living, breathing world to have these consequences rendered diegetic:
The townsfolk are thankful; the goblins remember your mercy; pamphlets appear, quoting from your revolutionary speech.
What I am saying is that rewarding abstract mechanical benefits (XP tickets, metacurrency points, etc) for ethical decisions stinks.
+
A subtle but absolutely essential distinction when it comes to portraying and exploring ethics / morality in roleplaying games.
Say you reward bonus XP for sparing goblins.
Are your players making a decision based on how much they value life / the personhood of goblins? Or are they making a decision based on how much they want XP?
Say you declare: "If you help the villagers, the party receives a +1 attitude modifier in this village."
Are your players assisting the community because it is the right thing to do, or are they playing optimally, for a +1 effect?
+
XP As Currency
XP is the ur-example of incentive in TTRPGs. It began with D&D's gold-for-XP, and has never strayed far from that logic.
XP is still currency. Do things the GM / game designer wants you to do? Get paid.
Players use XP to buy better mechanical tools (levels, skills, abilities)---which they can in turn use to better perform the actions that will net them more XP.
Like using gold you stole from goblins to buy a sword, so you can now rob orcs.
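A toy sketch of that feedback loop, with invented numbers (not from any actual ruleset): power earns XP, XP is spent on power, and accumulation compounds.

```python
# Toy model of the XP loop described above. The earning rate and
# level cost are invented for illustration only.
def simulate(seasons, rate=10, cost=30):
    power, xp = 1, 0
    for _ in range(seasons):
        xp += rate * power      # stronger characters farm XP faster
        power += xp // cost     # XP is spent on levels, skills, abilities
        xp %= cost              # leftover XP carries over
    return power

# Growth compounds: each purchase accelerates the next round of earning,
# so doubling the seasons more than doubles the power.
print(simulate(4), simulate(8))
```

The point of the sketch is only the shape of the curve: the loop rewards whoever already played "optimally", which is exactly the mercenary logic the essay is describing.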
I genuinely feel that such systems are valuable. They are models that illuminate the drives fuelling amoral / unethical behaviour.
Material gain is the drive of land-grabbing and colonialism. Logger-barons and empires do get wealthier and more privileged, as a reward for their terrible actions.
+
If you want to present an ethical choice in play, congruent to our real-life dilemmas, there is value in asking:
"Hey, if you kill the goblins you can grab their treasure, and you will get richer. There's no reward for sparing their lives, except that they are thankful."
Which is another way of asking:
"Does your commitment to the ideal of preserving life outweigh the guaranteed material incentives for taking life?"
The ethical choice is the difficult choice, precisely because it involves---as it often does, in real life---sacrificing personal growth and gain. Doling out an XP bounty for doing the right thing makes the ethical choice moot.
"I as the player am making a mechanically optimal choice, but my character is making an ethical choice!"
A cop-out. Having your cake and eating it too. The fictional fig-leaf of empathy over a calculated decision to make a profit.
+
Sidney Icarus asks a question which I will quote here:
"... those who hold to their beliefs of good behaviour don't feel rewarded, and therefore feel punished. And that's not a good feeling.
It's an unpleasant experience to play a game where the righteous players are in rags, and the mercenary fucks have crowns and sceptres.
So, what's the design opportunity? How do we make doing the right thing feel pleasant without making it mercenary? Or, like reality, do we acknowledge that ethical acts are valuable only intrinsically and philosophically?
I have no idea how to reconcile this."
I would suggest that the above dichotomy---"righteous players in rags, mercs in crowns"---holds only if property is recognised as the sole true incentive.
+
Friends As Property
Modern games try to solve the righteous-players-in-rags "problem" in various ways. Virtue might not net you treasure or XP, but may give you:
Contact or ally slots, which you can fill in;
Relationship meters you can watch tick up;
Favour points you can cash in later;
etc.
How different are these mechanical incentives from treasure or XP, really?
Your relationships with supposedly living, breathing beings are transformed into abilities for your character: skills you can train; powers you can reliably proc. Pump your relationship score with the orc tribe until calling on them for reinforcements becomes a once-per-month ability.
Relationships become contracts. Regard becomes debt. Put your friend in an ally slot, so they become a tool.
If this is what you want play to be---totally fine! As stated previously, games say powerful things when they portray the engines of profit and property.
But I personally don't think game designers should design employer-employee relationships and disguise these as instances of mutual aid.
+
Friends As Friends
In the OSR campaigns I'm part of, I keep forgetting to record money. Which is usually a big deal in such games, seeing as they are in the grand tradition of gold-for-XP.
In both games, my characters are still 1st-Level pukes, though it's been months.
I'm having a blast, anyway.
My GMs, by virtue of running organic, reactive worlds, have made play rewarding for me. NPCs / geographies remember the party's previous actions, and respond accordingly.
I've been given gills from a river god, after constant prayer;
I've befriended a village of monsters, where we now live;
I've parleyed with the witch of a whole forest, where we may now tread;
I've a boon from the touch of a wood wose, after answering his summons.
I cannot count on the wood wose showing up. He is a character in the world, not a power I control. Calling on the wood wose might become a whole adventure.
Little of this stuff is codified in my stats or abilities or equipment list. It mostly lives under "misc notes".
Diegetic growth. Narrative change that spirals into more play.
This is the design opportunity, to me:
How do we shape TTRPG play culture in such a way that the "misc notes" gaps in our games are as fun as the systemised bits? What kinds of orientation tools must we provide? What should we say, in our advice sections?
+
A Note About Trust
The reason why it is so hard to imagine play beyond conventional incentive structures has a lot to do with trust.
Sidney again:
"One of the core issues is the 'low trust table'. I'm not designing just for myself but for my audience. For a product. How much can I ask purchasers and their friends to codesign this part with me?"
Nerds love numbers and things we can write down in inventories or slots because they are sureties. We've learned to fear fiat or player discretion, traumatised as we are by Problem GMs or That Guys.
The reason the poverty in Sidney's hypothetical ("righteous players are in rags") sounds so bad is that, in truth, it represents risk at the game table. If you don't participate in the mechanics legible to your ruleset (the XP and gear to do more game things), you risk gradually being excluded from play.
You have no assurance your fellow players will know how to hold space for you; be considerate; work together to portray a living world where NPCs react in meaningful ways---ways that will be fun and rewarding for everybody playing.
You are giving up the guarantee of mechanical relevance for the possibility of fun interactions and creative social play.
+
The "low trust table" is learned behaviour---the cruft of gamer culture and trauma.
When I game with folks new to TTRPGs, they tend to be decent, considerate. I think there's enough anecdotal evidence from folks playing with school kids / newcomers / etc to suggest my experience is not unique.
If the "low trust table" is indeed learned behaviour, it can be unlearned.
Which rules conventions, now part of the hobby mainstream, were the result of designers designing defensively---shadowboxing against terrible players and the spectre of "unfairness"?
How can we "undesign" such conventions?
Lack of trust is a problem that we have to address in play culture, not rulesets. You cannot cook a dish so good it forces diners to have good table manners.
+
This is too long already. I'll end with an observation:
Elfgames are not praxis, but doesn't this specific dilemma in the microcosm of our silly elfgames ultimately mirror real-world ethics?
To be moral is to trust in a better world; to be amoral / immoral is to hedge against the guarantee of a worse one.
+++
Further Reading
Some words from around the TTRPG community about incentive and advancement in games:
+
However, the reason there is a big debate about this is that behavioural incentives in games clearly do work, either entirely or at various levels. This applies outside gaming, as well. Why do advertising companies and retail business use "rewards" structures to convince people to buy more of their products? Why do people chase after "Likes" on social media?
A comment by Paul_T to "A Hypothesis on Behavioral Incentives"
from a discussion on Story-Games.com
+
the structure and symbolism of the D&D game align with certain structures and values of patriarchy. The game is designed to last infinitely by shifting goalposts of character experience in terms of increasing amounts of gold pieces acquired; this resembles the modus operandi of phallic desire which seeks out object after object (most typically, women) in order to quench a lack which always reasserts itself.
D&D's Obsession With Phallic Desire
from Traverse Fantasy
+
In short, my feeling is that rewarding players with character improvement in return for achieving goals in a specific way impedes some of the key strengths of TTRPGs for little or no benefit in return.
Incentives
from Bastionland
+
When good deeds arise naturally out of the players choices, especially when players rejected other options that were more beneficial to them, it is immensely satisfying. Far more than if players are just assumed to be heroic by default. It gives agency and meaning to player choice.
Make Players Choose To Be Kind
from Cosmic Orrery
+
Much has been made about 1 GP = 1 XP as the core gameplay loop driver of TSR D+D. But XP for gold retrieved also winds up being something of a de facto capitalistic outlook as well. Success is driven by accumulation of individual wealth -- by an adventuring company, even! So what's a new framework that can be used for underpinning a leftist OSR campaign?
A Spectre (7+3 HD) Is Haunting the Flaeness: Towards a Leftist OSR
from Legacy of the Bieth
+
Growth should be tied to a specific experience occurring in the fiction.
It is more important for a PC to grow more interesting than more skilled or capable.
PCs experience growth not necessarily because they’ve gotten more skill and experience, but because they are changed in a significant way.
Cairn FAQ
from Cairn RPG / Yochai Gal
+++
Thank you Ram for the Story-Games.com deep cut!
( Image sources:
https://knowyourmeme.com/memes/neuron-activation
https://en.wikipedia.org/wiki/Majesty:_The_Fantasy_Kingdom_Sim
https://www.economist.com/sites/default/files/special-reports-pdfs/10490978.pdf
https://varnam.my/34311/untold-tales-of-indian-labourers-from-rubber-plantations-during-pre-independence-malaya/
https://nobonzo.com/ )
+
PS: used with permission from Sandro, art by Maxa', a reminder to self: