Should We Speak Up When We Have Something to Add?
A large swath of social psychology research has shown that, in group discussions, group members tend to focus on the information already available to the group rather than trying to share new information (Stasser and Titus 1985, 1987; for a survey, see Wittenbaum et al. 2004). In light of that research, many have recommended that individuals “speak up” when they disagree or have something new to add to a discussion. The assumption is that it’s better for the group if individuals contribute more of their private, non-shared information to the group than if they keep it to themselves. Arguments that people should contribute to group discussions in some ways rather than others can be found in many parts of organizational dynamics, management, psychology, law, and philosophy. Much of the work of Cass Sunstein (particularly Sunstein 2002) fits into this category as well. Sunstein and Hastie (2014), for example, give six bits of advice to groups to encourage better information sharing. These include having the group appoint a “red team” that tries to adduce arguments against a given proposal, and openly assigning different roles to members of the group so that information is more effectively elicited from members in their roles.
Group discussions are complex phenomena. A common technique for investigating complex phenomena is to start with a simple model of the phenomenon to be explained and use it to guide our research. So suppose we’re trying to model the deliberation of a jury. What might that look like? A natural starting place would be to model deliberation with a simple network diffusion model. In such a model, we’d start with a collection of agents, each of which has some “neighbors” they can communicate with. At the start of the model, some agents would have information that bears on the guilt or innocence of the defendant. The model then proceeds step by step: if a juror has a piece of information at one step, then, with some probability, each of the juror’s neighbors has it at the next step. Information spreads across the network, and in a connected network, we’d expect everyone to get it eventually. In this model, the movement of information mirrors the movement of an infection.
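To make that picture concrete, here is a minimal Python sketch of such a diffusion model. The ring-shaped network, the transmission probability, and the number of steps are purely illustrative choices, not features of any particular model in the literature:

```python
import random

def run_diffusion(n_agents=12, p_transmit=0.3, n_steps=50, seed=0):
    """Minimal diffusion sketch: a piece of information spreads from agent to
    neighbor with a fixed probability at each step (parameters are illustrative)."""
    rng = random.Random(seed)
    # Illustrative ring network: each agent can talk to its two neighbors.
    neighbors = {i: [(i - 1) % n_agents, (i + 1) % n_agents] for i in range(n_agents)}
    informed = {0}  # one juror starts with the piece of information
    for _ in range(n_steps):
        newly_informed = set()
        for agent in informed:
            for nb in neighbors[agent]:
                if nb not in informed and rng.random() < p_transmit:
                    newly_informed.add(nb)
        informed |= newly_informed
    return len(informed) / n_agents  # fraction of the group that now has it

print(run_diffusion())
```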
It’s not clear that group deliberation can be fruitfully understood with this simple model though. Why think that information moves in networks like an infection? Maybe it moves like estimations of quantities, which are more naturally modeled as averaging? Or maybe it’s best modeled like genetic information transfer, which involves mutation and crossover? Simple models of communication like these have been explored by authors like Hegselmann and Krause (2002), Zollman (2007), and Weisberg and Muldoon (2009). (In a 2015 article in Philosophy of Science, Patrick Grim and I, along with our co-authors, show that these different forms of information transfer have very different dynamics and levels of fitness.)
Unfortunately, the idea that we should speak up when we have something to add can’t naturally be captured by any of those models. So here I’d like to introduce a new model, one that is a simple extension of the diffusion model from above and that might give us more traction in understanding the dynamics of group deliberation without much added complexity. The idea is to start with propositions that the jurors know. We can suppose those are about things like where the defendant was seen on the night of the crime or whether the weapon used in the crime was available to the defendant. Some collections of these propositions constitute arguments for thinking the defendant is guilty or not. And finally, each argument supports its conclusion with some strength. Then, as in the simple diffusion model described above, we treat group deliberation as the exchange of propositions. Since the propositions are premises of arguments for or against positions on the main issue (guilt or innocence), we can think of the exchange of propositions as the jurors sharing why they think what they think.
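To fix ideas, here is one way the basic ingredients might be represented in code. This is a minimal sketch: the “guilty”/“innocent” labels and the rule for aggregating argument strengths (summing the strengths of the arguments whose premises are all available) are simplifying assumptions made just for the illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Argument:
    premises: frozenset   # the propositions (by index) that make up the argument
    conclusion: str       # "guilty" or "innocent" (illustrative labels)
    strength: float       # how strongly the argument supports its conclusion

def supported_verdict(known_props, arguments):
    """Return the verdict favored by the arguments whose premises the juror has,
    summing argument strengths (the summing rule is a simplifying assumption)."""
    totals = {"guilty": 0.0, "innocent": 0.0}
    for arg in arguments:
        if arg.premises <= known_props:
            totals[arg.conclusion] += arg.strength
    return max(totals, key=totals.get)

# Tiny illustration with made-up propositions 0-2:
args = [Argument(frozenset({0, 1}), "guilty", 1.4),
        Argument(frozenset({2}), "innocent", 0.6)]
print(supported_verdict({0, 1, 2}, args))  # "guilty": 1.4 > 0.6
```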
Consider three simple ways jurors might share their information: First and most simply, the juror might share a piece of information at random. I’ll call this “random sharing.” Second, the juror might share the piece of information that adds the most to the conversation, in light of what has already been shared (i.e. it has the greatest impact on what’s to be believed based on the publicly available arguments). I’ll call this “influential sharing.” Finally, the juror might act adversarially by sharing the piece of information that most pushes what’s supported by the publicly available arguments in the direction of what they already believe. I’ll call this “biased sharing.”
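Here is a hedged sketch of how the three sharing rules might be implemented, assuming a proposition’s impact is measured as the change it makes to the net balance of public support (arguments are represented here as plain (premises, conclusion, strength) triples for brevity):

```python
import random

def support_scores(props, arguments):
    """Total strength for each verdict from arguments whose premises are all in
    `props`; arguments are (premises, conclusion, strength) triples."""
    scores = {"guilty": 0.0, "innocent": 0.0}
    for premises, conclusion, strength in arguments:
        if premises <= props:
            scores[conclusion] += strength
    return scores

def net_support(props, arguments):
    scores = support_scores(props, arguments)
    return scores["guilty"] - scores["innocent"]

def random_sharing(private, public, arguments, rng=random):
    """Share one not-yet-public proposition at random."""
    candidates = private - public
    return rng.choice(sorted(candidates)) if candidates else None

def influential_sharing(private, public, arguments, rng=random):
    """Share the proposition that most changes the balance of public support."""
    candidates = private - public
    baseline = net_support(public, arguments)
    impact = lambda p: abs(net_support(public | {p}, arguments) - baseline)
    return max(candidates, key=impact) if candidates else None

def biased_sharing(private, public, arguments, my_verdict, rng=random):
    """Share the proposition that pushes public support furthest toward the
    juror's own current verdict."""
    candidates = private - public
    sign = 1.0 if my_verdict == "guilty" else -1.0
    push = lambda p: sign * net_support(public | {p}, arguments)
    return max(candidates, key=push) if candidates else None
```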
Abstracting away from the contact network (assuming everyone can hear everyone), these different sharing methods do have a major impact on what the members of the group believe under a wide range of conditions. We can measure their success in terms of what proportion of the group has the attitude about the main question that’s supported by all of the propositions. That data is displayed in Table 1. In the data displayed, we assumed that there are 25 agents and 100 propositions, and that 100 subsets of those propositions, each of size 1 to 3, are designated as arguments. We also assumed that each argument is randomly assigned a conclusion, that the strengths of the arguments are drawn from an exponential distribution with a mean of 1, that everyone starts out with 10 random (possibly different) propositions, and that at each round of the model one random agent is chosen to speak according to their sharing rule.
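For readers who want to see the moving parts in one place, here is a compact sketch of that setup. For brevity it uses random sharing (the other rules sketched above can be swapped in), and the strength-summing and tie-breaking rules are again simplifying assumptions:

```python
import random

def make_world(n_props=100, n_args=100, rng=random):
    """Arguments as described above: each is a random subset of 1-3 propositions
    with a random conclusion and an exponentially distributed strength (mean 1)."""
    arguments = []
    for _ in range(n_args):
        premises = frozenset(rng.sample(range(n_props), rng.randint(1, 3)))
        conclusion = rng.choice(["guilty", "innocent"])
        arguments.append((premises, conclusion, rng.expovariate(1.0)))
    return arguments

def verdict(props, arguments):
    """Verdict supported by the arguments completed within `props` (summing
    strengths; ties broken toward "guilty" arbitrarily)."""
    g = sum(s for prem, c, s in arguments if c == "guilty" and prem <= props)
    i = sum(s for prem, c, s in arguments if c == "innocent" and prem <= props)
    return "guilty" if g >= i else "innocent"

def run_once(n_agents=25, n_props=100, n_steps=1000, seed=0):
    rng = random.Random(seed)
    arguments = make_world(n_props=n_props, rng=rng)
    target = verdict(set(range(n_props)), arguments)   # what all propositions support
    agents = [set(rng.sample(range(n_props), 10)) for _ in range(n_agents)]
    public = set()
    for _ in range(n_steps):
        speaker = rng.randrange(n_agents)
        unshared = agents[speaker] - public
        if unshared:                                    # random sharing, for brevity
            p = rng.choice(sorted(unshared))
            public.add(p)
            for a in agents:                            # everyone hears everyone
                a.add(p)
    return sum(verdict(a, arguments) == target for a in agents) / n_agents

print(run_once())
```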
What we see is that groups of influential sharers do significantly better than groups of the other kinds. Adversarial groups do the worst, performing significantly worse than even groups that simply share information at random. This supports the idea that those who advocate for different sharing methods might be on to something: the model shows us how picking the right sharing method could make a big difference.
Notice that these results assume that each individual juror can remember all of the information they hear. Of course, humans have limited memories, so this is an unrealistic assumption. Does its unrealism pose a problem? Unrealism in models isn’t always a problem. Undamped simple harmonic oscillator models of spring movements are good models despite the fact that no real springs are undamped. That said, if the unrealistic assumption makes a big difference to the qualitative character of the model, it might be interesting to see the model with the unrealistic assumption relaxed.
So suppose we limit the memories of the individual jurors. We then have to decide how they should manage their limited memories once they’re full. Suppose we have a juror who has a memory limit of 10 propositions (and assume that arguments do not take up any additional memory since they are constituted by propositions). Consider three ways they might deal with an incoming 11th proposition when they already have 10: First, the juror might just forget one of the 11 propositions at random and remember the rest. I’ll call this “random memory.” Alternatively, the juror might forget the proposition that contributes the least informational content to what they believe (i.e. they would forget the proposition whose inclusion in memory contributes the least overall strength to either potential belief content). I’ll call this “weight-minded memory.” Another alternative is that the juror could place a premium on the coherence of their belief state and thereby forget a reason that goes against what they would all-things-considered believe on the basis of all 11 pieces of information. For precision, let’s assume they drop the piece of information that tells against what is supported by all 11 reasons and that contributes the least informational content. I’ll call this “coherence-minded memory.”
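Here is a sketch of how the three memory rules might look in code, assuming a proposition’s contribution is measured as the strength lost if it were forgotten; the fallback when nothing in memory tells against the juror’s verdict is also a simplifying assumption:

```python
import random

def support(props, arguments):
    """Total strength for each verdict from arguments whose premises are all in
    `props`; arguments are (premises, conclusion, strength) triples."""
    scores = {"guilty": 0.0, "innocent": 0.0}
    for premises, conclusion, strength in arguments:
        if premises <= props:
            scores[conclusion] += strength
    return scores

def contribution(p, memory, arguments, conclusion=None):
    """Strength lost if proposition `p` were forgotten, optionally counting only
    arguments for one conclusion."""
    with_p, without_p = support(memory, arguments), support(memory - {p}, arguments)
    keys = [conclusion] if conclusion else ["guilty", "innocent"]
    return sum(with_p[k] - without_p[k] for k in keys)

def random_memory(memory, arguments, rng=random):
    """Forget one remembered proposition at random."""
    return memory - {rng.choice(sorted(memory))}

def weight_minded_memory(memory, arguments, rng=random):
    """Forget the proposition contributing the least overall strength."""
    return memory - {min(memory, key=lambda p: contribution(p, memory, arguments))}

def coherence_minded_memory(memory, arguments, rng=random):
    """Forget the weakest proposition telling against the all-things-considered verdict."""
    scores = support(memory, arguments)
    best = max(scores, key=scores.get)
    against = "innocent" if best == "guilty" else "guilty"
    dissenters = [p for p in memory if contribution(p, memory, arguments, against) > 0]
    if dissenters:
        victim = min(dissenters, key=lambda p: contribution(p, memory, arguments, against))
    else:
        # Fallback (an assumption): nothing tells against the verdict, so drop
        # the proposition contributing the least overall strength.
        victim = min(memory, key=lambda p: contribution(p, memory, arguments))
    return memory - {victim}
```

Each rule takes the over-full memory (the 10 remembered propositions plus the incoming 11th) and returns the pruned set.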
Table 2 shows the impact of the different memory management methods on what percent of the group has the attitude supported by all the propositions. What we see is that with limited memories, the relative influence of the sharing methods changes. For unlimited agents, biased sharing performed significantly worse than the other two methods. But when the jurors just forget things at random when their memory is full, biased sharers do significantly better than agents who use either of the two other systematic ways of sharing information (p < .05 in both cases for t-tests, KS-tests, and Wilcoxon rank-sum tests). More generally, the methods that limited agents use to share information have a very muted impact on the group outcomes compared with unlimited agents. Whereas for unlimited agents, the sharing method made the difference between 63 percent of the group getting it right and 88 percent getting it right (a 25 percentage point difference), for these limited agents, the difference is less than 6 percentage points.
When we look at data from a much larger dataset (including 1,000 runs from each of the 288 different combinations of parameters), we see that the results described above are robust: For unlimited agents, which sharing method is used makes a significant difference to the outcome, and for limited agents, the impact of the sharing methods is much more muted. For all of the sharing rules, weight-minded memory management is best, coherence-minded is second, and random rememberers do worst. There are no similarly consistent patterns to be found when we hold fixed the memory methods and look at the impact of different sharing methods. In general, then, these data suggest that how agents manage their memory is a very significant factor in the overall performance of the group. This is further confirmed by comparing sums of squared differences, where we see that the amount of variation attributable to the memory management method is much higher than the amount attributable to the sharing method: How agents forget information explains 86 times as much variation in the outcome as how agents share their information!
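A between-groups sum of squares is one standard way to make that kind of comparison; here is a minimal sketch, assuming the simulation results come as (sharing rule, memory rule, outcome) triples:

```python
from collections import defaultdict

def between_group_ss(runs, factor):
    """Between-group sum of squares for one factor: variation in the outcome
    explained by grouping runs by that factor's level. `runs` is a list of
    (sharing_rule, memory_rule, outcome) triples; `factor` is 0 or 1."""
    outcomes = [r[2] for r in runs]
    grand_mean = sum(outcomes) / len(outcomes)
    groups = defaultdict(list)
    for r in runs:
        groups[r[factor]].append(r[2])
    return sum(len(v) * (sum(v) / len(v) - grand_mean) ** 2 for v in groups.values())

# Hypothetical usage on a list of simulation results:
# memory_ss = between_group_ss(runs, factor=1)
# sharing_ss = between_group_ss(runs, factor=0)
# print(memory_ss / sharing_ss)  # how much more variation memory management explains
```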
Of course, what percentage of the group has the right attitude after 1,000 steps of the model isn’t the only metric of group performance we might be interested in. We might want to look at data from other steps, but we might also want to know how the sharing and memory rules affect whether and how quickly the groups form a consensus, whether the group has the right attitude when it does converge, and whether and how quickly members of the group stop changing their beliefs (regardless of whether there’s a consensus). In each of these cases, the same general lesson as above applies: How individuals in a group manage their memory has an impact that’s at least roughly on par with the impact of how agents share their information.
So, should we be encouraging people to speak up when they have something to add to a group conversation? The answer is less than obvious to me. In real group discussions, like jury deliberations, we are limited in what we can effectively encourage group members to focus on. If we give jurors too much guidance, they’ll be overwhelmed and won’t be able to fully comply with it. What the above results suggest is that if we can give juries advice about only one kind of thing, then maybe the advice should be about what to try to remember from the discussion, not how they should contribute to it. Of course, we might find that the result from the model doesn’t match the world, but we at least have a hypothesis to test.
Much of the work described here was done in collaboration with Patrick Grim, Aaron Bramson, William “Zev” Berger, Karen Kovaka, and Jiin Jung. For more information about our group, see the website for the Computational Social Philosophy Lab.
- Daniel Singer
May 29, 2018