Moderation (whether automated or human) can address what we call "acute" harm: harm caused directly by individual pieces of content. But a new approach is needed because there are also "structural" problems. Problems such as discrimination, impaired mental health, and loss of civic trust manifest across the product in ways that cannot be traced to any single piece of content. A famous example of this kind of structural problem is Facebook's 2012 "emotional contagion" experiment, which showed that users' moods (as measured by their behavior on the platform) varied measurably depending on which version of the product they were exposed to.
In the backlash that followed after the results became public, Facebook (now Meta) ended this kind of deliberate experimentation. But the fact that such impacts are no longer measured doesn't mean product decisions have stopped producing them.
Structural problems are a direct result of product choices. Product managers at companies like Facebook, YouTube, and TikTok are incentivized to focus on maximizing time and engagement on their platforms. And experimentation is still very much alive: almost every product change is rolled out to small test groups through randomized controlled trials. To assess progress, companies use rigorous management processes tied to their central missions (known as Objectives and Key Results, or OKRs), and even use those results to determine bonuses and promotions. Responsibility for dealing with the consequences of product decisions is often placed on other teams, which typically sit lower in the hierarchy and have less authority to address root causes. Those teams can generally respond to acute harms, but often cannot address problems caused by the product itself.
With attention and focus, this same product development structure could be turned to the question of social harm. Consider Frances Haugen's congressional testimony last year, along with media disclosures about Facebook's alleged impact on youth mental health. Facebook responded to the criticism by explaining that it had studied whether adolescents believe the product has a negative effect on their mental health, and whether that perception makes them less likely to use the product, not whether the product actually has an adverse effect. While the answer may address that particular controversy, it illustrates that a study aimed directly at the question of mental health, rather than at its impact on engagement, would not be hard to run.
Incorporating assessments of systemic harm will not be easy. We would have to sort out what can actually be measured rigorously and systematically, what we should require of companies, and what issues to prioritize in any such assessment.
Companies could implement such protocols themselves, but their financial interests too often cut against meaningful restrictions on product development and growth. That is, in fact, the standard case for regulation that acts on behalf of the public. Whether it takes the form of a new mandate for the Federal Trade Commission or harm-mitigation guidance from a new government agency, the regulator's job would be to work with technology companies' product development teams to design workable protocols, usable during the course of product development, to assess meaningful signals of harm.
That approach may sound cumbersome, but adding these types of protocols should be straightforward for the largest companies (the only ones to which the regulation should apply), because they have already built randomized controlled trials into their development processes to measure the effectiveness of their products. Defining standards will be more time-consuming and complex, but the actual testing would require little involvement from regulators: it would only require asking diagnostic questions alongside the usual growth-related ones and then making that data accessible to external reviewers. Our forthcoming paper at the 2022 ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization explains this process in more detail and outlines how it could be effectively set up.
As products reaching tens of millions of users are tested for their ability to drive engagement, companies would need to ensure that those products, at least in aggregate, abide by a "don't make the problem worse" principle. Over time, more aggressive standards could be established to roll back existing harmful effects of already-approved products.
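To make the mechanic concrete: attaching a diagnostic harm metric to the randomized controlled trials companies already run, then gating a launch on a "don't make the problem worse" check, could look roughly like the sketch below. This is an illustration under stated assumptions, not the protocol from our paper; the metric names ("engagement", "wellbeing"), the harm margin, and all data are hypothetical and simulated.

```python
import math
import random
import statistics

# Hypothetical A/B test: each user sees either the control or the treatment
# version of a product change. Alongside the usual growth metric
# (engagement), we log a diagnostic harm proxy (here, a simulated
# self-reported wellbeing score). All values below are made up.
random.seed(0)

def simulate_group(n, engagement_mean, wellbeing_mean):
    return [
        {
            "engagement": random.gauss(engagement_mean, 1.0),
            "wellbeing": random.gauss(wellbeing_mean, 1.0),
        }
        for _ in range(n)
    ]

control = simulate_group(5000, engagement_mean=10.0, wellbeing_mean=5.0)
treatment = simulate_group(5000, engagement_mean=10.4, wellbeing_mean=4.9)

def diff_ci(a, b, key, z=1.96):
    """95% confidence interval for the difference in means (b minus a)."""
    xa = [row[key] for row in a]
    xb = [row[key] for row in b]
    diff = statistics.mean(xb) - statistics.mean(xa)
    se = math.sqrt(statistics.variance(xa) / len(xa)
                   + statistics.variance(xb) / len(xb))
    return diff - z * se, diff + z * se

eng_lo, eng_hi = diff_ci(control, treatment, "engagement")
well_lo, well_hi = diff_ci(control, treatment, "wellbeing")

print(f"engagement effect (treatment - control): [{eng_lo:.3f}, {eng_hi:.3f}]")
print(f"wellbeing effect  (treatment - control): [{well_lo:.3f}, {well_hi:.3f}]")

# "Don't make the problem worse" gate: ship only if the harm proxy is not
# credibly worse than control, i.e. the CI lower bound clears a small
# pre-registered margin (a non-inferiority-style check).
HARM_MARGIN = -0.05
ship = well_lo > HARM_MARGIN
print("launch decision:", "pass" if ship else "hold for review")
```

In this simulation the treatment wins on engagement but measurably depresses the wellbeing proxy, so the gate holds the launch for review even though the growth metric alone would have shipped it. That asymmetry is the point of asking diagnostic questions alongside growth ones: the data already collected for OKRs can, with one extra metric, also surface structural harm.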