Image of player safety control panel.

Human-first moderation for Among Us 3D

Designing a moderator-centered player safety system from scratch in just four months

With the launch of Among Us 3D (then Among Us VR) just months away, the game lacked any player safety infrastructure. I was tasked with designing a user reporting backend and moderator-facing panel from the ground up. Through rapid prototyping, iteration, and early, frequent stakeholder feedback, we delivered a solution centered on reducing agent cognitive load - one that processed hundreds of thousands of infractions post-launch and empowered moderators to make the decisions that kept the game's community safer, faster.

Years

2022-2024

Disciplines

Design engineering

Product design

Project management

Research

Built in collaboration with

Mike T, Jennifer Rabbit, Charlie Amis, Laura Hall, Derek Ledford, Bryan Rode, Innersloth, Modulate, Keywords Studios, and all of the Among Us 3D team

Challenge

Among Us 3D was Schell Games’ first major online multiplayer experience at this scale. With a launch date looming, we faced a critical gap: no system to handle user reports, protect our players, or empower our support agents. The immediate question among us(!) was:

How do we design and develop the entire backend and frontend infrastructure from the ground up, with no pre-existing tools to leverage and few existing examples of player safety software to learn from - and on a tight deadline?

The first step was to interview internal stakeholders to define our immediate priorities and constraints. I also conducted a competitive analysis of existing player safety solutions from other game studios and a series of interviews with our partners in player safety to identify best practices and common pitfalls. Through this research, a pattern of sentiments emerged:

"It's a struggle to keep up with the influx of reports"; "Trying to make the right decisions takes time"; "It's hard to deal with the toxicity".

And with those sentiments, a new question to ask ourselves:

How do we prevent the inevitable burnout and decision fatigue that comes from sifting through a seemingly endless queue of potentially toxic user reports?

This question provided the foundation for the core success metrics we kept in mind while designing the app - chief among them time-to-decision. An endless queue of reports is a recipe for burnout, and moderators struggling with burnout risked slowing our player safety efforts to a crawl. Our core design objective became creating a human-first experience for our moderators: one that lets them efficiently analyze player data and make decisions while minimizing decision fatigue and exposure to toxicity.

Solution

Although our initial design concepts called for multiple complex views, the final design consisted of just two core views engineered to work together to create a simple, powerful, and sustainable workflow for our moderators.

The Player Card

The "Player Card" is a comprehensive dashboard that provides a holistic view of a player's history and behavior. The card consolidates all of a player's infractions and account details into a single, scannable view - including both AI-generated insights and notes written by other moderators.

This gives moderators important context up front and eliminates the need to manually search for a player's history. It also allows moderators to take action on multiple incidents at once, greatly reducing time-to-decision per incident.
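To make the "act on multiple incidents at once" idea concrete, here is a minimal sketch of how a Player Card might consolidate infractions and support a batch resolution. All names (`Infraction`, `PlayerCard`, `resolve_all`, the category strings) are hypothetical illustrations, not the production schema:

```python
from dataclasses import dataclass, field

@dataclass
class Infraction:
    incident_id: str
    category: str          # e.g. "voice_abuse", "cheating" (example labels)
    status: str            # "open" or "resolved"
    ai_summary: str = ""   # machine-generated insight attached to the report

@dataclass
class PlayerCard:
    player_id: str
    display_name: str
    infractions: list = field(default_factory=list)
    moderator_notes: list = field(default_factory=list)

    def open_incidents(self):
        """Everything still awaiting a decision, in one scannable place."""
        return [i for i in self.infractions if i.status == "open"]

    def resolve_all(self, resolution, moderator):
        """Act on every open incident at once - the batch action that
        drives down time-to-decision per incident."""
        for inc in self.open_incidents():
            inc.status = "resolved"
        self.moderator_notes.append(f"{moderator}: {resolution}")
```

Resolving at the card level rather than the incident level is what lets one decision close out a player's whole open history.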

The Queue System

These cards are then sorted into the Backlog - a store of all Player Cards with open incidents. Moderators "check out" a small, manageable batch of 15-25 player cards at a time, assigning them to that moderator's private Queue and removing them from the global Backlog. This allows moderators to own the resolution process for every card in their Queue from start to finish.

With this approach, we turned a seemingly infinite task into a series of achievable goals - a well-established way to reduce cognitive load. It also guarantees that no two moderators end up working the same Player Card.
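The checkout mechanic above can be sketched as an in-memory priority heap with an exclusivity check. This is a hypothetical illustration, assuming the production system backs the Backlog with a database and real transactions; the class and method names are mine:

```python
import heapq
import threading

class Backlog:
    """Global store of card IDs with open incidents, ordered by priority."""

    def __init__(self):
        self._heap = []            # (-priority, card_id): max-heap via negation
        self._lock = threading.Lock()
        self._checked_out = {}     # card_id -> moderator_id holding it

    def add(self, card_id, priority):
        with self._lock:
            heapq.heappush(self._heap, (-priority, card_id))

    def checkout(self, moderator_id, batch_size=20):
        """Atomically move the highest-priority cards into one moderator's
        private Queue, so the same card can never be claimed twice."""
        with self._lock:
            batch = []
            while self._heap and len(batch) < batch_size:
                _, card_id = heapq.heappop(self._heap)
                if card_id not in self._checked_out:
                    self._checked_out[card_id] = moderator_id
                    batch.append(card_id)
            return batch
```

Holding one lock across the pop-and-assign step is what makes the checkout exclusive: a card leaves the global Backlog in the same operation that binds it to a moderator.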

Attached to each Card in the Backlog is a Priority Score - a number computed from:

  • the severity of open incidents (high-severity incidents are weighted higher)

  • the combined age of open incidents against a player (the older an incident, the higher the priority)

  • repeat offenses (players with prior infractions rise faster)

  • manual flags (players flagged by a moderator jump the queue)

The Backlog is sorted based on these scores, allowing moderators to quickly take action on particularly toxic individuals or time-sensitive cases.
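The four signals above could be combined into a single sortable number along these lines. The exact weights and formula here are invented for illustration - the real scoring scheme isn't public - but the structure (severity base weight, age bonus, repeat-offender multiplier, flag override) follows the list above:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical severity weights; the production values are assumptions.
SEVERITY_WEIGHT = {"low": 1, "medium": 3, "high": 10}

@dataclass
class Incident:
    severity: str        # "low", "medium", or "high"
    opened_at: datetime  # when the report was filed (UTC)

def priority_score(incidents, prior_offenses=0, manually_flagged=False, now=None):
    """Fold severity, age, repeat offenses, and manual flags into one score."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for inc in incidents:
        age_days = (now - inc.opened_at).total_seconds() / 86400
        # Severity sets the base weight; age adds a growing bonus so old
        # reports are never starved at the bottom of the Backlog.
        score += SEVERITY_WEIGHT[inc.severity] + 0.5 * age_days
    score += 2 * prior_offenses   # repeat offenders rise faster
    if manually_flagged:
        score += 100              # manual flags jump the queue
    return score
```

Because the score is a single number, sorting the Backlog is one comparison per card, and time-sensitive or high-toxicity cases naturally surface first.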

Impact

Our tool launched alongside Among Us VR's fifth patch in April 2023, and the results were immediate and significant. As of 2025:

  • Hundreds of thousands of infractions successfully processed post-launch.

  • Tens of thousands of bans issued to disruptive players, improving community health.

  • A ~50% reduction in project scope through iterative design, helping us get across the finish line.

Lessons Learned
  • Don't Be Afraid to Show Incomplete Work: Our commitment to quick feedback cycles and sharing early, rough designs was critical. It allowed us to quickly determine the crucial interactions of the app and consolidate them into just a couple of views, reducing bloat.

  • The Power of a Design Principle: Centering every decision around "reducing agent cognitive load" provided a clear north star. It helped us prioritize features, justify design choices, and ultimately build a more thoughtful and effective tool.