When people talk about content moderation, they typically mean debates about free speech and censorship. The people who actually do the moderating, however, tend to be an afterthought. Their work is largely invisible — hidden away in call center–type offices — as they filter the deluge of user-generated content on social media platforms. Though users never see them, these employees are the internet’s frontline workers, facing the worst of human nature one disturbing picture or video at a time. Without them, social media companies — and their ad-driven business models — likely couldn’t exist as they do now. Yet beyond the constant exposure to disturbing content, these jobs tend to be poorly paid, contingent, and laden with pressure to perform quickly and accurately. But do they have to be so bad?