Discord is used by over 200 million people every month for many different reasons, but there’s one thing that nearly everyone does on our platform: play video games. Over 90% of our users play games, spending a combined 1.5 billion hours playing thousands of unique titles on Discord each month. Discord plays a uniquely important role in the future of gaming. We are focused on making it easier and more fun for people to talk and hang out before, during, and after playing games.
Discord exists to give people the power to create space to find belonging — to talk regularly with the people they care about and build genuine relationships with friends and communities close to home or around the world.
We're looking for a Manager to lead our Scaled Abuse Countermeasures and Research (SCAR) team — Discord's first line of defense against scaled abuse. SCAR combines rapid incident response with deep threat research and signal generation across bulk fake account creation, login abuse, spam, scams, fraud, and other high-volume threats. This is an opportunity to reshape how the team operates: set a sharper strategy, stand up a structured research program, and lean heavily into ML and AI-powered automation to replace today's manual workflows. This role reports to the Director of Safety Automation.
What You'll Be Doing
- Lead and grow the SCAR team, a group of Scaled Abuse Scientists who serve as Discord's first line of defense against bulk fake account creation, login abuse, spam, scams, fraud, and other high-volume threats.
- Set a vision for the team that leans heavily into automation — scale SCAR's impact by partnering with Safety ML on ML-driven detection and building AI-powered incident response workflows that increase the team's leverage and reduce manual work.
- Define scaled abuse north-star metrics and build a roadmap and prioritization framework that keeps the team focused on the highest-impact problems.
- Stand up a research program where the team operates as scientists: surface the most important questions about attacker operations and signals, then systematically answer them through structured research and experimentation.
- Close the loop with Safety ML so that SCAR's signals feed directly into model features and pipeline upgrades, and tactical wins translate into long-term, automated countermeasures.
- Partner cross-functionally with Product, Safety ML, Data Science, Policy, Legal, Trust & Safety, and Revenue, influencing safety-by-design decisions upstream so abuse is prevented, not just mitigated.
- Coach, hire, and grow Scaled Abuse Scientists.
- Communicate attacker dynamics, economic incentives, and trade-offs clearly to senior leadership.
What You Should Have
- 2+ years of people management experience leading technical teams (engineers, ML engineers, data scientists, or applied researchers), or an equivalent track record of leading, mentoring, or coordinating technical work that has prepared you to step into this role.
- 4+ years of experience working on Trust & Safety, fraud, anti-abuse, or a closely adjacent adversarial domain at a consumer-scale platform.
- Demonstrated ability to drive ML and automation adoption within a Trust & Safety or operations context. You don't need to build models yourself, but you need to understand them well enough to evaluate their quality, direct their development, and push a team toward automated solutions.
- Strong analytical skills and fluency in SQL and Python for data investigation and pattern analysis (full software engineering proficiency is not required).
- Ability to think from first principles and apply behavioral-economics reasoning to adversarial systems: why are attackers doing this, what are their incentives, and what is the most cost-effective way to break their economics?
- Excellent communication and cross-functional collaboration skills, with a history of partnering effectively across engineering, ML, data science, policy, legal, and product teams.
- A growth mindset: you seek feedback, reflect on decisions, and actively help your team do the same.
Bonus Points
- Experience leveraging LLMs or AI agents for incident response, investigation automation, or signal triage.
- Experience tackling scaled abuse problems at large social platforms, marketplaces, or other high-volume consumer products.
- Threat intelligence research background, including understanding of internet infrastructure and the tools and techniques attackers use.
- A strong passion for Discord and/or gaming, and an appreciation for the communities we serve.
- A relevant degree in Computer Science, Machine Learning, Statistics, or a related quantitative field, or equivalent practical experience.
Candidates must reside in or be willing to relocate to the San Francisco Bay Area (Alameda, Contra Costa, Marin, Napa, San Francisco, San Mateo, Santa Clara, Solano, and Sonoma counties). Relocation assistance may be available.
The US base salary range for this full-time position is $272,000 to $306,000 + equity + benefits. Our salary ranges are determined by role and level. Within the range, individual pay is determined by additional factors, including job-related skills, experience, and relevant education or training. Please note that the compensation details listed in US role postings reflect the base salary only, and do not include benefits.
Why Discord?
We're a multiplatform, multigenerational, and multiplayer platform that helps people deepen their friendships around games and shared interests. We believe games give us a way to have fun with our favorite people, whether listening to music together or grinding in competitive matches for diamond rank. Join us in our mission! Your future is just a click away!
Discord is committed to inclusion and providing reasonable accommodations during the interview process. We want you to feel set up for success, so if you are in need of reasonable accommodations, please let your recruiter know.
For details regarding Discord's collection and usage of personal information relating to the application and recruitment process, please see our Applicant and Candidate Privacy Policy by clicking HERE.
