We use automated scanning to detect some of the most egregious kinds of abuse on the platform: child sexual exploitation and abuse imagery (CSEAI) and terrorist and violent extremist content (TVEC). We scan using robust hash-matching techniques, powered by the PhotoDNA tool. Our process also includes human review to confirm any image initially flagged as a match, and it allows users to appeal automated content moderation decisions made against them.
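PhotoDNA itself is proprietary, but the general idea behind robust (perceptual) hash matching can be sketched in a few lines: an image's hash is compared against a set of known hashes, and a small bitwise difference still counts as a match, so minor edits to an image do not defeat detection. The hashes, bit width, and threshold below are purely hypothetical, for illustration only; they are not PhotoDNA's actual values or algorithm.

```python
def hamming_distance(a: int, b: int) -> int:
    """Count the bits that differ between two equal-width hashes."""
    return bin(a ^ b).count("1")

def is_match(image_hash: int, known_hashes: list[int], threshold: int = 5) -> bool:
    """Flag an image whose hash lies within `threshold` bits of any known hash.

    A nonzero threshold is what makes the matching "robust": slightly
    altered copies of a known image still hash close enough to be caught.
    (Toy 16-bit hashes and a made-up threshold; real systems use much
    longer hashes and carefully tuned distances.)
    """
    return any(hamming_distance(image_hash, h) <= threshold for h in known_hashes)

known = [0b1010_1100_1111_0000, 0b0001_0010_0100_1000]  # hypothetical hash list
slightly_altered = 0b1010_1100_1111_0011  # 2 bits away from the first known hash
print(is_match(slightly_altered, known))          # matches despite the alteration
print(is_match(0b0110_0011_1001_0110, known))     # unrelated hash: no match
```

In a pipeline like the one described above, a hit from this automated step would then go to human review for confirmation before any enforcement action.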