Content Safety
Complete Content Moderation Suite
Protect your platform with AI-powered content moderation. Detect harmful content across 110+ categories with industry-leading accuracy.
Image Moderation
Classify images across 110+ content categories with a single API call. Get detailed confidence scores and subcategory breakdowns.
Nudity & Adult Content (25+ subcategories)
Violence & Gore (15+ subcategories)
Hate Symbols (10+ subcategories)
Weapons & Drugs (12+ subcategories)
Self-Harm (8+ subcategories)
Spam & Scams (10+ subcategories)
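Once the API returns per-category scores like the sample response below, a client typically compares them against its own thresholds to decide whether content is safe. A minimal sketch in Python (the `is_safe` helper and the threshold values are illustrative assumptions, not part of the API):

```python
import json

# Per-category score thresholds (illustrative values, tune for your platform).
THRESHOLDS = {"nudity": 0.5, "violence": 0.5, "hate": 0.4}

def is_safe(response: dict, thresholds: dict = THRESHOLDS) -> bool:
    """Return True when no category score crosses its threshold."""
    for name, result in response.get("categories", {}).items():
        if result["score"] >= thresholds.get(name, 0.5):
            return False
    return True

# The sample response from the docs, minus subcategory detail.
sample = json.loads("""{
  "safe": false,
  "categories": {
    "nudity": {"score": 0.12},
    "violence": {"score": 0.85}
  }
}""")
print(is_safe(sample))  # False: violence at 0.85 crosses the 0.5 threshold
```

Subcategory scores can be checked the same way when a platform needs finer-grained rules (e.g. allowing partial nudity but not explicit content).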
Sample Response
{
  "safe": false,
  "categories": {
    "nudity": {
      "score": 0.12,
      "subcategories": {
        "partial": 0.08,
        "explicit": 0.02
      }
    },
    "violence": {
      "score": 0.85,
      "subcategories": {
        "graphic": 0.72
      }
    }
  }
}
Text Analysis
{
  "toxicity": {
    "score": 0.89,
    "label": "toxic"
  },
  "pii_detected": [
    {
      "type": "email",
      "value": "***@***.com"
    }
  ],
  "spam_score": 0.15
}
Text Moderation
Analyze user-generated text for toxicity, hate speech, PII, and spam, with context-aware analysis and multi-language support.
Toxicity detection
Hate speech analysis
PII detection
Spam classification
Intent analysis
Multi-language support
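A text-analysis result like the sample above is usually mapped to a moderation action. A minimal sketch (the `moderation_action` helper, its action labels, and the cutoff values are illustrative assumptions, not part of the API):

```python
def moderation_action(analysis: dict) -> str:
    """Map a text-analysis result to an action (labels are illustrative)."""
    if analysis.get("toxicity", {}).get("score", 0.0) >= 0.8:
        return "block"       # highly toxic: reject outright
    if analysis.get("pii_detected"):
        return "redact"      # strip or mask personal information first
    if analysis.get("spam_score", 0.0) >= 0.7:
        return "review"      # likely spam: queue for human review
    return "allow"

# The sample text-analysis response from the docs.
sample = {
    "toxicity": {"score": 0.89, "label": "toxic"},
    "pii_detected": [{"type": "email", "value": "***@***.com"}],
    "spam_score": 0.15,
}
print(moderation_action(sample))  # "block": toxicity 0.89 exceeds 0.8
```

Ordering the checks by severity (toxicity before PII before spam) means the most restrictive applicable action wins; platforms with different policies would reorder or re-weight these rules.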