Job Description
Evaluate English prompts and responses alongside their translated counterparts, assessing quality across multiple linguistic and contextual dimensions. Score prompt and response translations against structured criteria: fluency, grammar, translation accuracy, relevance, and correctness. Identify issues in translations and suggest post-edits when a rating falls at or below the threshold (≤2), ensuring that corrections preserve the original meaning, fluency, and cultural appropriateness. Follow the defined annotation guidelines and metric definitions, and ensure all work aligns with project standards for linguistic quality and AI safety. These annotations are essential to building safer, more natural, and more inclusive AI systems, ensuring multilingual outputs retain meaning, accuracy, and relevance across languages and cultural contexts.
About CrowdGen
CrowdGen is seeking detail-oriented, skilled annotators to support the evaluation and improvement of multilingual prompt-response data.