Job Description
We are seeking detail-oriented, skilled annotators to support the evaluation and improvement of multilingual prompt-response data used in training and assessing large language models (LLMs). Your work will directly help ensure that translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.

Annotators evaluate English prompts and responses alongside their translated counterparts, assessing quality across multiple linguistic and contextual dimensions. Quality is rated against structured criteria such as fluency, grammar, accuracy, relevance, and correctness. When ratings fall below a defined threshold, annotators identify the issues in a translation and suggest post-edits, ensuring that corrections preserve the original meaning, fluency, and cultural appropriateness.

High-quality annotations are essential to building safer, more natural, and more inclusive AI systems.