Seeking detail-oriented and skilled annotators to evaluate and improve multilingual prompt-response data used in training and assessing large language models (LLMs).
Join the CrowdGen team as an Independent Contractor for Project Babel to support the evaluation and improvement of multilingual prompt-response data used in the training and assessment of large language models (LLMs). Your role will directly contribute to ensuring translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.