Remote Data Jobs • QA

Job listings

$20
USD/hour

We are seeking detail-oriented, skilled annotators to support the evaluation and improvement of multilingual prompt-response data used in the training and assessment of large language models (LLMs). Your role will directly contribute to ensuring translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.

$22
USD/hour

Join the CrowdGen team as an Independent Contractor for Project Babel to support the evaluation and improvement of multilingual prompt-response data used in the training and assessment of large language models (LLMs). Your role will directly contribute to ensuring translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.