CrowdGen
3 open remote positions
Analyze and compare pairs of advertising sentences, evaluate the relationship between original text (Input) and machine-generated content (Output), and determine whether the machine's output is a valid paraphrase or enhancement of the original while preserving its intent and accuracy. Provide valuable feedback that contributes to the advancement of reliable, creative AI.
Seeking detail-oriented and skilled annotators to support the evaluation and improvement of multilingual prompt-response data used in the training and assessment of large language models (LLMs). Annotators will help ensure translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.
Join the CrowdGen team as an Independent Contractor for Project Babel to support the evaluation and improvement of multilingual prompt-response data used in the training and assessment of large language models (LLMs). Your role will directly contribute to ensuring translations are accurate, contextually appropriate, and aligned with AI safety and ethical standards.