Help improve AI language identification models for one of the biggest AI-oriented companies worldwide.
Job listings
We are looking for AI Data Annotators with native or near-native fluency in Uyghur and enough English to understand written instructions. In this project, you will identify content-related flags and processing issues on a website. We need detail-oriented annotators with experience spotting errors, inconsistencies, or elements that may reduce a website's credibility or overall quality.
Join a global network of linguists, language enthusiasts, and culturally aware contributors to shape safer, smarter AI. By joining our talent community, you'll be first in line for flexible, remote projects in annotation, evaluation, and prompt creation. Note: this is not an active job opening.
Evaluate machine translations by assessing text and assigning semantic similarity scores. As an independent contractor, you'll evaluate the accuracy, fluency, and overall quality of online messages that have been translated by machines. The texts you'll be working with are similar to what you'd find in online conversations or on social media.
We're looking for an Icelandic Language Rater with an interest or background in the financial sector to join an exciting AI-related project. This is a fully remote, freelance opportunity offering flexibility. You will evaluate and rate AI-generated content in Icelandic, assessing linguistic accuracy, clarity, and relevance while applying your understanding of financial language and concepts.
Help shape the future of AI as a Data Annotator, contributing to improving the reliability of today's AI models. Responsibilities include reviewing, scoring, and improving AI-generated responses, evaluating prompts and responses across a wide range of topics, performing QA checks, and correcting responses. Full training is provided.
Contribute to improving the reliability of today's AI models. You'll review, score, and improve AI-generated responses and evaluate prompts and responses across a wide range of topics. Also, you'll perform QA checks on other annotators' work and provide feedback, correct responses, and suggest improvements using natural language.
Contribute to Project Spearmint as an AI trainer, evaluating large language model (LLM) outputs. This project involves reviewing short datasets and assessing model-generated replies based on tone or fluency, using a five-point scale and providing rationales for extreme ratings. You will determine if replies are helpful, engaging, fair, and grammatically accurate.
Review short, pre-segmented datasets. After reading a user prompt and two model replies, rate each reply on Tone or Fluency using a five-point scale, and provide short rationales for extreme ratings.
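For concreteness, here is a minimal sketch of what one annotation record in this kind of rating task might look like. The RatingRecord class and its field names are hypothetical illustrations based only on the workflow described above (one prompt, two replies, a five-point scale, rationales for extreme ratings), not the project's actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class RatingRecord:
    """Hypothetical record for one rating task; fields are illustrative."""
    prompt: str                        # the user prompt shown to the annotator
    reply_a: str                       # first model-generated reply
    reply_b: str                       # second model-generated reply
    dimension: str                     # "Tone" or "Fluency"
    score_a: int                       # 1-5 rating for reply A
    score_b: int                       # 1-5 rating for reply B
    rationale_a: Optional[str] = None  # required only for extreme ratings (1 or 5)
    rationale_b: Optional[str] = None

    def __post_init__(self):
        # Enforce the five-point scale and the extreme-rating rationale rule.
        for score, rationale, label in (
            (self.score_a, self.rationale_a, "A"),
            (self.score_b, self.rationale_b, "B"),
        ):
            if not 1 <= score <= 5:
                raise ValueError(f"Reply {label}: score must be on a 1-5 scale")
            if score in (1, 5) and not rationale:
                raise ValueError(f"Reply {label}: extreme ratings need a rationale")
```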