Welo Data, part of Welocalize, is a global AI data company with 500,000+ contributors delivering high-quality, ethical data to train the world’s most advanced AI systems. They’re building smarter, more human AI with a diverse community in 100+ countries.
Review search results and evaluate their helpfulness and relevance.
Answer true/false questions about content quality.
Complete online tasks that improve AI systems by following guidelines.
Welo Data provides AI services and data validation. The company has a flexible and supportive culture, emphasizing the importance of individual contributions to improving AI technology.
Evaluate model-generated replies based on tone or fluency.
Read a user prompt and two model replies, then rate each using a five-point scale.
CrowdGen, by Appen, focuses on AI response evaluation. They are looking for native Javanese speakers to contribute to a multilingual AI response evaluation project where you review large language model outputs.
Evaluate AI-generated responses for accuracy, grammar, and cultural relevance.
Identify issues and provide refined, high-quality rewritten responses.
Create natural prompts and responses in Spanish to improve conversational datasets.
Help train and evaluate cutting-edge AI models using real legal expertise.
Complete AI training tasks such as analyzing, editing, and writing annotations, grounded in legal reasoning and professional practice.
Judge the performance of AI in performing legal tasks and improve cutting-edge AI models by providing expert feedback.
Prolific is building the biggest pool of quality human data in the world. Over 35,000 AI developers, researchers, and organizations use Prolific to gather data from paid study participants with a wide variety of experiences, knowledge, and skills.
Assess content relevant to your area of expertise.
Deliver clear feedback to improve the model's comprehension.
Handshake is recruiting College Career/Technical Education Professors to contribute to an hourly, temporary AI research project. In this program, you’ll leverage your professional experience to evaluate what AI models produce in your field.
Completing AI training tasks such as analyzing, editing, and writing in Mandarin
Judging the performance of AI in performing Mandarin prompts
Improving cutting-edge AI models
Prolific is building the biggest pool of quality human data in the world. Over 35,000 AI developers, researchers, and organizations use Prolific to gather data from paid study participants with a wide variety of experiences, knowledge, and skills.
Object tagging and labeling across different content types.
RWS TrainAI focuses on improving AI-generated content. They embrace DEI and are an equal opportunity employer, committed to a discrimination-free work environment where employment decisions are based on business needs and qualifications.
Complete up to 10 short mannequin clips per participant.
Use the Appen Mobile app to submit recordings.
CrowdGen is part of Appen and focuses on providing data solutions. The company utilizes a community of independent contractors to contribute to various AI development projects, offering flexible, project-based opportunities.
Object tagging and labeling across different content types (audio, video, images, or collected data)
RWS TrainAI focuses on improving AI-generated content. They embrace DEI and are an equal opportunity employer committed to providing a work environment free of discrimination and harassment.
Maintaining the discount code and savings database.
Processing incoming emails and data.
Ensuring high quality and accuracy of database information.
EverySaving aggregates discount codes and exclusive offers for online shoppers across over 20 countries. They operate platforms like RadarCupom in Brazil and GuteGutscheine in Germany. They are expanding rapidly.
Review search results and evaluate how helpful and relevant they are to the user’s query
Answer simple true/false questions about the quality of content
Rate whether search results meet the user’s needs using clear guidelines
Welo Data is an AI services company providing data validation. They are looking for English speakers to join a remote project as a Search Quality Rater.
Own end-to-end quality design for Prolific managed service studies.
Define, implement, and maintain quality measurement systems.
Build and deploy automated quality checks and launch gates using Python and SQL.
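The responsibilities above mention building automated quality checks and launch gates with Python and SQL. As a minimal sketch of what such a gate might look like, the example below blocks a study launch when the attention-check failure rate in a pilot batch exceeds a threshold. The table name `pilot_responses`, its columns, and the 5% threshold are all hypothetical assumptions, not details from the listing; SQLite stands in for whatever database the real system would use.

```python
import sqlite3

# Illustrative threshold: block launch if more than 5% of pilot
# responses failed the attention check. This value is an assumption.
FAIL_RATE_THRESHOLD = 0.05

def passes_launch_gate(conn: sqlite3.Connection, study_id: str) -> bool:
    """Return True if the study's pilot failure rate is at or below threshold."""
    row = conn.execute(
        """
        SELECT AVG(CASE WHEN passed_attention_check THEN 0.0 ELSE 1.0 END)
        FROM pilot_responses
        WHERE study_id = ?
        """,
        (study_id,),
    ).fetchone()
    fail_rate = row[0]
    if fail_rate is None:  # no pilot data yet: do not launch
        return False
    return fail_rate <= FAIL_RATE_THRESHOLD

# Demo with an in-memory database and hypothetical pilot data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE pilot_responses (study_id TEXT, passed_attention_check INTEGER)"
)
conn.executemany(
    "INSERT INTO pilot_responses VALUES (?, ?)",
    [("s1", 1)] * 98 + [("s1", 0)] * 2,  # 2% failure rate: gate should open
)
print(passes_launch_gate(conn, "s1"))
```

Keeping the measurement in SQL and the decision in Python is one common split: the query stays close to the data, while the Python side can log, alert, or wire the result into a deployment pipeline.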
Prolific provides the high-quality, diverse data required to train the next generation of AI models. Through its platform, Prolific empowers researchers and companies to access a global, ethically curated participant base, ensuring cutting-edge AI research and training grounded in inclusivity and precision.
Building features related to our data pipelines, usage of LLMs, analytics APIs, etc.
Doing whatever is necessary to deliver value to our customers.
Creating notifications, alert emails, scheduled reports, connectors into third party systems, etc.
Scrunch helps marketing teams rethink how their products and services are discovered and surfaced on AI platforms like ChatGPT, Claude, and Gemini. They have scaled rapidly since commercial launch and have more than 500 paying brands using the platform.
Review and clean up output from our machine learning platform for clients
Support the operations/customer success teams with various projects and client needs
Magellan AI provides an all-in-one platform for podcast advertising intelligence, media planning, and measurement. They have created the world's largest database of podcast advertising data, covering over 50,000 podcasts, and have a tight-knit culture.
Evaluate AI-generated presentations for accuracy and visual quality.
Provide detailed feedback to improve future AI performance.
Collaborate with product, design, and content partners to refine criteria.
Blueprint is a technology solutions firm headquartered in Bellevue, Washington, with a strong presence across the United States. They solve complicated problems, using technology to bridge the gap between strategy and execution, powered by the knowledge, skills, and expertise of their teams. They are bold, smart, agile, and fun.
Handshake connects students, new grads, and young professionals with job opportunities. They aim to close the opportunity gap and ensure everyone has equal access to meaningful employment.
Evaluate AI models' output in occupational therapy.
Assess content related to the occupational therapy field.
Provide clear feedback to improve AI understanding.
Handshake connects students with early talent recruiting. They provide the opportunity to evaluate what AI models produce and to deliver feedback that strengthens the model’s understanding of workplace tasks and language.
Delivering high-quality data and annotations for scenarios involving MACROHARD, and testing Computer Use Agents in digital environments.
Identifying subtle bugs, failure modes, and unexpected agent behaviors during testing sessions to help improve Computer Use models.
Assisting in designing and improving annotation tools tailored for MACROHARD data, agent evaluation, and QA workflows.
xAI's mission is to create AI systems that can accurately understand the universe and aid humanity in its pursuit of knowledge. The team is small, highly motivated, and focused on engineering excellence, with a flat organizational structure where all employees are hands-on and contribute directly to the company’s mission.