AI Technical Evaluator
- Review AI-generated responses and evaluate technical accuracy.
- Provide expert feedback to train AI systems to write better code.
- Work with various programming languages and coding challenges.
9 open remote positions
G2i connects subject-matter experts, students, and professionals with flexible, remote AI training work such as annotation, evaluation, fact-checking, and content review.
Review evaluations completed by data annotators assessing AI-generated JavaScript code responses. Ensure that annotators follow strict quality guidelines related to instruction-following, factual correctness, and code functionality. Validate code snippets using proof-of-work methodology.
Review and audit annotator evaluations of AI-generated R code. Assess whether the R code follows the prompt instructions, is functionally correct, and is secure. Validate code snippets using proof-of-work methodology.
Review and audit annotator evaluations of AI-generated Java code. Assess whether the Java code follows the prompt instructions, is functionally correct, and is secure. Validate code snippets using proof-of-work methodology.
Join a groundbreaking AI-driven research project where your frontend expertise will directly shape the way users interact with data and applications. In this role, you’ll design and build intuitive, user-facing features that make complex data clear, usable, and engaging. You’ll leverage modern JavaScript frameworks and CSS design systems to deliver professional, polished interfaces that enhance user experience.