Source Job

Global

  • Review and audit annotator evaluations of AI-generated R code.
  • Assess whether the R code follows the prompt instructions and is functionally correct and secure.
  • Validate code snippets using a proof-of-work methodology.

R, QA, Debugging, Testing, Documentation

20 jobs similar to Code Reviewer (R)

Jobs ranked by similarity.

  • Leverage professional experience to evaluate AI models' output in your field.
  • Assess content and deliver feedback to strengthen the model’s understanding.
  • Work independently from anywhere, with flexible hours and no minimum commitment.

Handshake is a recruiting platform that connects students and recent graduates with employers.

$104,000–$156,000/yr

  • Design, build, and optimize high-performance systems in Rust supporting AI data pipelines and evaluation workflows.
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control.
  • Improve reliability, performance, and safety across existing Rust codebases.

Alignerr connects top technical experts with leading AI labs to build, evaluate, and improve next-generation models. They work on real production systems and high-impact research workflows across data, tooling, and infrastructure.

Global

  • Evaluate AI models' output in the financial risk field.
  • Assess content related to financial risk work.
  • Deliver structured feedback to improve AI understanding.

Handshake is a platform that connects students with companies and opportunities, helping them find jobs and project work across various fields of expertise.

$60–$90/hr
Global

  • Review and evaluate AI-generated JavaScript code for efficiency and ES6+ standards.
  • Develop high-quality JavaScript solutions to problems across varying difficulty levels.
  • Create human-readable explanations for code logic and problem-solving strategies.

Alignerr partners with leading AI research teams and labs to build and train cutting-edge AI models. The company seems to be a smaller organization focused on AI training and model development.

  • Evaluate AI model outputs related to the instructional field.
  • Develop prompts for AI models reflecting field expertise.
  • Provide clear, structured feedback to enhance AI understanding.

Handshake is recruiting Instructional Coordinator Professionals to contribute to an hourly, temporary AI research project; no AI experience is needed.

  • Design, build, and optimize high-performance systems in Python supporting AI data pipelines and evaluation workflows.
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control.
  • Improve reliability, performance, and safety across existing Python codebases.

Alignerr connects top technical experts with leading AI labs to build, evaluate, and improve next-generation models. They work on real production systems and high-impact research workflows across data, tooling, and infrastructure.

Global

  • Evaluate AI model outputs related to your field.
  • Assess content relevant to your area of expertise.
  • Deliver clear feedback to improve the model's comprehension.

Handshake is recruiting College Career/Technical Education Professors to contribute to an hourly, temporary AI research project. In this program, you’ll leverage your professional experience to evaluate what AI models produce in your field.

$60–$90/hr

  • Evaluate AI-generated code across the full stack.
  • Design and build full-stack tooling for AI data annotation and quality control.
  • Review complex system designs, providing feedback on scalability and performance.

Alignerr partners with leading AI research teams and labs to build and train cutting-edge AI models. The company offers hourly contract positions, emphasizing innovation and collaboration within the AI field.

Global

  • Leverage professional experience to evaluate AI model outputs.
  • Assess content related to your field and provide structured feedback.
  • Strengthen the AI model's understanding of workplace tasks and language.

Handshake is recruiting professionals to contribute to an hourly, temporary AI research project. The program offers a flexible, inclusive environment in which individuals work independently and contribute to the development of AI across various fields.

Global

  • Evaluate what AI models produce in your field.
  • Assess content related to your field of work.
  • Deliver clear, structured feedback that strengthens the model’s understanding.

Handshake connects students to early-talent opportunities, helping students find the right job and employers find the right candidates.

Indonesia

  • Review short, pre-segmented datasets.
  • Evaluate model-generated replies based on Tone or Fluency.
  • Read a user prompt and two model replies, then rate each using a five-point scale.

CrowdGen, by Appen, is looking for native Javanese speakers to contribute to a multilingual AI response evaluation project in which you review large language model outputs.

Global

  • Leverage professional experience to evaluate AI model outputs.
  • Assess content related to your field of work.
  • Deliver clear, structured feedback to strengthen the AI model.

Handshake is recruiting Civil Engineering Technologists and Technician Professionals to contribute to an hourly, temporary AI research project. In this program, you'll help strengthen the model’s understanding of workplace tasks and language.

$150,000–$220,000/yr
US
Unlimited PTO

  • Incorporating the best research on agents and code generation into the OpenHands framework.
  • Making novel improvements in areas of interest to increase agent performance and efficiency.
  • Running and implementing evaluations to ensure agent quality.

OpenHands is building an open-source AI platform that empowers engineering teams to accelerate development, automate workflows, and integrate intelligent coding assistance into real-world software delivery. The company fosters a culture built on kindness, candor, autonomy, and learning.

$85,000–$225,000/yr
US, Canada

This role validates Veeva AI Agents through systematic evaluation: you will define evaluation strategies for new AI Agents and analyze model behaviors to identify defects.

Veeva Systems is a mission-driven organization and pioneer in industry cloud, helping life sciences companies bring therapies to patients faster.

Global

  • Leverage professional experience to evaluate AI model outputs.
  • Assess content related to your field of work.
  • Deliver clear, structured feedback to enhance AI model understanding.

Handshake is offering an opportunity to evaluate AI models using professional experience, assess content, and provide feedback. The job posting does not mention company size, employee count, or culture.

$80,000–$150,000/yr

  • Research, Document, Test, and Ideate: Explore the best ways to achieve our customers’ goals using LLMs and other AI tools.
  • Master Our Dialogue Platform: Become an expert, answer questions, and train others on prompting both within and outside of our platform.
  • Train Our AIs: Utilize prompting, knowledge-base creation, and fine-tuning to enhance our AI capabilities.

1mind is a platform that deploys multimodal Superhumans for revenue teams, combining a face, a voice, and a GTM brain. The company has a remote-first, fast-moving culture with ownership, autonomy, and impact from day one.