Alignerr

49 open remote positions

Alignerr partners with the world’s leading AI research teams and labs to build and train cutting-edge AI models. The company is fully remote.


Open Positions

$83,200–$166,400/yr

  • Analyze credit risk models and validate underlying assumptions
  • Review PD/LGD/EAD frameworks for accuracy and completeness
  • Identify inconsistencies in risk scoring logic or segmentation criteria
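The PD/LGD/EAD framework mentioned above combines three standard credit-risk quantities into an expected loss, EL = PD × LGD × EAD. A minimal sketch of that check (function and variable names are illustrative, not from the listing):

```python
def expected_loss(pd: float, lgd: float, ead: float) -> float:
    """Expected loss for a single exposure.

    pd  -- probability of default, in [0, 1]
    lgd -- loss given default, as a fraction of exposure in [0, 1]
    ead -- exposure at default, in currency units
    """
    # Validate the assumptions a model reviewer would check first.
    assert 0.0 <= pd <= 1.0, "PD must be a probability"
    assert 0.0 <= lgd <= 1.0, "LGD must be a fraction of exposure"
    assert ead >= 0.0, "EAD cannot be negative"
    return pd * lgd * ead

# A 2% PD, 45% LGD, and $1,000,000 exposure imply roughly $9,000
# of expected loss.
print(expected_loss(0.02, 0.45, 1_000_000))
```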

$104,000–$156,000/yr

  • Design, build, and optimize high-performance systems in Python supporting AI data pipelines and evaluation workflows
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
  • Improve reliability, performance, and safety across existing Python codebases
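A validation pass of the kind described above might split annotation records into accepted and rejected sets before they reach quality control. A minimal sketch, with all type and field names hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Annotation:
    """One annotation record flowing through the pipeline."""
    item_id: str
    label: str
    confidence: float


def validate(records):
    """Split records into (valid, rejected) for downstream QC.

    A record is valid when it has a non-empty label and a
    confidence score in [0, 1].
    """
    valid, rejected = [], []
    for r in records:
        if r.label and 0.0 <= r.confidence <= 1.0:
            valid.append(r)
        else:
            rejected.append(r)
    return valid, rejected
```

Keeping rejected records (rather than silently dropping them) makes quality-control reporting straightforward.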

$50–$75/hr

  • Design, build, and optimize high-performance systems in Rust supporting AI data pipelines and evaluation workflows
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control
  • Improve reliability, performance, and safety across existing Rust codebases

$104,000–$156,000/yr

  • Design, build, and optimize high-performance systems in Rust supporting AI data pipelines and evaluation workflows.
  • Develop full-stack tooling and backend services for large-scale data annotation, validation, and quality control.
  • Improve reliability, performance, and safety across existing Rust codebases.

$30–$35/hr

  • Evaluate AI-generated Korean speech and text for linguistic accuracy, naturalness, and educational quality.
  • Assess learner speech and writing across proficiency levels from CEFR Pre-A1 through B2+.
  • Apply expert judgment to identify learner errors, unnatural phrasing, and pedagogical gaps.
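The CEFR levels named above form an ordered proficiency scale, so assessments can be compared against a target band. A minimal sketch (the helper and the exact sublevel labels follow the listing's "Pre-A1 through B2+" range; standard CEFR uses A1–C2):

```python
# Ordered CEFR bands as used in this listing, lowest to highest.
CEFR_ORDER = ["Pre-A1", "A1", "A2", "B1", "B2", "B2+"]


def at_or_above(assessed: str, target: str) -> bool:
    """True if an assessed CEFR level meets or exceeds the target."""
    return CEFR_ORDER.index(assessed) >= CEFR_ORDER.index(target)
```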

$50–$150/hr

  • Translate informal mathematical proofs into Lean (and related proof systems) with an emphasis on clarity, structure, and correctness.
  • Analyze generic and domain-specific proofs, identifying gaps, hidden assumptions, and formalizable sub-structures.
  • Construct formalizations that test the limits of existing proof assistants—especially where tools struggle or fail.
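Translating an informal proof into Lean means restating the claim precisely and discharging it with a machine-checked proof. A toy illustration in Lean 4 (theorem name is ours; the proof defers to the standard-library lemma `Nat.add_comm`, though a from-scratch induction would serve equally well):

```lean
-- Informal claim: "for any natural numbers a and b, a + b = b + a."
-- The Lean statement makes the quantification over a and b explicit.
theorem add_comm_formalized (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```

Real formalization work is harder precisely because informal proofs omit hypotheses and steps that the proof assistant forces you to state.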