Job Description
We are looking for an experienced engineer with expertise in evaluating Generative AI systems, particularly Large Language Models (LLMs), to help us build and evolve our internal evaluation frameworks and/or integrate existing best-of-breed tools. This role involves designing and scaling automated evaluation pipelines, integrating them into CI/CD workflows, and defining metrics that reflect both product goals and model behavior. As the team matures, there's a broad opportunity to expand or redefine this role based on impact and initiative.
The kind of problems you'll be tackling:
- Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification.
- Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
- Define and refine metrics for both structure and semantics, ensuring alignment with realistic use cases and operational constraints.
- Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.
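To give a rough sense of the kind of work involved, here is a minimal sketch of a structured-output check run against a golden test set. All names and the golden-set format are hypothetical illustrations, not Grafana's actual tooling; a real pipeline would cover prompts, agent behaviors, and LLM-as-judge scoring as well.

```python
import json
from dataclasses import dataclass

# Hypothetical golden test case: a prompt, the model's recorded output, and the
# fields we expect to find in the structured (JSON) part of that output.
@dataclass
class GoldenCase:
    prompt: str
    model_output: str
    required_fields: set[str]

def verify_structured_output(case: GoldenCase) -> tuple[bool, str]:
    """Check that the model output parses as JSON and contains the expected fields."""
    try:
        parsed = json.loads(case.model_output)
    except json.JSONDecodeError as err:
        return False, f"output is not valid JSON: {err}"
    missing = case.required_fields - parsed.keys()
    if missing:
        return False, f"missing fields: {sorted(missing)}"
    return True, "ok"

if __name__ == "__main__":
    golden_set = [
        GoldenCase(
            prompt="Summarize the alert as JSON with keys 'severity' and 'summary'.",
            model_output='{"severity": "critical", "summary": "Disk usage above 90%."}',
            required_fields={"severity", "summary"},
        ),
    ]
    failures = [(case, msg) for case in golden_set
                for ok, msg in [verify_structured_output(case)] if not ok]
    # In a CI/CD pipeline, this result would gate the build (regression tracking).
    print(f"{len(golden_set) - len(failures)}/{len(golden_set)} golden cases passed")
```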
About Grafana Labs
There are more than 20M users of Grafana, the open source visualization tool, around the globe, monitoring everything from beehives to climate change in the Alps.