At Grafana, we build observability tools that help users understand, respond to, and improve their systems, regardless of scale, complexity, or tech stack. The Grafana AI teams play a key role in this mission by helping users make sense of complex observability data through AI-driven features. These capabilities reduce toil, lower the barrier of domain expertise, and surface meaningful signals from noisy environments.
We are looking for an experienced engineer with expertise in evaluating Generative AI systems, particularly Large Language Models (LLMs), to help us build and evolve our internal evaluation frameworks, and/or integrate existing best-of-breed tools. This role involves designing and scaling automated evaluation pipelines, integrating them into CI/CD workflows, and defining metrics that reflect both product goals and model behavior. As the team matures, there's a broad opportunity to expand or redefine this role based on impact and initiative.
- Design and implement robust evaluation frameworks for GenAI and LLM-based systems, including golden test sets, regression tracking, LLM-as-judge methods, and structured output verification.
- Develop tooling to enable automated, low-friction evaluation of model outputs, prompts, and agent behaviors.
- Define and refine metrics for both structure and semantics, ensuring alignment with realistic use cases and operational constraints.
- Lead the development of dataset management processes and guide teams across Grafana in best practices for GenAI evaluation.
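To make the responsibilities above concrete, here is a minimal, purely illustrative sketch of a golden-set evaluation harness that combines structured output verification with an LLM-as-judge score and emits aggregate metrics suitable for regression tracking in CI. It is not Grafana's actual tooling; all names (EvalCase, judge, the example alert-summary case) are hypothetical placeholders.

```python
"""Illustrative sketch only: a tiny golden-set evaluation harness."""
import json
from dataclasses import dataclass
from typing import Callable


@dataclass
class EvalCase:
    prompt: str
    expected: dict  # keys the model's structured output must contain


def verify_structure(output: str, expected: dict) -> bool:
    """Structured-output check: output must be valid JSON containing the expected keys."""
    try:
        parsed = json.loads(output)
    except json.JSONDecodeError:
        return False
    return set(expected) <= set(parsed)


def run_eval(cases: list[EvalCase],
             generate: Callable[[str], str],
             judge: Callable[[str, dict], float]) -> dict:
    """Run every golden case; gate the LLM-as-judge semantic score on the
    structural check, then aggregate metrics for regression tracking."""
    results = []
    for case in cases:
        output = generate(case.prompt)
        structural_ok = verify_structure(output, case.expected)
        semantic_score = judge(output, case.expected) if structural_ok else 0.0
        results.append({"prompt": case.prompt,
                        "structural_ok": structural_ok,
                        "semantic_score": semantic_score})
    return {
        "pass_rate": sum(r["structural_ok"] for r in results) / len(results),
        "mean_semantic_score": sum(r["semantic_score"] for r in results) / len(results),
        "cases": results,
    }


if __name__ == "__main__":
    # Placeholder model and judge; a real pipeline would call an LLM for both.
    cases = [EvalCase(prompt="Summarize this alert",
                      expected={"summary": "", "severity": ""})]
    fake_generate = lambda p: json.dumps({"summary": "CPU spike", "severity": "warning"})
    fake_judge = lambda out, exp: 1.0
    report = run_eval(cases, fake_generate, fake_judge)
    # A CI gate like this turns the golden set into a regression check.
    assert report["pass_rate"] >= 0.9, "regression: structural pass rate dropped"
    print(json.dumps(report, indent=2))
```

In practice the same report could be written to a datastore and charted over time, so prompt or model changes that degrade structural validity or judge scores are caught before release.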