Job Description
With a strong background in Generative AI Inference and expertise in Speculative Decoding (sketched below), you will design, implement, and optimize cutting-edge algorithms to enhance our production AI infrastructure and capabilities in post-training, model evaluation, and operational performance. In this role, you will:

- Collaborate with cross-functional teams to integrate your solutions into Groq's production AI infrastructure.
- Work in a multi-data-center, Kubernetes-based production environment with Groq's custom hardware, inference, and compiler stack.
- Develop high-performance, scalable code primarily in C++ and Rust, ensuring efficient resource utilization and system stability.
- Model the performance of distributed, high-performance systems.
- Build production distributed systems involving multi-process communication, using technologies such as MPI, scheduling, and Kubernetes.
- Stay up to date with the latest developments in generative AI and speculative decoding, and translate cutting-edge research into practical, production-ready implementations.
- Work closely with teams across software engineering, research, and operations to drive improvements in post-training, model evaluation, and overall system performance.
- Provide technical leadership and mentorship to team members, fostering an environment of continuous learning and innovation.
- Champion code quality, maintainability, observability, monitoring, and best practices, ensuring that all deliverables meet rigorous performance and security standards.
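For candidates less familiar with the core technique named above, here is a minimal, self-contained sketch of the greedy draft-and-verify loop behind speculative decoding: a cheap draft model proposes several tokens ahead, and the expensive target model verifies them, accepting the longest matching prefix. The toy deterministic "models", token rule, and function names below are invented purely for illustration and are not Groq's implementation or stack.

```rust
// Greedy speculative decoding sketch. The "models" here are toy
// deterministic next-token functions standing in for real LLMs.
type Model = fn(&[u32]) -> u32;

// Hypothetical expensive target model: defines the ground-truth next token.
fn target_model(ctx: &[u32]) -> u32 {
    let last = *ctx.last().unwrap_or(&0);
    (last * 3 + 1) % 50
}

// Hypothetical cheap draft model: approximates the target, but is
// deliberately wrong whenever the last token is divisible by 7.
fn draft_model(ctx: &[u32]) -> u32 {
    let last = *ctx.last().unwrap_or(&0);
    if last % 7 == 0 { (last + 1) % 50 } else { (last * 3 + 1) % 50 }
}

fn speculative_decode(
    target: Model,
    draft: Model,
    prompt: &[u32],
    k: usize,
    max_len: usize,
) -> Vec<u32> {
    let mut tokens = prompt.to_vec();
    while tokens.len() < max_len {
        // 1. Draft proposes k tokens autoregressively (cheap, sequential).
        let mut ctx = tokens.clone();
        let mut proposed = Vec::with_capacity(k);
        for _ in 0..k {
            let t = draft(&ctx);
            proposed.push(t);
            ctx.push(t);
        }
        // 2. Target verifies the proposals. In production this is one
        //    batched forward pass rather than k sequential calls.
        let mut n_accepted = proposed.len();
        let mut correction = None;
        for i in 0..proposed.len() {
            let expected = target(&ctx[..tokens.len() + i]);
            if expected != proposed[i] {
                // First mismatch: keep the target's token, drop the rest.
                n_accepted = i;
                correction = Some(expected);
                break;
            }
        }
        // 3. Commit the accepted prefix plus any correction token.
        tokens.extend_from_slice(&proposed[..n_accepted]);
        if let Some(t) = correction {
            tokens.push(t);
        }
        tokens.truncate(max_len);
    }
    tokens
}

fn main() {
    let out = speculative_decode(target_model, draft_model, &[1], 4, 20);
    println!("{:?}", out);
}
```

Because verification accepts a draft token only when it matches the target's own greedy choice, the output is identical to decoding with the target model alone; the speedup comes from the target verifying several positions per step whenever the draft guesses well.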
About Groq
Groq delivers fast, efficient AI inference. Headquartered in Silicon Valley, Groq is on a mission to make high-performance AI compute more accessible and affordable.