Design, build, and maintain large-scale distributed training infrastructure for Ads ML models. Develop tools and frameworks on top of the Ray platform. Build tools to debug, profile, and tune distributed training jobs for performance and reliability. Integrate with object storage systems and improve data access patterns. Collaborate with ML engineers to reduce model training time and GPU training costs and to improve training efficiency. Drive improvements in scheduling, state management, and fault tolerance within the training platform to enhance overall performance.
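To illustrate the kind of work described above, the following is a minimal sketch of a distributed training job built on Ray Train, assuming Ray 2.x with PyTorch installed; the model, hyperparameters, and the s3://example-bucket storage path are placeholders, not values from this role or team. It shows the pieces the responsibilities touch on: scaling across GPU workers, reporting metrics that profiling and debugging tools can consume, persisting state to object storage, and configuring fault tolerance.

    import torch
    import torch.nn as nn
    import ray.train
    from ray.train import ScalingConfig, RunConfig, FailureConfig
    from ray.train.torch import TorchTrainer, prepare_model

    def train_loop_per_worker(config):
        # Each Ray worker runs this loop; Ray Train sets up the process group
        # and wraps the model for distributed data-parallel training.
        model = prepare_model(nn.Linear(config["in_dim"], 1))
        optimizer = torch.optim.SGD(model.parameters(), lr=config["lr"])
        for epoch in range(config["epochs"]):
            x = torch.randn(64, config["in_dim"])  # stand-in for a real data loader
            loss = model(x).pow(2).mean()
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            # Metrics reported here are what job-level debugging/profiling tooling consumes.
            ray.train.report({"epoch": epoch, "loss": loss.item()})

    trainer = TorchTrainer(
        train_loop_per_worker,
        train_loop_config={"in_dim": 32, "lr": 1e-3, "epochs": 2},
        # Scale out across GPU workers.
        scaling_config=ScalingConfig(num_workers=4, use_gpu=True),
        run_config=RunConfig(
            storage_path="s3://example-bucket/ray-train",  # hypothetical object-storage path
            failure_config=FailureConfig(max_failures=3),  # retry on worker failure
        ),
    )
    result = trainer.fit()

In a sketch like this, scheduling, checkpoint/state management, and fault tolerance are controlled through ScalingConfig, RunConfig, and FailureConfig; the platform work in this role would sit around and beneath these layers.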
Requires 3+ years of experience in infrastructure/platform engineering or large-scale distributed systems and 2+ years of hands-on experience with the Ray platform.