Job Description
Play a key role in building the next-generation AI cloud platform: a highly available, global, blazing-fast cloud infrastructure that virtualizes cutting-edge ML hardware (GB200s/GB300s, BlueField DPUs) and provides state-of-the-art ML practitioners with self-serve AI cloud services, such as on-demand and managed Kubernetes and Slurm clusters. This platform serves both our internal SaaS products (inference, fine-tuning) and our external cloud customers, spanning dozens of data centers across the world.
- Design, build, and maintain performant, secure, and highly available backend services and operators that run in our data centers and automate hardware management.
- Design and build the IaaS software layer for a new GB200 data center with thousands of GPUs.
- Work on a global, multi-exabyte, high-performance object store serving massive datasets for pretraining.
- Build advanced observability stacks for our customers, with automated node lifecycle management for fault-tolerant distributed pretraining.
About Together AI
Together AI is building the AI Acceleration Cloud, an end-to-end platform for the full generative AI lifecycle, centered on one of the fastest LLM inference engines.