Job Description
In this role you will design and implement the services that power inference for both internal and external customers. You'll integrate and scale model back-ends, create sophisticated request-routing logic for high-throughput, low-latency workloads, and fortify our observability pipeline so the platform stays rock-solid as we scale. The work ranges from performance tuning and memory management to multi-tenant scheduling, with opportunities to hunt microseconds using CUDA, Triton, and kernel-level profiling tools whenever the hardware demands it. We're looking for developers who are fluent in Python, Go, or Rust; comfortable with asynchronous programming and distributed architectures; and practiced in API design, load balancing, caching, and queuing, all backed by clean, test-driven code and modern CI/CD.
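To give a flavor of the routing work described above, here is a minimal sketch of least-connections request routing using Python's asyncio. The back-end names, in-flight counters, and simulated latency are illustrative assumptions for this sketch, not a description of our production system.

    import asyncio
    import random

    # Hypothetical back-end pool mapping each back-end to its
    # in-flight request count; names are illustrative only.
    BACKENDS = {"backend-a": 0, "backend-b": 0, "backend-c": 0}

    async def route_request(request_id: int) -> str:
        """Route to the back-end with the fewest in-flight requests."""
        backend = min(BACKENDS, key=BACKENDS.get)
        BACKENDS[backend] += 1
        try:
            # Simulated inference latency; a real service would
            # await an HTTP or gRPC client call here.
            await asyncio.sleep(random.uniform(0.01, 0.05))
            return f"request {request_id} served by {backend}"
        finally:
            BACKENDS[backend] -= 1

    async def main() -> None:
        # Fire a burst of concurrent requests through the router.
        results = await asyncio.gather(*(route_request(i) for i in range(10)))
        for line in results:
            print(line)

    if __name__ == "__main__":
        asyncio.run(main())

A production router would add health checks, timeouts, and per-tenant queuing on top of a policy like this; the sketch only shows the core balancing decision.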
About Nebius
Nebius is leading a new era in cloud computing to serve the global AI economy, creating the tools and resources customers need to solve real-world challenges.