Own the end-to-end lifecycle of ML model deployment—from training artifacts to production inference services.
Design, build, and maintain scalable inference pipelines using modern orchestration frameworks (e.g., Kubeflow, Airflow, Ray, MLflow).
Implement and optimize model serving infrastructure for latency, throughput, and cost efficiency across GPU and CPU clusters.
MARA is building a modular platform that unifies IaaS, PaaS, and SaaS, enabling governments, enterprises, and AI innovators to deploy, scale, and govern workloads across data centers, edge environments, and sovereign clouds. The company is redefining the future of sovereign, energy-aware AI infrastructure.
Architect and implement scalable AI platform services for LLMs and other AI models.
Apply LLMs and AI technologies to build and enhance intelligent product features.
Develop robust APIs and backend systems for seamless integration of AI-powered features.
ClickUp is creating the first truly converged AI workspace, unifying tasks, docs, chat, calendar, and enterprise search, all supercharged by context-driven AI.
Partner with account executives to drive technical discovery and map product capabilities to customer initiatives.
Architect and deliver demos and proofs of concept showcasing differentiation in Zero Trust access.
Support customers exploring AI/LLM use cases, including secure infrastructure for training and inference workloads.
Teleport delivers on-demand, least-privileged access to infrastructure based on cryptographic identity and zero trust, with built-in identity security and policy governance.