Remote Software Engineering Jobs · Terraform

Job listings

Leads the design, development, implementation, testing, and analysis of software applications to meet enterprise-wide business and user needs. Consults with managers and directors to understand business needs and propose new and improved software applications. Prepares reports to provide recommendations, conclusions, and other data. Hires, supervises, and evaluates staff. Performs related responsibilities as required.

Define the technical vision and lead the design and implementation of Docker AI Cloud’s distributed systems. Partner with principal engineers across the company to architect scalable, reliable, and secure infrastructure that supports millions of developers and thousands of enterprises. Define and drive the long-term technical strategy for Docker AI Cloud’s control and data plane services. Architect highly available, multi-region systems capable of operating seamlessly across multiple cloud providers.

As a Senior/Staff Backend Engineer, you’ll be key to scaling Rally’s core systems and services to power the next 100M+ participant records and billions of events we process. You’ll own critical systems like Email/SMS, data sync pipelines, search, workflow automation, and help lay the foundation for AI features. Our codebase is fully TypeScript, and while you don’t need experience with our exact stack, you should have strong JavaScript fundamentals and an eagerness to learn.

The Developer Platform Experience team works to create a seamless, efficient, and intuitive environment for Twilio engineers to build, test, and launch their applications. This role focuses on developing, testing, and deploying applications, collaborating with teammates, writing documentation, supporting internal users, and continuously improving Twilio’s internal developer platform.

$121,400–$173,300/yr

Green Dot Corporation is looking for a Senior Software Engineer to join the Platform Team and contribute to the development of scalable, secure, and resilient systems that support core platform services and infrastructure. The ideal candidate has strong technical expertise in platform engineering and cloud-native architectures and will work closely with cross-functional teams to deliver high-quality solutions.

- Develop and evolve complex backend services in Node.js (TypeScript)
- Design and optimize federated GraphQL APIs (Gateway + Subgraphs) and REST APIs
- Implement scalable integrations and solutions on GCP (Cloud Functions, Firestore, Cloud Storage)
- Ensure quality and reliability through automated testing, monitoring (Grafana, Sentry), and sound architectural practices
- Automate deploys and provisioning with GitLab CI/CD and Terraform
- Help define technical standards and development best practices
- Contribute to code reviews and the continuous improvement of engineering processes
- Work closely with product and frontend teams to deliver high technical and business value

$170,000–$200,000/yr

Build modern cloud-native apps, event-driven microservices, and agent-integrated workflows. Participate fully in team ceremonies, continuous improvement, release operations, and knowledge sharing, while proactively owning developer-led QA for all releases. Responsibilities include building user interfaces (React/Next, ACE) and backends (Python/FastAPI, Node.js), integrating tightly with Kafka/EventBridge/Redis Streams for async agentic workflows.
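The event-driven pattern this listing describes (producers publishing events that downstream handlers consume asynchronously) can be sketched with a minimal in-memory publish/subscribe stream. This is only an illustration of the pattern, not the company's stack: the stream class, event type, and the `agent.task.created` event name are hypothetical stand-ins for Kafka/EventBridge/Redis Streams topics.

```typescript
// Hypothetical in-memory stand-in for a Kafka/EventBridge/Redis Streams topic.
type AppEvent = { type: string; payload: string };

class InMemoryStream {
  private handlers = new Map<string, Array<(e: AppEvent) => void>>();

  // Register a handler for a given event type.
  subscribe(type: string, fn: (e: AppEvent) => void): void {
    const list = this.handlers.get(type) ?? [];
    list.push(fn);
    this.handlers.set(type, list);
  }

  // Deliver an event to every handler subscribed to its type.
  publish(e: AppEvent): void {
    for (const fn of this.handlers.get(e.type) ?? []) fn(e);
  }
}

const stream = new InMemoryStream();
const processed: string[] = [];

// A consumer in an agentic workflow would react to events like this one.
stream.subscribe("agent.task.created", (e) => processed.push(e.payload));
stream.publish({ type: "agent.task.created", payload: "summarize-report" });
// processed now contains ["summarize-report"]
```

In production the in-memory map would be replaced by a durable broker, so consumers can fail and replay events rather than losing them.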

$154,000–$193,000/yr

You will work primarily on Grafana Cloud's synthetic monitoring application, and may get involved in adjacent projects such as Grafana Cloud k6 and k6 Studio to deliver new features. You'll lead and manage projects throughout their entire lifecycle, from initial ideation and planning, through development and execution, to final delivery, in a fully remote setup.

US Unlimited PTO

You will join the Data Acquisition (DA) team, where you will help build tools that interact with external health data networks to collect information about our patients and load it into the Zus data stores at high volume, as well as services used by customers and internal stakeholders to request that data. You will work on data pipelines that operate on large-scale data using a variety of AWS services.