Work alongside Caylent’s Architects, Engineering Managers, and Engineers to deliver AWS solutions.
Build solutions defined in project backlogs, writing production-ready, well-tested, and documented code across cloud environments (see the sketch below).
Participate in Agile ceremonies such as daily standups, sprint planning, retrospectives, and demos.
Caylent is a cloud native services company that helps organizations bring the best out of their people and technology using Amazon Web Services (AWS). They are a global, fully remote company with employees in Canada, the United States, and Latin America, fostering a community of technological curiosity.
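To make the "production-ready, well-tested" expectation concrete, here is a minimal Python sketch of a documented function with a pytest test alongside it; the function, its alias table, and the test are hypothetical, not taken from Caylent's codebase.

```python
# Hypothetical example only: a small documented transformation plus a test.
def normalize_region(raw: str) -> str:
    """Map free-form region names to a canonical AWS region code."""
    aliases = {"virginia": "us-east-1", "oregon": "us-west-2"}  # assumed aliases
    key = raw.strip().lower()
    return aliases.get(key, key)

def test_normalize_region():
    # Discovered and run with `pytest`.
    assert normalize_region("  Virginia ") == "us-east-1"
    assert normalize_region("eu-west-1") == "eu-west-1"
```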
Build modern, scalable data pipelines that keep the data flowing and keep our clients happy.
Design cloud-native infrastructure and automation that supports analytics, AI, and machine learning.
Unify and wrangle data from all kinds of sources: SQL, APIs, spreadsheets, cloud storage, and more (see the sketch below).
Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Their system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company.
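As a rough illustration of unifying SQL, API, spreadsheet, and cloud-storage sources, here is a minimal pandas sketch; the connection string, URL, file names, and column names are all placeholders, not details from the listing.

```python
# A minimal sketch, assuming pandas, requests, SQLAlchemy, and s3fs are installed.
import pandas as pd
import requests
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:pass@host/db")  # placeholder DSN

orders_sql = pd.read_sql("SELECT order_id, amount FROM orders", engine)
orders_api = pd.json_normalize(requests.get("https://api.example.com/orders").json())
orders_xlsx = pd.read_excel("legacy_orders.xlsx")   # spreadsheet export
orders_s3 = pd.read_parquet("s3://bucket/orders/")  # cloud storage (via s3fs)

# Align every source on a shared schema, then build one canonical table.
frames = [orders_sql, orders_api, orders_xlsx, orders_s3]
unified = pd.concat(
    [f[["order_id", "amount"]] for f in frames], ignore_index=True
).drop_duplicates("order_id")
```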
Architect, design, implement, and operate end-to-end data engineering solutions.
Develop and manage robust data integrations with external vendors (see the sketch below).
Collaborate closely with Data Analysts, Data Scientists, DBAs, and cross-functional teams.
SmartAsset is an online destination for consumer-focused financial information and advice, helping people make smart financial decisions. Reaching over 59 million people each month, they operate the SmartAsset Advisor Marketing Platform (AMP) to connect consumers with fiduciary financial advisors.
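For the vendor-integration bullet, a sketch of what "robust" often means in practice: retries with backoff and paginated fetching. The endpoint, auth scheme, and response shape are assumptions for illustration only.

```python
# A hedged sketch of a resilient vendor API client; everything named here
# (URL, page parameter, "results" key) is hypothetical.
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

session = requests.Session()
retries = Retry(total=5, backoff_factor=1.0, status_forcelist=[429, 500, 502, 503])
session.mount("https://", HTTPAdapter(max_retries=retries))

def fetch_vendor_records(base_url: str, token: str):
    """Yield records page by page from a hypothetical vendor API."""
    page = 1
    while True:
        resp = session.get(
            f"{base_url}/records",
            params={"page": page},
            headers={"Authorization": f"Bearer {token}"},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json()["results"]
        if not batch:
            return
        yield from batch
        page += 1
```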
Write and ship a lot of code, working with analysts and stakeholders to refine requirements and debug data sets.
Drive architectural decisions and thoughtfully balance trade-offs in system design, leading cross-functional data initiatives.
Champion high standards in data quality, security, and discoverability, translating complex technical challenges into clear solutions.
ezCater is a technology company that connects workplaces with over 100,000 restaurants nationwide, providing solutions for employee meals and meetings. They are backed by top investors including Insight, Iconiq, Lightspeed, GIC, SoftBank, and Quadrille, and they value work/life harmony.
Build and evolve our semantic layer; design, document, and optimize dbt models.
Develop and maintain ETL/orchestration pipelines to ensure reliable and scalable data flow (see the sketch below).
Partner with data analysts, scientists, and stakeholders to enable high-quality data access and experimentation.
Customer.io's platform is used by over 7,500 companies to send billions of emails, push notifications, in-app messages, and SMS every day. They power automated communication and help teams send smarter, more relevant messages using real-time behavioral data; their culture values empathy, transparency, and responsibility.
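One common shape for the ETL/orchestration bullet is an Airflow DAG that loads raw data and then runs dbt; this is a sketch under that assumption (the DAG id, load script, and project path are invented, and the listing does not say Customer.io uses Airflow).

```python
# A minimal Airflow 2.x sketch; use schedule_interval instead of schedule on
# older Airflow versions. All names and paths are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="elt_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    load_raw = BashOperator(
        task_id="load_raw",
        bash_command="python load_raw.py",  # placeholder extract/load step
    )
    run_dbt = BashOperator(
        task_id="run_dbt",
        bash_command="dbt build --project-dir /opt/dbt",  # models + tests
    )
    load_raw >> run_dbt
```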
Design, build, and maintain robust data pipelines.
Own and scale ETL/ELT processes using tools like dbt, BigQuery, and Python (see the sketch below).
Build modular data models that power analytics, product features, and LLM agents.
Jobgether is a platform that uses AI to match candidates with jobs. They aim to review applications quickly and fairly, ensuring the top-fitting candidates are identified and shared with hiring companies.
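Since the bullet names dbt, BigQuery, and Python together, here is a minimal Python load step into BigQuery that a dbt project could then model on top of; the project, dataset, and file names are placeholders.

```python
# A sketch using google-cloud-bigquery; assumes application-default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder project id
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,
    write_disposition="WRITE_TRUNCATE",
)
with open("events.csv", "rb") as f:  # placeholder export file
    job = client.load_table_from_file(f, "my-project.raw.events", job_config=job_config)
job.result()  # block until the load job finishes
```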
Collaborate with business leaders, engineers, and product managers to understand data needs.
Design, build, and scale data pipelines across a variety of source systems and streams (internal, third-party, and cloud-based), distributed/elastic environments, and downstream applications and/or self-service solutions.
Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
NBCUniversal is one of the world's leading media and entertainment companies, creating world-class content that they distribute across their portfolio of film, television, and streaming and bring to life through their global theme park destinations, consumer products, and experiences. They champion an inclusive culture and strive to attract and develop a talented workforce that creates and delivers a wide range of content reflecting our world.
Build modern, scalable data pipelines that keep the data flowing.
Design cloud-native infrastructure and automation that supports analytics, AI, and machine learning.
Unify and wrangle data from all kinds of sources.
InterWorks is a people-focused tech consultancy that empowers clients with customized, collaborative solutions. They value unique contributions, and their people are the glue that holds their business together.
Design and build robust data pipelines that integrate data from diverse sources.
Build streaming data pipelines using Kafka and AWS services to enable real-time data processing (see the sketch below).
Create and operate data services that make curated datasets accessible to internal teams and external partners.
Quanata aims to ensure a better world through context-based insurance solutions. Backed by State Farm, it is a customer-centered team creating innovative technologies, digital products, and brands, blending Silicon Valley talent with insurer expertise.
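For the Kafka-plus-AWS bullet, a sketch of one simple pattern: a consumer that micro-batches messages into S3. The topic, broker address, bucket, and batch size are placeholders, and kafka-python is just one client choice.

```python
# A hedged sketch of a Kafka-to-S3 micro-batch consumer (kafka-python + boto3).
import json
import time
import boto3
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "telemetry",                      # placeholder topic
    bootstrap_servers=["broker:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
s3 = boto3.client("s3")

batch = []
for message in consumer:
    batch.append(message.value)
    if len(batch) >= 500:             # flush in micro-batches
        key = f"raw/telemetry/{int(time.time())}.json"
        s3.put_object(Bucket="my-data-lake", Key=key,
                      Body=json.dumps(batch).encode("utf-8"))
        batch = []
```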
Architect and maintain scalable, secure, and high-performing data pipelines to support analytics, reporting, and operational needs.
Develop and deploy production-grade data engineering code, ensuring reliability and performance across environments.
Manage end-to-end data workflows, including ingestion, transformation, modeling, and validation for multiple business systems (see the sketch below).
Onebridge, a Marlabs Company, is a global AI and Data Analytics Consulting Firm that empowers organizations worldwide to drive better outcomes through data and technology. Since 2005, they have partnered with some of the largest healthcare, life sciences, financial services, and government entities across the globe.
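To ground the ingestion-transformation-validation bullet, a compact sketch of one such workflow in pandas; the file path, column names, and checks are invented for illustration.

```python
# A minimal ingest -> transform -> validate sketch; the schema is hypothetical.
import pandas as pd

def run_workflow(path: str) -> pd.DataFrame:
    raw = pd.read_csv(path)                                    # ingestion
    clean = (
        raw.rename(columns=str.lower)                          # transformation
           .assign(amount=lambda d: d["amount"].astype(float))
           .dropna(subset=["customer_id"])
    )
    assert clean["customer_id"].is_unique, "duplicate keys"    # validation
    assert (clean["amount"] >= 0).all(), "negative amounts"
    return clean
```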
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and design data models and schemas that facilitate data analysis and reporting.
Design, develop, and maintain scalable and efficient data pipelines and ETL processes to ingest, process, and transform large volumes of data from various sources into usable formats.
Build and optimize data storage and processing systems, including data warehouses, data lakes, and big data platforms, using AWS services such as Amazon Redshift, AWS Glue, AWS EMR, AWS S3, and AWS Lambda, to enable efficient data retrieval and analysis (see the sketch below).
ATPCO is the world's primary source for air fare content. They hold over 200 million fares across 160 countries, and the travel industry relies on their technology and data solutions. ATPCO believes in flexibility, trust, and a culture where your wellbeing comes first.
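Because the bullet names AWS Glue alongside S3 and Redshift, here is a minimal Glue PySpark job skeleton (the awsglue package is only available inside the Glue runtime); the database, table, and output path are placeholders, not ATPCO resources.

```python
# A sketch of a Glue ETL script: read from the Data Catalog, write Parquet to S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

fares = glue.create_dynamic_frame.from_catalog(database="raw", table_name="fares")
glue.write_dynamic_frame.from_options(
    frame=fares,
    connection_type="s3",
    connection_options={"path": "s3://my-curated-bucket/fares/"},
    format="parquet",
)
job.commit()
```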
Design, build, and operate scheduled and event-driven data pipelines for simulation outputs, telemetry, logs, dashboards, and scenario metadata (see the sketch below).
Build and operate data storage systems (structured and semi-structured) optimized for scale, versioning, and replay.
Support analytics, reporting, and ML workflows by exposing clean, well-documented datasets and APIs.
Onebrief builds collaboration and AI-powered workflow software designed specifically for military staffs, transforming staff work to make staffs faster, smarter, and more efficient. The company is all-remote, with employees working alongside customers; it was founded in 2019 and has raised $320m+.
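One way the event-driven bullet is commonly realized on AWS is a Lambda triggered by S3 object-created events; this sketch assumes that pattern, and the bucket layout and "processed" prefix are hypothetical.

```python
# A hedged sketch of an S3-event-driven Lambda handler (boto3 is bundled in Lambda).
import json
import urllib.parse
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        payload = json.loads(s3.get_object(Bucket=bucket, Key=key)["Body"].read())
        # Land a normalized copy under a "processed" prefix to support replay.
        s3.put_object(
            Bucket=bucket,
            Key=f"processed/{key}",
            Body=json.dumps(payload).encode("utf-8"),
        )
```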
Design, build, and maintain scalable data pipelines and warehouses for analytics and reporting.
Develop and optimize data models in Snowflake or similar platforms (see the sketch below).
Implement ETL/ELT processes using Python and modern data tools.
Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. They identify the top-fitting candidates, and this shortlist is then shared directly with the hiring company; the final decision and next steps (interviews, assessments) are managed by the hiring company's internal team.
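For the Snowflake bullet, a minimal sketch of rebuilding a simple model with the official Python connector; the account, credentials, and table names are placeholders.

```python
# A sketch using snowflake-connector-python; all identifiers are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    warehouse="ETL_WH", database="ANALYTICS", schema="MARTS",
)
cur = conn.cursor()
cur.execute("""
    CREATE OR REPLACE TABLE daily_revenue AS
    SELECT event_date, SUM(amount) AS revenue
    FROM RAW.EVENTS
    GROUP BY event_date
""")
cur.close()
conn.close()
```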
Design, build, and optimize data pipelines to centralize data in a modern warehouse (PostHog).
Automate ETL processes and existing spreadsheet-based reports (see the sketch below).
Work closely with finance and business stakeholders to understand ad hoc reporting needs and deliver efficient solutions.
Katapult is a nearshore software development agency that combines the best talent in LATAM with world-class execution and leadership experience and an AI-first approach to product engineering. Katapult works with PMF+ startups and businesses in the United States using a team-augmentation model.
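To illustrate automating a spreadsheet-based report, a sketch of a scheduled script that reads the legacy export, aggregates it, and lands the result in a warehouse table; the file, columns, and connection string are placeholders (the listing's PostHog warehouse is not modeled here).

```python
# A minimal sketch; pd.read_excel needs openpyxl, to_sql needs SQLAlchemy.
import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine("postgresql://user:pass@warehouse/db")

report = pd.read_excel("monthly_bookings.xlsx")   # legacy manual report
summary = report.groupby("region", as_index=False)["revenue"].sum()
summary.to_sql("monthly_revenue_by_region", engine,
               if_exists="replace", index=False)  # idempotent refresh
```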
Ship technically challenging projects end-to-end in a fast-paced, iterative environment.
Propel the business forward and recognize the impact of your work on the company's business metrics.
Own features, services, caches, and databases, including deployment, monitoring, debugging, and testing (see the cache sketch below).
Super.com is on a mission to help people save more, earn more, and get more out of life. They move fast, think big, and always put people first, investing in learning, celebrating bold ideas, and creating pathways for career growth.
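Owning caches usually means patterns like cache-aside; here is a minimal redis-py sketch of it, with the key scheme, TTL, and loader function all assumed for illustration.

```python
# A hedged cache-aside sketch; the Redis address and key layout are placeholders.
import json
import redis

cache = redis.Redis(host="localhost", port=6379)

def get_user_profile(user_id: int, load_from_db) -> dict:
    """Return a profile from cache, falling back to the database on a miss."""
    key = f"profile:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    profile = load_from_db(user_id)              # cache miss: hit the database
    cache.set(key, json.dumps(profile), ex=300)  # 5-minute TTL
    return profile
```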
Enable efficient consumption of domain data as a product by delivering and promoting strategically designed, actionable datasets and data models.
Build, maintain, and improve rock-solid data pipelines using a broad range of technologies such as AWS Redshift, Trino, Spark, Airflow, and Kafka streaming for real-time processing (see the sketch below).
Support teams without data engineers in building decentralised data solutions and product integrations, for example around DynamoDB.
Act as a data ambassador, promoting the value of data and our data platform among engineering teams and enabling cooperation.
OLX operates consumer brands that facilitate trade to build a more sustainable world. They have colleagues around the world who serve millions of people every month.
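Since the bullet lists Spark among its technologies, a minimal PySpark batch sketch in that spirit; the paths, columns, and aggregation are placeholders, not OLX pipelines.

```python
# A sketch of a daily batch aggregation with PySpark.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("listings_daily").getOrCreate()

listings = spark.read.parquet("s3://raw/listings/")  # placeholder raw zone
daily = (
    listings
    .withColumn("event_date", F.to_date("created_at"))
    .groupBy("event_date", "category")
    .agg(F.count("*").alias("listings_created"))
)
(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://curated/listings_daily/"))
```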
Design, develop, and maintain data pipelines (batch and streaming) for ingesting, transforming, and making data available for analytics and consumption by applications.
Build and evolve analytical modeling (bronze/silver/gold layers, data marts, star schemas, wide tables), ensuring consistency, documentation, and reuse.
Implement data quality best practices (tests, validations, contracts, SLAs/SLOs, monitoring of freshness/completeness/accuracy) and help resolve incidents with RCA (see the sketch below).
CI&T specializes in technological transformation, combining human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters around the world, they have partnered with more than 1,000 clients during their 30 years of history.
Build ETL/ELT pipelines for extracting data from sources and placing it in target destinations.
Transform data into formats usable by AI-based solutions.
Manage datasets for AI model training and fine-tuning (see the sketch below).
Jobgether is an AI-powered platform that connects job seekers with employers. They use AI to match candidates with roles and ensure applications are reviewed quickly and fairly.
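For the fine-tuning datasets bullet, a sketch of one common transformation: tabular records into the JSONL format many training pipelines expect; the source file and column names are invented.

```python
# A hedged sketch of building a JSONL training file from a CSV extract.
import json
import pandas as pd

df = pd.read_csv("support_tickets.csv")  # hypothetical source extract

with open("train.jsonl", "w", encoding="utf-8") as out:
    for row in df.itertuples():
        example = {
            "prompt": row.question,      # assumed column names
            "completion": row.answer,
        }
        out.write(json.dumps(example, ensure_ascii=False) + "\n")
```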
Partner with data scientists and stakeholders to translate business and ML/AI use cases into scalable data architectures.
Design, develop, and maintain scalable and efficient data pipelines and ETL processes to ingest, process, and transform large volumes of data.
Build and optimize data storage and processing systems using AWS services to enable efficient data retrieval and analysis.
ATPCO is the world's primary source for air fare content, holding over 200 million fares across 160 countries. They provide technology and data solutions to the travel industry, helping millions of travelers reach their destinations efficiently. ATPCO believes in flexibility, trust, and a culture where your wellbeing comes first.