Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions. Design and oversee the data architecture and infrastructure, ensuring scalability, performance, and security. Create scalable and efficient data processing frameworks, including ETL processes and data pipelines.
20 jobs similar to Data Engineering Architect
Jobs ranked by similarity.
- Design and develop scalable data pipelines and infrastructure to process large volumes of data efficiently
- Collaborate with cross-functional teams to ensure data integrity, accessibility, and usability
- Implement and maintain data quality measures throughout the data lifecycle
CI&T is a tech transformation specialist, uniting human expertise with AI to create scalable tech solutions. With over 8,000 employees around the world, they have a culture that values diverse identities and life experiences, fostering an inclusive and safe work environment.
Provide leadership and guidance to the data engineering team, fostering a collaborative work environment. Offer technical expertise and direction in data engineering, guiding the team in selecting appropriate tools and methodologies. Oversee the design and architecture of data solutions in collaboration with data architects and other stakeholders.
Lingaro has been on the market since 2008, with 1,300+ specialists currently on board across 7 global sites.
- Design and engineer robust data pipelines using technologies like Databricks, Azure Data Factory, Apache Spark, and Delta Lake (see the sketch after this list).
- Craft healthcare data solutions: process massive healthcare datasets, optimize performance, and ensure data is accurate and secure.
- Communicate technical concepts to non-technical stakeholders, manage multiple priorities, and meet deadlines.
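To ground the pipeline bullet above, here is a minimal PySpark and Delta Lake sketch of the kind of ingestion job this role describes. All paths, table names, columns, and the hashing rule are illustrative assumptions, not details from the posting; it assumes a Databricks-style environment where Delta Lake is available.

```python
# Minimal sketch of a healthcare ingestion pipeline with PySpark + Delta Lake.
# Every path, table, and column name here is a hypothetical placeholder.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("claims-ingest").getOrCreate()

# Read raw claims landed by an upstream process (e.g., Azure Data Factory).
raw = spark.read.json("/mnt/landing/claims/")

cleaned = (
    raw.dropDuplicates(["claim_id"])                            # dedupe on the business key
       .withColumn("service_date", F.to_date("service_date"))   # normalize types
       .withColumn("member_ssn", F.sha2(F.col("member_ssn"), 256))  # hash sensitive fields
       .filter(F.col("claim_amount") >= 0)                      # basic quality gate
)

# Persist as a partitioned Delta table for downstream analytics.
(cleaned.write.format("delta")
        .mode("append")
        .partitionBy("service_date")
        .saveAsTable("silver.claims"))
```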
Gentiva offers compassionate care in the comfort of patients' homes as a national leader in hospice, palliative, home health care, and advanced illness management. They have nearly 600 locations and thousands of clinicians across 38 states, offering rewarding careers in a collaborative environment.
- Build and operate data pipelines from D365, Power Platform, and other sources into the enterprise data platform.
- Design and implement star schemas, data lakehouse structures, and semantic models for Power BI (sketched below).
- Optimize performance and cost management for reporting in Azure.
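As a hedged illustration of the star-schema work above, this sketch derives one dimension and one fact table from a flat D365-style extract; every table and column name is an assumption for illustration only.

```python
# Sketch: deriving a star schema (one dimension, one fact) from a flat extract.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical flat extract from D365 landed in the lakehouse.
orders = spark.read.table("bronze.d365_sales_orders")

# Dimension: one row per customer, with a surrogate key for joins.
dim_customer = (
    orders.select("customer_id", "customer_name", "region")
          .dropDuplicates(["customer_id"])
          .withColumn("customer_key", F.monotonically_increasing_id())
)

# Fact: measures at order grain, keyed to the dimension.
fact_sales = (
    orders.join(dim_customer.select("customer_id", "customer_key"), "customer_id")
          .select("customer_key", "order_id", "order_date", "amount")
)

dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")
fact_sales.write.format("delta").mode("overwrite").saveAsTable("gold.fact_sales")
```

A Power BI semantic model would then relate `gold.fact_sales` to `gold.dim_customer` on `customer_key`.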
Jobgether uses an AI-powered matching process to ensure applications are reviewed quickly and fairly. They identify top-fitting candidates and share the shortlist with the hiring company.
- Architect and implement scalable Lakehouse solutions using Delta Tables and Delta Live Tables (see the sketch after this list).
- Design and orchestrate complex data workflows using Databricks Workflows and Jobs.
- Develop production-grade Python and PySpark code, including custom Python libraries.
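For the Delta Live Tables bullet above, a minimal sketch of a DLT pipeline follows. It uses the Databricks `dlt` Python API, but the dataset names, source path, and expectation rule are invented, and the code only runs inside a DLT pipeline (where `spark` is provided).

```python
# Sketch of a Delta Live Tables pipeline using the Databricks `dlt` API.
# Dataset names, the source path, and the expectation are hypothetical.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw events ingested from cloud storage.")
def raw_events():
    return spark.read.format("json").load("/mnt/raw/events/")

@dlt.table(comment="Validated, deduplicated events.")
@dlt.expect_or_drop("valid_id", "event_id IS NOT NULL")  # drop rows failing the check
def clean_events():
    return (
        dlt.read("raw_events")
           .dropDuplicates(["event_id"])
           .withColumn("ingested_at", F.current_timestamp())
    )
```

A Databricks Workflow (or the DLT pipeline's own schedule) would then trigger this on a cadence, which covers the orchestration bullet as well.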
Coderio designs and delivers scalable digital solutions for global businesses with a strong technical foundation and a product mindset.
- Drive data engineering work across project phases, including discovery, design, build, test, deploy, and ongoing improvement.
- Design and build scalable data pipelines using Microsoft Fabric (lakehouses, warehouses, pipelines, dataflows, notebooks), as sketched below.
- Cleanse, model, and transform raw data to support analytics, reporting, semantic modeling, and governance needs.
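To make the Fabric bullet concrete, here is a minimal sketch of the cleanse-and-model step as it might look in a Fabric notebook; the file, columns, and table name are assumptions, and `spark` plus the `Files/` area come from the notebook's attached lakehouse.

```python
# Sketch for a Microsoft Fabric notebook: cleanse a raw file and publish a
# lakehouse table for reporting. All names below are hypothetical.
from pyspark.sql import functions as F

# `spark` is pre-configured in Fabric notebooks; relative `Files/` paths
# resolve against the attached lakehouse.
raw = spark.read.option("header", True).csv("Files/raw/invoices.csv")

modeled = (
    raw.withColumn("invoice_date", F.to_date("invoice_date"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .filter(F.col("invoice_id").isNotNull())
)

# Delta is the default table format in Fabric lakehouses.
modeled.write.mode("overwrite").format("delta").saveAsTable("invoices_clean")
```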
Stoneridge Software helps clients succeed in implementing business software solutions. As a 2025 Top Workplace Honoree and Microsoft Solutions Partner, they take a meticulous approach to project delivery and empower clients' success with long-term support.
- Partner with clients and implementation teams to understand data distribution requirements.
- Design and develop data pipelines integrating with Databricks and Snowflake, ensuring accuracy and integrity (see the sketch after this list).
- Lead architecture and implementation of solutions for health plan clients, optimizing cloud-based technologies.
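As a rough sketch of the Databricks-to-Snowflake integration named above, the snippet below pushes a curated DataFrame to Snowflake through the Spark-Snowflake connector; every connection value and table name is a placeholder, and it assumes a Databricks notebook where `spark` is provided.

```python
# Sketch: writing a curated Databricks table to Snowflake via the
# Spark-Snowflake connector. All connection values are placeholders.
df = spark.read.table("gold.member_claims")  # hypothetical curated table

sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",   # in practice, pull from a secret scope
    "sfDatabase": "ANALYTICS",
    "sfSchema": "PUBLIC",
    "sfWarehouse": "LOAD_WH",
}

(df.write.format("snowflake")
   .options(**sf_options)
   .option("dbtable", "MEMBER_CLAIMS")
   .mode("overwrite")
   .save())
```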
Abacus Insights is changing the way healthcare works by unlocking the power of data to enable the right care at the right time. Backed by $100M from top VCs, they're tackling big challenges in an industry that's ready for change, with a bold, curious, and collaborative team.
- Design and implement data solutions for various cloud data platforms.
- Lead and mentor engineering teams in best practices and technical excellence.
- Deliver end-to-end technical solutions to production, ensuring optimal performance and security.
Jobgether is a remote-first organization that values a culture of technological curiosity, ownership, and trust, and provides a dynamic, casual work environment for top performers. They use an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements.
- Design, develop, and implement end-to-end data pipelines to support data collection and transformation.
- Lead the architecture and development of scalable and maintainable data solutions.
- Collaborate with data scientists and analysts to provide clean and accessible data.
DexCare optimizes time in healthcare, streamlining patient access, reducing waits, and enhancing overall experiences.
As a key member of our Data Engineering team, you will:
- Collaborate with Data Science, Reporting, Analytics, and other engineering teams to build data pipelines, infrastructure, and tooling to support business initiatives.
- Oversee the design and maintenance of data pipelines and contribute to the continual enhancement of the data engineering architecture.
- Collaborate with the team to meet performance, scalability, and reliability goals.
PENN Entertainment, Inc. is North America’s leading provider of integrated entertainment, sports content, and casino gaming experiences.
- Design, develop, and maintain robust data processes and solutions.
- Develop and maintain data models, databases, and data warehouses.
- Collaborate with stakeholders to gather requirements and provide data solutions.
Highmark Health is a national, blended health organization that includes one of America’s largest Blue Cross Blue Shield insurers.
- Design, build, and maintain scalable, high-quality data pipelines.
- Implement robust data ingestion, transformation, and storage using cloud-based technologies.
- Collaborate with stakeholders to understand business goals and translate them into data engineering solutions.
CI&T is a tech transformation specialist, uniting human expertise with AI to create scalable tech solutions. With over 8,000 employees around the world, they have partnerships with more than 1,000 clients and value diversity, fostering a diverse, inclusive, and safe work environment.
- Partner with data scientists and stakeholders to translate business and ML/AI use cases into scalable data architectures.
- Design, develop, and maintain scalable and efficient data pipelines and ETL processes to ingest, process, and transform large volumes of data.
- Build and optimize data storage and processing systems using AWS services to enable efficient data retrieval and analysis.
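One hedged example of the AWS-based storage and retrieval work above: land a processed file in S3, then make its partition visible to Athena. The bucket, database, table, and partition layout are invented for illustration.

```python
# Sketch: land a processed extract in S3, then refresh Athena's view of the
# table's partitions. Bucket, database, and table names are hypothetical.
import boto3

s3 = boto3.client("s3")
athena = boto3.client("athena")

# Land the day's processed extract in the data lake.
s3.upload_file(
    "events_2024-01-01.parquet",
    "example-data-lake",
    "curated/events/dt=2024-01-01/part-0.parquet",
)

# Make the new Hive-style partition visible to downstream queries.
athena.start_query_execution(
    QueryString="MSCK REPAIR TABLE events",
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://example-data-lake/athena-results/"},
)
```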
ATPCO is the world's primary source for airfare content, holding over 200 million fares across 160 countries. They provide technology and data solutions to the travel industry, helping millions of travelers reach their destinations efficiently. ATPCO believes in flexibility, trust, and a culture where your wellbeing comes first.
Work with data end-to-end, exploring, cleaning, and assembling large, complex datasets. Analyze raw data from multiple sources and identify trends and patterns, maintaining reliable data pipelines. Build analytics-ready outputs and models that enable self-service and trustworthy insights across the organization.
Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York, delivering top-tier technology solutions for over two decades.
- Design, develop, and maintain scalable and robust data pipelines.
- Create solutions for data ingestion, transformation, and modeling using Databricks, Spark/PySpark, Cloudera, and Azure Data Factory (ADF).
- Ensure the quality, integrity, and usability of data throughout the entire pipeline.
CI&T specializes in technological transformation, uniting human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters worldwide, they have partnered with over 1,000 clients during their 30-year history and focus on Artificial Intelligence.
- Design, build, and execute data pipelines.
- Build the configurable ETL framework (a minimal sketch follows this list).
- Optimize SQL queries to maximize system performance.
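"Configurable ETL framework" can mean many things; as one minimal sketch, the runner below treats a pipeline as plain configuration (an ordered list of named steps). This is an assumed design for illustration, not CI&T's actual framework.

```python
# Sketch of a configurable ETL framework: a pipeline is declared as data
# (an ordered list of steps) and a small runner executes it.
from dataclasses import dataclass
from typing import Callable, Iterable

Row = dict

@dataclass
class Step:
    name: str
    fn: Callable[[Iterable[Row]], Iterable[Row]]

def run_pipeline(rows: Iterable[Row], steps: list[Step]) -> list[Row]:
    """Apply each configured step in order, logging row counts."""
    data = list(rows)
    for step in steps:
        data = list(step.fn(data))
        print(f"{step.name}: {len(data)} rows")
    return data

# Reorder, add, or remove steps without touching the runner.
pipeline = [
    Step("drop_null_ids", lambda rs: (r for r in rs if r.get("id") is not None)),
    Step("uppercase_name", lambda rs: ({**r, "name": r["name"].upper()} for r in rs)),
]

result = run_pipeline([{"id": 1, "name": "ada"}, {"id": None, "name": "x"}], pipeline)
print(result)  # [{'id': 1, 'name': 'ADA'}]
```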
RefinedScience is dedicated to delivering high-quality emerging tech solutions. While the job description does not contain company size or culture information, the role seems to value innovation and collaboration.
- Build, manage, and operationalize data pipelines for marketing use cases.
- Develop a comprehensive understanding of customer and marketing data requirements.
- Transform large data sets into targeted customer audiences for personalized experiences.
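As a toy illustration of turning customer data into a targeted audience, the pandas snippet below filters on recency, value, and consent; the columns and thresholds are invented, not taken from the posting.

```python
# Sketch: segmenting customer data into a marketing audience with pandas.
# Column names and thresholds are illustrative assumptions.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "days_since_purchase": [12, 200, 35, 400],
    "lifetime_value": [540.0, 80.0, 1200.0, 30.0],
    "email_opt_in": [True, True, False, True],
})

# Audience: recently active, high-value customers who opted in.
audience = customers[
    (customers["days_since_purchase"] <= 90)
    & (customers["lifetime_value"] >= 500)
    & customers["email_opt_in"]
]

audience.to_csv("audience_high_value_active.csv", index=False)  # hand-off file
print(audience["customer_id"].tolist())  # [1]
```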
Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly, objectively, and fairly against the role's core requirements. Our system identifies the top-fitting candidates, and this shortlist is then shared directly with the hiring company.
Own the design, build, and optimization of end-to-end data pipelines. Establish and enforce best practices in data modeling, orchestration, and system reliability. Collaborate with stakeholders to translate requirements into robust, scalable data solutions.
YipitData is the leading market research and analytics firm for the disruptive economy, and most recently raised $475M from The Carlyle Group at a valuation of over $1B.
- Assist in executing data engineering projects within the Customer Intelligence portfolio to meet defined timelines and deliverables.
- Build and maintain ETL pipelines based on user and project specifications to enable reliable data movement.
- Develop and update technical documentation for key systems and data assets.
Stryker is one of the world’s leading medical technology companies and, together with its customers, is driven to make healthcare better.
- Develop data models and pipelines for customer-facing applications, research, reporting, and machine learning.
- Optimize data models to support efficient data storage and retrieval processes for performance and scalability.
- Optimize ETL processes for ingesting, processing, and transforming large volumes of structured and unstructured data into our data ecosystem.
Inspiren offers the most complete and connected ecosystem in senior living, bringing peace of mind to residents, families, and staff.