Design and build data warehouses and data lakes, applying knowledge of data warehouse principles and concepts. Demonstrate expertise in the design and development of complex data pipelines. Use SQL and at least one scripting language to trace and resolve data integrity issues.
Brillio is one of the fastest-growing digital technology service providers and a partner of choice for many Fortune 1000 companies.
Design, implement, and maintain scalable ETL/ELT pipelines using Python, SQL, and modern orchestration frameworks. Build and optimize data models and schemas for cloud warehouses and relational databases, supporting AI and analytics workflows. Lead large-scale data initiatives from planning through execution, ensuring performance, cost efficiency, and reliability.
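For illustration only, a minimal sketch of one such ETL step in Python follows; the orders.csv source, the fact_orders table, and the local SQLite target are hypothetical stand-ins for a real source and cloud warehouse, not details of the role.

# Minimal sketch of an ETL step, assuming a hypothetical orders.csv source
# and a local SQLite table standing in for a cloud warehouse.
import csv
import sqlite3
from datetime import datetime

def run_orders_etl(csv_path: str, db_path: str = "warehouse.db") -> int:
    """Extract rows from CSV, apply light transforms, load into SQLite."""
    with open(csv_path, newline="") as f:
        rows = list(csv.DictReader(f))                # extract

    cleaned = [
        (r["order_id"], r["customer_id"], float(r["amount"]),
         datetime.fromisoformat(r["ordered_at"]).date().isoformat())
        for r in rows
        if r.get("order_id") and r.get("amount")       # transform: drop bad rows
    ]

    with sqlite3.connect(db_path) as conn:             # load
        conn.execute(
            """CREATE TABLE IF NOT EXISTS fact_orders (
                   order_id TEXT PRIMARY KEY,
                   customer_id TEXT,
                   amount REAL,
                   order_date TEXT)"""
        )
        conn.executemany(
            "INSERT OR REPLACE INTO fact_orders VALUES (?, ?, ?, ?)", cleaned
        )
    return len(cleaned)

In practice a function like this would run as a task inside whatever orchestration framework the team uses, with incremental loading and data-quality checks layered on top.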
This position is posted by Jobgether on behalf of a partner company.
Play a key role in ensuring the stability, scalability, and efficiency of data platforms and pipelines. Work at the intersection of data engineering, database administration, and technical operations, supporting critical analytics workflows. Monitor, troubleshoot, and optimize databases and data warehouses, while implementing automation and orchestration solutions using Python.
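As a rough illustration of the automation side of this work, the Python sketch below implements a simple freshness check that an orchestrator could schedule; the fact_orders table, order_date column, and one-day threshold are assumptions made for the example.

# Sketch of an automated freshness check, assuming a hypothetical fact_orders
# table with an order_date column; the SQLite connection and thresholds are
# placeholders for a real warehouse and alerting hook.
import sqlite3
from datetime import date, timedelta

def check_table_freshness(db_path: str, table: str = "fact_orders",
                          max_lag_days: int = 1) -> bool:
    """Return True if the table has rows newer than the allowed lag."""
    with sqlite3.connect(db_path) as conn:
        row = conn.execute(f"SELECT MAX(order_date) FROM {table}").fetchone()
    latest = row[0]
    if latest is None:
        print(f"ALERT: {table} is empty")
        return False
    lag = date.today() - date.fromisoformat(latest)
    if lag > timedelta(days=max_lag_days):
        print(f"ALERT: {table} is {lag.days} days stale")
        return False
    return True

A real deployment would route the alert to a monitoring or paging system rather than printing it.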
This position is posted by Jobgether on behalf of a partner company; Jobgether uses an AI-powered matching process to ensure your application is reviewed quickly.
Work alongside Caylent’s Architects, Engineering Managers, and Engineers to deliver AWS solutions.
Build solutions defined in project backlogs, writing production-ready, well-tested, and documented code across cloud environments.
Participate in Agile ceremonies such as daily standups, sprint planning, retrospectives, and demos.
Caylent is a cloud-native services company that helps organizations bring the best out of their people and technology using Amazon Web Services (AWS). They are a global, fully remote company with employees in Canada, the United States, and Latin America, fostering a community of technological curiosity.
Develop and maintain scalable data pipelines and ETL processes.
Design, build, and optimize data models and databases.
Perform data analysis, data mining, and statistical modeling.
We’re supporting a global fintech and digital currency platform in their search for a Senior Data Engineer to help scale and optimize their analytics and data infrastructure.
Design, build, and maintain scalable and reliable data pipelines.
Develop and maintain ETL data pipelines for large volumes of data, writing clean, maintainable, and efficient code.
Work closely with product managers, data scientists, and software engineers to create and prepare datasets from disparate sources.
Curinos empowers financial institutions to make better, faster, and more profitable decisions through industry-leading proprietary data, technologies, and insights.
As a key member of our Data Engineering team, you will collaborate with Data Science, Reporting, Analytics, and other engineering teams to build data pipelines, infrastructure, and tooling that support business initiatives; oversee the design and maintenance of data pipelines and contribute to the continual enhancement of the data engineering architecture; and work with the team to meet performance, scalability, and reliability goals.
PENN Entertainment, Inc. is North America’s leading provider of integrated entertainment, sports content, and casino gaming experiences.
Work with data end-to-end, exploring, cleaning, and assembling large, complex datasets. Analyze raw data from multiple sources and identify trends and patterns, maintaining reliable data pipelines. Build analytics-ready outputs and models that enable self-service and trustworthy insights across the organization.
Truelogic is a leading provider of nearshore staff augmentation services headquartered in New York, delivering top-tier technology solutions for over two decades.
Own the design, build, and optimization of end-to-end data pipelines. Establish and enforce best practices in data modeling, orchestration, and system reliability. Collaborate with stakeholders to translate requirements into robust, scalable data solutions.
YipitData is the leading market research and analytics firm for the disruptive economy and most recently raised $475M from The Carlyle Group at a valuation of over $1B.
Design, build, and maintain a robust, self-service, scalable, and secure data platform. Create and edit data pipelines, considering business logic, levels of aggregation, and data quality. Enable teams to access and use data effectively through self-service tools and well-modeled datasets.
We are Grupo QuintoAndar, the largest real estate ecosystem in Latin America, with a diversified portfolio of brands and solutions across different countries.
Design and maintain data models that organize rich content into canonical structures optimized for product features, search, and retrieval.
Build high-reliability ETLs and streaming pipelines to process usage events, analytics data, behavioral signals, and application logs.
Develop data services that expose unified content to the application, such as metadata access APIs, indexing workflows, and retrieval-ready representations.
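To make the metadata access API idea concrete, here is a small sketch using FastAPI purely as an illustration (the posting does not name a framework); the CONTENT dict is an in-memory stand-in for the canonical content store.

# Sketch of a metadata access endpoint, using FastAPI only as an illustrative
# choice. The in-memory CONTENT dict stands in for the canonical content store.
from fastapi import FastAPI, HTTPException

app = FastAPI()

CONTENT = {
    "track-001": {"title": "Demo Track", "tags": ["ambient"], "duration_s": 214},
}

@app.get("/content/{content_id}/metadata")
def get_metadata(content_id: str) -> dict:
    """Return the canonical metadata record for a piece of content."""
    record = CONTENT.get(content_id)
    if record is None:
        raise HTTPException(status_code=404, detail="content not found")
    return {"id": content_id, **record}

Served locally with uvicorn (e.g. uvicorn metadata_service:app, where metadata_service is whatever module name the file is saved under), this returns the stored record for known IDs and a 404 otherwise.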
Udio's success hinges on hiring great people and creating an environment where we can be happy, feel challenged, and do our best work.
We're looking for young talent ready to go all in and offering significant equity to people who want to build something that matters: come define the future of AI in influencer marketing.
Influur is redefining how advertising works through creators, data, and AI, aiming to make influencer marketing as measurable, predictable, and scalable as paid ads.