This position requires someone to work on complex technical projects and collaborate closely with peers in an innovative, fast-paced environment. For this role, we require someone with a strong product design sense and specialization in Hadoop and Spark technologies, who will grow our analytics capabilities with faster, more reliable tools that handle petabytes of data every day.
As a Microsoft Fabric Architect, you will be the principal authority for designing and leading the implementation of enterprise-scale data solutions for our US-based clients. You will be responsible for creating the strategic vision and technical blueprint for modern data platforms using Microsoft Fabric.
The Cloud Data Engineer will be responsible for developing the data lake platform and all applications on the Azure cloud. Strong data engineering, data modeling, and SQL knowledge are a must, along with a Python programming background. The Data Engineer will also provide design and development solutions for applications in the cloud.
Swiftly is searching for a Data Engineer to join its high-functioning team. This role will help scale our Retailer Platform, which spans multiple technologies including Azure, Databricks, and API development. The platform is an end-to-end system that ingests retailers' raw data and serves it to multiple downstream teams, including the live, production mobile applications.
This role is for a Semi-Senior PySpark Data Engineer who is eager to learn, take initiative, and contribute to the development of high-performance and scalable data pipelines. This is perfect for someone who wants to enhance their technical skills while working on exciting projects within a collaborative team.
The Product Data Analyst is responsible for driving the success of the Finanzguru team in the area of Banking Quality & Insights and delivering measurable results. You will work as an integral part of our product team and dig deep into the structure and details of our banking data. You will own the logic and implementation of all of the product team's KPIs, including the performance of various machine learning models.
We are seeking an experienced Data Engineer with machine learning knowledge to join the team for a US-based client. The role involves migrating from AWS SageMaker to Databricks and building and optimizing cloud models. Daily tasks include migrating ML components, deploying models to production, and collaborating on data integration. The role requires 7 years of data engineering experience, Databricks proficiency, and AWS experience.
As a Senior Data Engineer, you will be responsible for designing, building, and optimizing scalable data pipelines that power AI and analytics solutions for our clients. You will work on the architecture and deployment of data infrastructure, ensuring efficiency, reliability, and security across large-scale datasets, as well as being hands-on in the development process. You may work 100% remotely if you are currently living in LATAM, or you can always join us at the office in Montevideo, Uruguay!
Johnson & Johnson is recruiting for an experienced Data Engineer to play a pivotal role in building the company's modern cloud data platform. This role requires in-depth technical expertise and interpersonal skills to accelerate data product development as part of the fast-paced data platform team. Data engineering responsibilities include developing and maintaining complex SQL queries and building data pipelines in Azure.
We are seeking a highly experienced Data Architect with a strong background in designing scalable data solutions and leading data engineering teams. The ideal candidate will have deep expertise in Microsoft Azure, ETL processes, and modern data architecture principles. This role involves close collaboration with stakeholders, engineering teams, and business units to design and implement robust data pipelines and architectures.