You'll play a crucial role in developing and maintaining scalable data pipelines and infrastructure that drive data analytics and machine learning solutions for our clients. You will design and build robust, scalable data pipelines using Databricks, Apache Spark, and SQL to transform and process large datasets efficiently, and you will collaborate with data architects on data models and architecture that support data analytics and machine learning applications.
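The transform step in a pipeline like this is typically a SQL aggregation over raw events. As a minimal stand-in that runs without a Spark cluster, the same shape of transform can be sketched with Python's built-in sqlite3 (the `events` table and its columns are hypothetical, but the query text would look much the same in Spark SQL):

```python
import sqlite3

# Hypothetical raw events table; in a real Databricks pipeline this would
# be a Spark DataFrame read from cloud storage or a Delta table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, event_type TEXT, value REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(1, "click", 1.0), (1, "purchase", 20.0), (2, "click", 1.0), (2, "purchase", 35.0)],
)

# Transform: aggregate raw events into per-user features for analytics/ML.
rows = conn.execute(
    """
    SELECT user_id,
           SUM(CASE WHEN event_type = 'purchase' THEN value ELSE 0 END) AS revenue,
           COUNT(*) AS n_events
    FROM events
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [(1, 20.0, 2), (2, 35.0, 2)]
```

At scale, the same query would run against a partitioned table so Spark can parallelize the aggregation across executors.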
You will develop a robust platform that supports the creation of high-quality data pipelines and enables fast, efficient end-to-end experimentation, playing a key role in keeping our infrastructure scalable, reliable, and optimized for large-scale data processing.
The Data Engineer / Integration Engineer will design, develop, and maintain scalable data pipelines, integrate various systems, and ensure data quality and consistency across platforms. The role involves building ETL/ELT processes using Python and workflow automation tools; implementing and managing data integration between systems, including APIs and Oracle EBS, is critical.
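An ETL process of the kind described here follows an extract–transform–load shape. A minimal stdlib-only sketch, assuming a hypothetical JSON payload from a source API (a real integration would call the source system over HTTP and load into a warehouse rather than an in-memory SQLite database):

```python
import json
import sqlite3

# Hypothetical API payload; in practice this would come from an HTTP call
# to the source system (e.g. a REST endpoint in front of Oracle EBS).
RAW_PAYLOAD = json.dumps([
    {"id": "1", "name": " Alice ", "amount": "120.50"},
    {"id": "2", "name": "Bob", "amount": "80.00"},
    {"id": "2", "name": "Bob", "amount": "80.00"},  # duplicate record
])

def extract(payload: str) -> list[dict]:
    """Parse the raw JSON payload from the source API."""
    return json.loads(payload)

def transform(records: list[dict]) -> list[tuple]:
    """Enforce data quality: normalize types, trim whitespace, drop duplicate ids."""
    seen, rows = set(), []
    for r in records:
        rid = int(r["id"])
        if rid in seen:
            continue
        seen.add(rid)
        rows.append((rid, r["name"].strip(), float(r["amount"])))
    return rows

def load(rows: list[tuple], conn: sqlite3.Connection) -> None:
    """Load cleaned rows into the target table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, name TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO orders VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_PAYLOAD)), conn)
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(count, total)  # 2 200.5 after deduplication
```

In production, each step would be a task in a workflow automation tool (e.g. an orchestrator DAG), so failures in extract, transform, or load can be retried independently.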