Role Overview:

  • Design and lead modern Lakehouse data platforms using Databricks.
  • Focus on building scalable, high-performance data pipelines.
  • Enable analytics and AI use cases on cloud-native data platforms.

Key Responsibilities:

  • Design and optimize batch & streaming data pipelines using Apache Spark (PySpark/SQL).
  • Implement Delta Lake best practices (ACID, schema enforcement, time travel, performance tuning).
  • Build and manage Databricks jobs, workflows, notebooks, and clusters.

Must-Have Skills:

  • 10+ years in data engineering / data architecture.
  • 5+ years of strong hands-on experience with Databricks.
  • Expertise in Apache Spark, PySpark, and SQL.

The hiring team is seeking a Databricks Architect to design and lead modern Lakehouse data platforms using Databricks. The role focuses on building scalable, high-performance data pipelines and enabling analytics and AI use cases on cloud-native data platforms.
