
Responsibilities:

  • Design and implement data ingestion and transformation pipelines (batch and near-real-time) using PySpark/SparkSQL on Databricks (see the pipeline sketch after this list).
  • Own data pipelines end-to-end in production: freshness, correctness, availability, and SLA adherence.
  • Build and maintain Delta Lake tables following medallion architecture patterns (bronze/silver/gold).
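As an illustration of the first responsibility, here is a minimal sketch of a batch bronze-to-silver pipeline of the kind this role describes. The table names, paths, and columns (bronze.payments, payment_id, amount) are hypothetical, and it assumes a Databricks runtime where a spark session and Delta Lake are preconfigured.

```python
# Minimal bronze-to-silver batch pipeline sketch (hypothetical tables/paths).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # already provided on Databricks

# Bronze: land raw JSON as-is, stamping load metadata.
raw = (
    spark.read.json("/mnt/landing/payments/")  # hypothetical landing path
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").saveAsTable("bronze.payments")

# Silver: cleanse, conform types, and deduplicate on the business key.
silver = (
    spark.read.table("bronze.payments")
    .where(F.col("amount").isNotNull())
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .dropDuplicates(["payment_id"])
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.payments")
```

In practice a job like this would run as a scheduled Databricks workflow, with the gold layer built as a further aggregation step over silver.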

Required Skills:

  • Apache Spark (PySpark, SparkSQL) — production experience
  • Databricks (jobs, workflows, cluster management, tuning)
  • Delta Lake (ACID tables, OPTIMIZE, VACUUM, schema evolution, MERGE) — see the maintenance sketch after this list
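To make the Delta Lake item concrete, here is a hedged sketch of an idempotent MERGE upsert followed by OPTIMIZE and VACUUM. Table and column names are hypothetical; it assumes a Databricks runtime with Delta's Python bindings available.

```python
# Idempotent upsert plus table maintenance on Delta Lake (hypothetical tables).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Optional: allow MERGE to evolve the target schema if the source adds columns.
spark.conf.set("spark.databricks.delta.schema.autoMerge.enabled", "true")

updates = spark.read.table("silver.payments_updates")  # hypothetical staging table

target = DeltaTable.forName(spark, "silver.payments")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.payment_id = u.payment_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files, then prune stale snapshots (default 7-day retention).
spark.sql("OPTIMIZE silver.payments")
spark.sql("VACUUM silver.payments")
```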

Additional Requirements:

  • Git/GitHub, CI/CD for data pipelines (see the test sketch after this list)
  • Terraform
  • Python for automation and data processing
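The CI/CD and Python items together suggest pipeline code exercised by automated tests on every pull request. Below is a minimal pytest sketch against a local Spark session; the transformation under test (clean_payments) is a hypothetical stand-in for a silver-layer function.

```python
# Minimal CI-style unit test for a PySpark transformation (hypothetical function).
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def clean_payments(df):
    """Hypothetical silver-layer step: drop null amounts, dedupe on the key."""
    return df.where(F.col("amount").isNotNull()).dropDuplicates(["payment_id"])


@pytest.fixture(scope="session")
def spark():
    # Local session so the test runs in a CI runner without a cluster.
    return SparkSession.builder.master("local[2]").appName("ci-tests").getOrCreate()


def test_clean_payments_drops_nulls_and_duplicates(spark):
    rows = [(1, 10.0), (1, 10.0), (2, None)]
    df = spark.createDataFrame(rows, ["payment_id", "amount"])
    out = clean_payments(df).collect()
    assert len(out) == 1
    assert out[0]["payment_id"] == 1
```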

Pismo

Pismo, founded in 2016, provides a comprehensive processing platform for banking, card issuing, and financial market infrastructure. Now part of Visa, with over 500 employees across more than 10 countries, the company empowers firms to build and launch financial products rapidly while meeting high security and availability standards.
