Job Description

Responsibilities:

  • Design, build, and optimize large-scale distributed data pipelines using Apache Beam / Google Cloud Dataflow, Apache Spark, or Apache Flink.
  • Develop ETL/ELT workflows on GCP integrating multiple structured and unstructured data sources.
  • Address schema, type conversion, and performance optimization challenges when migrating data warehouses to BigQuery.

Requirements:

  • 4+ years of experience as a Data Engineer in modern cloud environments.
  • 2+ years hands-on experience in GCP (BigQuery, Dataflow, Dataform, Composer, Dataproc, Pub/Sub).
  • Strong experience with distributed data processing frameworks such as Apache Beam, Apache Spark, or Apache Flink (hands-on experience with at least one is required).

Company Culture:

  • First rule at Zencore: Be Kind.
  • Own the process: own your customer’s success through proactive practices.
  • Champion a consistent, collaborative culture.

About Zencore

Zencore is a fast-growing company founded by former Google Cloud leaders, architects, and engineers.
