Job listings

We are seeking a highly skilled Lead Data Engineer with strong expertise in PySpark, SQL, and Python, as well as a solid understanding of ETL and data warehousing principles. The ideal candidate will have a proven track record of designing, building, and maintaining scalable data pipelines in a collaborative and fast-paced environment.
As a Senior Microsoft Fabric Data Engineer, you will be responsible for designing, implementing, and managing advanced data engineering solutions using Microsoft Fabric, working closely with an implementation partner to ensure seamless execution and optimization of data platforms for an end client. You will also collaborate with TCS teams to keep project objectives aligned with organizational needs.
Seeking an English-fluent Data Engineer based in Latin America and available to work remotely. The ideal candidate will enhance and maintain the infrastructure powering our analytics and data products, and will ingest new data from files, APIs, and external databases into a centralized data store. You will also implement and enforce data governance policies, ensuring adherence to best practices in data security, integrity, privacy, and regulatory compliance.
We are seeking an experienced Analytics Implementation Engineer to join our growing team. In this role, you will take ownership of tracking and instrumentation across multiple digital products, ensuring data integrity and visibility from front-end tagging to data pipelines. You will work closely with engineering, marketing, and analytics teams to implement robust and scalable solutions that drive data-informed decision-making.
We’re seeking a highly skilled and experienced GenAI Engineer with a strong background in Data Engineering and Software Development to join our team and enhance our information retrieval and generation capabilities. The role requires specific experience with Azure AI Search, data processing for RAG, and multimodal data integration, as well as familiarity with Databricks. You will be responsible for developing a comprehensive framework focused on data ingestion processes (vector databases and text-to-SQL).