Senior Data Engineer
Remote, India
Full-time
₹20-30 Lakh/year
8-15 yrs
Posted on: Jan 20, 2026
Skills
Python
PySpark
Kafka
AWS
Azure
dbt
Responsibilities
Design and develop scalable data pipelines using AWS services such as S3, Redshift, and Lambda.
Implement real-time data processing solutions using Apache Kafka to handle streaming data efficiently.
Use Python and PySpark for data transformation and analysis, delivering well-tested, production-quality code.
Collaborate with data scientists and analysts to understand data requirements and provide appropriate data solutions.
Conduct data modeling and data warehouse optimization to enhance query performance and support analytics needs.
Build data transformation workflows in dbt and keep analytics code under version control.
Participate in architecture discussions to design modern data infrastructure on Azure that supports future growth and scalability.
Requirements
Bachelor's degree in Computer Science, Engineering, or a related field.
Minimum of 8 years of experience in data engineering or a related field.
Strong proficiency in AWS services and architecture for data processing and storage.
Experience with Apache Kafka and its ecosystem for handling real-time data streams.
Proficient in Python and PySpark with hands-on experience in building data processing applications.
Familiarity with dbt for data transformation and analytics workflow management.
Knowledge of Azure cloud services and how to integrate them into data solutions.