Senior Data Engineer
Niyo Solutions
Niyo Solutions is hiring for the role of Senior Data Engineer!
Role Purpose:
As a Senior Data Engineer, your primary responsibility is to ensure the availability, reliability, and efficiency of data within our organization's infrastructure. You will collaborate with cross-functional teams to design, develop, and maintain data pipelines, optimize data storage and retrieval processes, and contribute to overall data architecture. Your expertise in Python, Spark, SQL, Airflow, and AWS will be critical in building scalable and effective data solutions.
Responsibilities of the Candidate:
- Data Pipeline Development: Develop and maintain robust data pipelines for ingesting, transforming, and delivering data from various sources to downstream systems, data lakes, and warehouses.
- Data Transformation: Perform data cleansing, enrichment, and transformation using Python, Spark, and SQL to ensure data quality and consistency across distributed processing workloads (see the illustrative sketch after this list).
- Data Modeling: Design and implement data models and schemas to support analytical and reporting requirements.
- Performance Optimization: Optimize data processing and storage solutions for efficiency and scalability, especially when handling large volumes of data.
- Data Integration: Collaborate with data scientists, analysts, and other teams to integrate data into analytics and machine learning workflows.
- Monitoring and Maintenance: Implement monitoring and alerting systems to ensure data pipeline reliability, and conduct routine maintenance tasks.
- Documentation: Maintain documentation for data pipelines, schemas, and processes to facilitate knowledge sharing and adherence to best practices.
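For candidates unfamiliar with the stack, the following is a minimal sketch of the kind of pipeline the responsibilities above describe: an Airflow DAG (Airflow 2.4+) that runs a daily PySpark cleansing job. The DAG name, bucket paths, column names, and schedule are hypothetical placeholders, not Niyo's actual pipeline.

# A minimal, illustrative Airflow DAG: ingest one day of raw events,
# cleanse them with PySpark, and write a curated output. All names,
# paths, and the schedule are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def cleanse_events(ds: str) -> None:
    """Read a day's raw events, drop malformed rows, write curated output."""
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("cleanse_events").getOrCreate()
    raw = spark.read.json(f"s3://example-bucket/raw/events/{ds}/")  # hypothetical path
    curated = (
        raw.dropDuplicates(["event_id"])                        # de-duplicate on the event key
           .filter(F.col("user_id").isNotNull())                # drop rows missing a user
           .withColumn("event_ts", F.to_timestamp("event_ts"))  # normalize timestamps
    )
    curated.write.mode("overwrite").parquet(f"s3://example-bucket/curated/events/{ds}/")
    spark.stop()


with DAG(
    dag_id="daily_events_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    PythonOperator(
        task_id="cleanse_events",
        python_callable=cleanse_events,
        op_kwargs={"ds": "{{ ds }}"},  # Airflow's execution-date macro
    )

In practice, a pipeline like this would also carry the monitoring, alerting, and documentation duties listed above.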
Requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field (Master's preferred)
- Minimum of 4 years of experience as a Senior Data Engineer or in a similar role
- Familiarity with data warehousing concepts and technologies, such as Delta Lake
- Experience building Big Data Architectures using Spark, Delta Lake, Hadoop, or similar technologies
- Proficient in Python for data manipulation and transformation
- Strong skills in Apache Spark for distributed data processing in both batch and real-time scenarios
- Advanced SQL skills for data querying and optimization
- Experience with workflow management tools like Apache Airflow
- Understanding of data security and privacy principles
- Excellent problem-solving and analytical abilities
- Strong communication and collaboration skills to work effectively in a cross-functional team environment
- Ability to thrive in a fast-paced environment and manage multiple projects simultaneously
- Continuous learning mindset to stay updated with the latest industry trends and technologies
Important Dates & Deadlines
- Registration Deadline: 18 Apr '24, 12:00 AM IST
Contact the Organisers
Send queries to the organisers.
Additional Information
- Job Location(s): Bangalore
- Salary: Not Disclosed
- Working Days: 5 Days
- Job Type: In Office
- Job Timing: Full Time