Data Engineer
Payoneer
Application Deadline: 9 days left
Eligibility: Experienced Professionals, Freshers
Details
Payoneer is hiring for the role of Data Engineer!
Responsibilities of the Candidate:
- Build, maintain, and optimize batch and streaming data pipelines that power product and business use cases, using distributed data processing frameworks such as Apache Beam, Spark, or Flink, with managed runners or engines such as Google Cloud Dataflow where relevant.
- Develop curated datasets and dimensional models for analytics and reporting in cloud data warehouses.
- Implement workflow orchestration and automation with an emphasis on reliability, repeatability, and clear failure handling.
- Contribute to event-driven integrations using messaging platforms such as Kafka, building familiarity with core streaming concepts including windowing, late-data handling, replay and backfill strategies, and idempotency.
- Work with operational data stores such as Bigtable, SQL Server, MongoDB, or equivalents where aligned to access patterns, scalability, and performance requirements.
- Strengthen data quality and trust through validation frameworks, pipeline observability, monitoring, and governance-aligned practices.
- Use AI-assisted development tools to improve throughput, for example through faster debugging, automated test scaffolding, and better documentation, and explore data engineering-adjacent AI use cases such as anomaly detection on pipeline or business metrics.
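The streaming concepts named in the responsibilities (windowing, replay handling, idempotency) can be illustrated with a small framework-free sketch. This is not Payoneer's stack; the event shape, 60-second tumbling window, and deduplication key are illustrative assumptions:

```python
from collections import defaultdict

def window_and_dedupe(events, window_seconds=60):
    """Assign events to fixed (tumbling) windows and drop duplicate
    event ids within each window, so replays/backfills are idempotent."""
    seen = set()                  # (window_start, event_id) pairs already processed
    windows = defaultdict(list)   # window_start -> events in that window
    for event in events:
        window_start = event["ts"] - (event["ts"] % window_seconds)
        key = (window_start, event["id"])
        if key in seen:           # duplicate delivery (e.g. a replay) is a no-op
            continue
        seen.add(key)
        windows[window_start].append(event)
    return dict(windows)

events = [
    {"id": "a", "ts": 5,  "amount": 10},
    {"id": "b", "ts": 70, "amount": 20},
    {"id": "a", "ts": 5,  "amount": 10},  # redelivered event, ignored
]
result = window_and_dedupe(events)
print({w: len(evs) for w, evs in result.items()})  # {0: 1, 60: 1}
```

In a production pipeline the same ideas appear as framework features, e.g. fixed windows with allowed lateness in Beam/Dataflow, or keyed deduplication against a state store when consuming from Kafka.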
Requirements
- You have a solid foundation in data engineering and are excited to build and operate reliable data pipelines in production.
- You’re comfortable working across core batch data engineering patterns, and you have some exposure to streaming concepts and distributed processing at scale.
- You enjoy debugging and improving performance and data quality.
- You collaborate well with product, analytics, and business stakeholders and can translate requirements into clear technical tasks.
- You care about engineering hygiene, including testing, documentation, and operational ownership, and you’re open to using AI responsibly to improve your throughput and the quality of what you ship.
- Hands-on experience building and maintaining production data pipelines, with strong SQL and data modelling fundamentals.
- Experience with at least one distributed data processing framework such as Apache Beam, Spark, or Flink.
- Experience with at least one cloud data warehouse such as BigQuery, Snowflake, Redshift, Databricks SQL, or Synapse.
- Familiarity with pipeline orchestration using frameworks such as Airflow, Composer, Prefect, or equivalent.
- Exposure to streaming platforms such as Kafka and an understanding of core streaming concepts, including windowing, late data, replay, and idempotency.
- Understanding of data quality and observability basics, including validation checks, monitoring, and lineage or metadata concepts.
- Experience with at least one major cloud data platform such as Google Cloud, AWS, or Azure.
- Prior exposure to fintech, payments, lending, or broader financial services domains.
- Exposure to automation tools for reporting workflows.
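The "validation checks" mentioned under data quality basics can be as simple as required-field and uniqueness assertions on each batch before it is published. A minimal sketch, assuming rows arrive as dictionaries (the field names are hypothetical):

```python
def validate_rows(rows, required_fields, unique_field):
    """Basic batch validation: every required field is present and non-null,
    and the key field is unique across the batch. Returns a list of errors."""
    errors = []
    seen_keys = set()
    for i, row in enumerate(rows):
        for field in required_fields:
            if row.get(field) is None:
                errors.append(f"row {i}: missing {field}")
        key = row.get(unique_field)
        if key in seen_keys:
            errors.append(f"row {i}: duplicate {unique_field}={key}")
        seen_keys.add(key)
    return errors

errors = validate_rows(
    [{"id": 1, "amount": 5}, {"id": 1, "amount": None}],
    required_fields=["id", "amount"],
    unique_field="id",
)
print(errors)  # ["row 1: missing amount", "row 1: duplicate id=1"]
```

Dedicated frameworks (e.g. Great Expectations or dbt tests) express the same checks declaratively and wire them into orchestration so a failed check halts the pipeline rather than propagating bad data.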
If an employer asks you to pay any kind of fee, please notify us immediately. Unstop does not charge any fee from applicants, nor does it allow other companies to do so.
Important Dates & Deadlines
- Registration Deadline: 1 May '26, 12:00 AM IST
Additional Information
Job Location(s)
Gurgaon
Salary
Salary: Not Disclosed
Work Detail
Working Days: 5 Days
Job Type/Timing
Job Type: In Office
Job Timing: Full Time