Senior Big Data Engineer
We are seeking a highly skilled Senior Big Data Engineer to lead the development and maintenance of our data pipeline infrastructure. In this role, you will architect and implement scalable, high-throughput pipelines that process security-related data, ensuring reliability and low latency. You will work closely with cross-functional teams, including Data Scientists, Developers, and Product Managers, to translate business requirements into technical solutions.
About this role
- Architect and Develop Data Pipelines: Design, implement, and maintain high-throughput data pipelines for processing IaaS and SaaS platform activity logs and other security-related data, ensuring scalability, reliability, and low latency.
- Lead Data Infrastructure Upgrades: Upgrade and optimize existing pipelines, including the anomaly detection system, to support large-scale data processing with minimal operational overhead.
- Orchestrate Data Pipeline Workflows: Lead the orchestration strategy for pipelines running up to 10K tasks, improving efficiency, observability, monitoring, and fault tolerance.
- Enhance Code Quality and Testing: Improve the codebase by implementing best practices, enhancing testing infrastructure, and automating testing processes to ensure robust, maintainable pipelines.
- Collaborate Across Teams: Work closely with Data Scientists, Data Analysts, Developers, and Product Managers to understand business requirements and translate them into scalable data solutions.
- Drive Architectural Decisions: Lead discussions on data architecture and contribute to the strategic direction of the data infrastructure to support evolving business needs.
- Mentor and Guide: Share expertise with other team members, providing mentorship and guidance on best practices in data engineering.
- Monitor and Troubleshoot: Ensure the health of data pipelines by implementing monitoring solutions and proactively troubleshooting issues to minimize downtime.
Requirements
- Experience: 5+ years in Data Engineering with a strong background in cloud technologies.
- Big Data Expertise: Skilled in building data processing infrastructure and ETL pipelines on large-scale distributed data platforms, with a strong understanding of stream processing frameworks. Familiarity with data-intensive technologies such as Delta Lake, Iceberg, Kafka, Spark, Flink, SQL, Airflow, Snowflake, and Databricks.
- Coding Skills: Expertise in server-side development with Python, proficiency with CI/CD, Docker, and Git, and familiarity with architecture and design patterns (particularly microservices), as well as SQL and NoSQL databases.
- Adaptability: Technologically diverse background with the ability and willingness to quickly learn new tools and technologies.
- Problem Solving and Collaboration: Highly self-motivated problem solver with excellent communication skills and a team-oriented mindset.
- Agile Mindset: Comfortable working in an agile, dynamic, and fast-paced environment, able to handle pressure and adapt to changing requirements.
Apply now