I am working with a client who is looking for a Data Engineer to take ownership of all things data for their HR team. This role is essential in building and maintaining data stores, automation, and stream consumers, enabling Data Scientists and Analysts to develop effective algorithms, processes, and reports. As a bridge between software engineering and data science, you'll work within the tech team to develop scalable solutions that meet business needs.
Key Responsibilities
- Design, build, and optimize ETL/ELT workflows in Snowflake for HR data.
- Develop applications to consume and transform HR production data streams (Kafka) for analytical and ML use.
- Architect and maintain cloud-based data stores (AWS Redshift, Snowflake).
- Automate model training, evaluation, and deployment pipelines.
- Work closely with cross-functional teams to gather requirements and deliver data-driven solutions.
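To give a flavor of the stream-consumer work described above, here is a minimal, illustrative sketch in Python of the kind of transformation such a consumer might apply to HR events before loading them into an analytical store. The field names, the `SENSITIVE_FIELDS` set, and the tenure bucketing are hypothetical, not part of the role description:

```python
import json
from datetime import date

# Illustrative only: direct identifiers an HR pipeline might strip
# before events reach analysts (field names are assumptions).
SENSITIVE_FIELDS = {"employee_name", "email", "ssn"}

def transform_event(raw: str) -> dict:
    """Parse a raw JSON HR event, drop sensitive fields,
    and derive a coarse tenure bucket for analytical use."""
    event = json.loads(raw)
    clean = {k: v for k, v in event.items() if k not in SENSITIVE_FIELDS}
    # Derive years of tenure from the hire date's year.
    hire_year = int(event["hire_date"][:4])
    years = date.today().year - hire_year
    clean["tenure_bucket"] = "0-2" if years < 3 else "3-5" if years < 6 else "6+"
    return clean

# Example event, as it might arrive from a Kafka topic:
raw = json.dumps({
    "employee_id": "E1042",
    "employee_name": "Jane Doe",
    "hire_date": "2015-06-01",
    "department": "HR",
})
print(transform_event(raw))
```

In a real deployment this function would sit inside a Kafka consumer loop; keeping the transformation as a pure function like this makes it easy to unit-test independently of the broker.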
Requirements
- Strong experience with Python or a similar language (e.g., R).
- Hands-on experience with SQL databases (PostgreSQL preferred).
- Deep understanding of HR data and its specific challenges.
- Experience with Snowflake & AWS services (S3, SageMaker, RDS, EC2).
- Proficiency with CI/CD tools (AWS CodePipeline, GitHub Actions).
- Experience with Infrastructure as Code tools (Terraform, CloudFormation).
- Familiarity with Kafka and distributed data processing tools (Spark, Dask).
- Excellent communication skills and strategic thinking to work across teams.
If this sounds like you, apply now using the link below!