- Location: Hybrid - London/Birmingham
- Job Type: Contract
We are looking for a Data Engineer with a strong background in AWS and data pipeline management to join our team. The ideal candidate will have proven expertise in designing and implementing scalable data solutions using AWS technologies. This role requires someone who can work independently in remote team configurations and has a solid understanding of continuous data delivery components.
Day-to-day of the role:
- Collaborate on designing and implementing scalable data solutions using a range of new and emerging technologies from the AWS platform.
- Demonstrate AWS Data expertise when communicating with stakeholders and translating requirements into technical data solutions.
- Manage both real-time and batch data pipelines using technologies such as Spark, Kafka, Amazon Kinesis, Amazon Redshift, and dbt.
- Work independently with minimal supervision in remote team configurations.
- Automate existing manual processes to enhance efficiency and accuracy.
Required Skills & Qualifications:
- Expertise in the main components of continuous data delivery: the setup, design, and delivery of data pipelines, including testing, deployment, monitoring, and maintenance.
- Strong data architecture knowledge across Software-as-a-Service, Platform-as-a-Service, Infrastructure-as-a-Service, and cloud productivity suites.
- Strong engineering background with experience in Python, SQL, and dbt, or similar frameworks used to ship data processing pipelines at scale.
- Demonstrated experience with and solid knowledge of AWS services, including AWS Glue, Amazon EMR, Amazon S3, and Amazon Redshift.
- Familiarity with Infrastructure-as-Code (IaC) using Terraform or AWS CloudFormation.
- Demonstrated experience in data migration projects.
- Ability to work with, communicate effectively with, and influence stakeholders across internal and external engineering teams, product development teams, sales operations teams, and external partners and consumers.