
Data Engineer Contract

Harnham - Data & Analytics Recruitment
Posted 6 days ago, valid for 12 days
Location

London, Greater London EC1R 0WX

Contract type

Full Time

In order to submit this application, a Reed account will be created for you. As such, in addition to applying for this job, you will be signed up to all Reed’s services as part of the process. By submitting this application, you agree to Reed’s Terms and Conditions and acknowledge that your personal data will be transferred to Reed and processed by them in accordance with their Privacy Policy.

Sonic Summary

  • A client is seeking a Data Engineer to manage data for their feasibility product, focusing on building and maintaining data stores and automation processes.
  • The role involves developing applications to transform production data streams, optimizing ETL/ELT workflows, and automating model training and deployment pipelines.
  • Candidates should have strong experience with Python, FastAPI development, and hands-on experience with SQL databases, along with familiarity with AWS services and CI/CD tools.
  • The position requires at least 3 years of relevant experience in data engineering, with a focus on clean and modular code practices.
  • The salary for this role is competitive and commensurate with experience.

I am working with a client who is looking for a Data Engineer to take ownership of all things data for their feasibility product. This role is essential in building and maintaining data stores, automation, and stream consumers, enabling Data Scientists and Analysts to develop effective algorithms, processes, and reports. As a bridge between software engineering and data science, you'll work within the tech team to develop scalable solutions that meet business needs.

THE ROLE AND RESPONSIBILITIES
  • Develop applications to consume and transform production data streams (Kafka) for analytical and ML use.
  • Build and optimize ETL/ELT workflows to support feasibility models.
  • Automate model training, evaluation, and deployment pipelines.
  • Design and maintain cloud-based data stores using AWS Redshift and other cloud tools.
  • Implement monitoring solutions to ensure data integrity and model performance.
  • Develop FastAPI-based RESTful APIs and microservices to expose feasibility data to other teams.
  • Integrate APIs with CI/CD pipelines to enable automated testing and deployment.
  • Work closely with cross-functional teams to gather requirements and deliver data-driven solutions.
  • Follow best practices for clean, modular, and testable code, conducting code reviews to ensure quality.
  • Identify opportunities to improve the performance, maintainability, and scalability of existing systems.
YOUR EXPERIENCE AND QUALIFICATIONS

*** Please only apply if you have experience developing APIs with FastAPI ***

  • Strong experience with Python or similar languages (e.g., R).
  • Experience developing APIs with FastAPI.
  • Hands-on experience with SQL databases (PostgreSQL preferred).
  • Familiarity with AWS services (S3, SageMaker, RDS, EC2).
  • Experience with CI/CD tools (AWS CodePipeline, GitHub Actions).
  • Proficiency with infrastructure as code tools (Terraform, CloudFormation).
  • Experience with distributed data processing tools (Spark, Dask, or similar).
  • Proficiency in Git version control and collaborative coding practices.
  • Familiarity with Kafka is an advantage.

If all of the above aligns with your experience, please apply using the link below.

