
Data Engineer

Salt Search
Posted 2 days ago, valid for 6 days
Location

Glasgow, City of Glasgow G62 6EP, Scotland

Salary

£390 per day (Inside IR35)

Contract type

Full Time

Sonic Summary

  • A Data Engineer position is available in a hybrid work setting in Glasgow, offering a day rate of £390 (Inside IR35).
  • The role requires a minimum of 8 years of experience in data engineering, with a strong focus on technologies such as Spark, Python, and Java.
  • Candidates with a background in banking or financial services are preferred due to the domain-specific projects involved.
  • Key responsibilities include developing scalable ETL pipelines, working with large datasets, and maintaining CI/CD pipelines.
  • The ideal applicant should also possess knowledge of HDFS, SQL, and big data ecosystems, along with experience in data wrangling.

Job Title: Data Engineer

Location: Hybrid (2 days in office, Glasgow)

Day Rate: £390/day (Inside IR35)

About the Role:

We are seeking an experienced Data Engineer to join our blue-chip client's dynamic team. The ideal candidate will have extensive hands-on experience in data engineering, particularly with Spark, Python, and Java. A background in banking or financial services is highly desirable, as you will work on projects that require domain-specific knowledge. If you have a deep understanding of big data ecosystems, enjoy solving complex problems, and thrive in a hybrid work environment, this role is for you!

Key Responsibilities:

  • Develop and optimize scalable ETL pipelines using Spark and other big data technologies.
  • Work on large datasets using HDFS, Hive, and similar tools in the big data ecosystem.
  • Implement and maintain CI/CD pipelines to automate data workflows.
  • Collaborate with cross-functional teams to design data solutions and support ongoing analytics efforts.
  • Ensure data quality, availability, and performance in production environments.

Core Skills & Experience Required:

  • 8+ years of experience in data engineering.
  • Hands-on experience with Spark 2.x/3.x.
  • Proficiency in Python, Scala, or Java, with Python preferred.
  • Strong knowledge of HDFS, SQL, and experience in building ETL pipelines.
  • Familiarity with big data ecosystems including data nodes/edge nodes, Hive, and similar technologies.
  • Experience in data wrangling and working with large datasets.
  • Competency with CI/CD pipelines and experience in deploying scalable data solutions.

Desirable:

  • Previous experience in banking or financial services industries.
  • Familiarity with financial data structures and regulatory requirements.
