
Data Engineer - PySpark Developer

Harvey Nash
Posted 10 hours ago, valid for 19 days
Location

London, Greater London EC1R 0WX

Contract type

Full Time

In order to submit this application, a Reed account will be created for you. As such, in addition to applying for this job, you will be signed up to all Reed’s services as part of the process. By submitting this application, you agree to Reed’s Terms and Conditions and acknowledge that your personal data will be transferred to Reed and processed by them in accordance with their Privacy Policy.

Sonic Summary

  • The job title is PySpark Engineer for a contract engagement with a Tier 1 Investment Bank based in London.
  • The contract rate ranges from £350 to £500 per day, and the position is set for a duration of 10 months with the possibility of becoming permanent.
  • Candidates must have proven experience in data processing and automation with a strong proficiency in PySpark and Apache Spark.
  • The role requires collaboration with cross-functional teams and expertise in CI/CD pipelines (Jenkins, Git).
  • The start date is ASAP, and candidates must be able to start by 31st March.

Job Title: PySpark Engineer - Data Specialist

Engagement: Contract

Rate: £350 - £500pd

Client: Tier 1 Investment Bank

Duration: 10 months to Permanent

Start Date: ASAP (must start by 31st March)

Project:

PySpark/SQL Developer - Investment Banking/Data Processing/Automation - Sought by a Tier 1 investment bank based in London. Hybrid - Contract.

IR35 Status: Inside IR35 (Umbrella)

Key Responsibilities:

  • Develop, maintain, and optimize PySpark data processing pipelines in a fast-paced investment banking environment; hands-on experience with DataFrames, Spark Streaming, Python, Spark, and PySpark is vital.
  • Build and maintain CI/CD pipelines (Jenkins, Git).
  • Collaborate with cross-functional teams, including data engineers and analysts, to implement data-driven solutions tailored for investment banking needs.
  • Leverage PySpark and Apache Spark to efficiently handle large datasets and improve processing efficiency.

Qualifications:

  • Proven experience in data processing and automation.
  • Strong proficiency in PySpark and Apache Spark for data pipeline development.
  • Expertise in CI/CD pipelines (Jenkins, Git).
  • Excellent problem-solving skills, attention to detail, and ability to manage complex datasets.
  • Strong communication skills with the ability to work in a collaborative, fast-paced team environment.
