PySpark Engineer - Data Specialist

Harvey Nash
Posted 10 hours ago, valid for 25 days
Location

London, Greater London EC1R 0WX

Contract type

Full Time

Sonic Summary

  • The role is PySpark Engineer - Data Specialist, offering a contract rate of £700 - £800 per day.
  • The position is with a Tier 1 investment bank in London and requires proven experience in data processing and automation within the investment banking sector.
  • Candidates should have strong proficiency in PySpark and Apache Spark, along with a solid understanding of SQL for optimizing complex queries.
  • Key responsibilities include developing and maintaining PySpark data processing pipelines and automating ETL processes to ensure seamless data flow.
  • This is a 6-month hybrid contract inside IR35, with an ASAP start date.

Job Title: PySpark Engineer - Data Specialist

Engagement: Contract

Rate: £700 - £800pd

Client: Tier 1 Investment Bank

Duration: 6 months

Start Date: ASAP

Project:

PySpark/SQL Developer - Investment Banking/Data Processing/Automation - sought by a Tier 1 investment bank based in London. Hybrid working, contract.

Inside IR35 - Umbrella

Key Responsibilities:

  • Develop, maintain, and optimize PySpark data processing pipelines in a fast-paced investment banking environment.
  • Automate ETL processes (data extraction, transformation, and loading) to ensure seamless data flow across systems; a minimal pipeline sketch follows this list.
  • Collaborate with cross-functional teams, including data engineers and analysts, to implement data-driven solutions tailored for investment banking needs.
  • Leverage PySpark and Apache Spark to handle large datasets and improve processing efficiency.
  • Optimize SQL queries for faster data retrieval and integration across banking systems.
  • Ensure data integrity, quality, and security throughout the data pipeline lifecycle.
  • Troubleshoot and resolve data-related issues to maintain seamless reporting and analytics workflows.

Qualifications:

  • Proven experience in data processing and automation within an investment banking environment.
  • Strong proficiency in PySpark and Apache Spark for data pipeline development.
  • Solid understanding of SQL and experience optimizing complex queries; a query-tuning sketch follows this list.
  • Expertise in automating ETL processes to improve data flow and efficiency.
  • Excellent problem-solving skills, attention to detail, and ability to manage complex datasets.
  • Strong communication skills with the ability to work in a collaborative, fast-paced team environment.
