PySpark Developer - PySpark Specialist

Harvey Nash
Posted 3 days ago, valid for 4 days
Location

London, Greater London EC3V 3LA, England

Salary

£700 - £800 per day

Contract type

Full Time

Sonic Summary

  • The job is for a PySpark Engineer specializing in data processing at a Tier 1 investment bank in London.
  • The contract rate is between £700 and £800 per day, and the position is inside IR35.
  • Candidates should have proven experience in data processing and automation within an investment banking environment.
  • Strong proficiency in PySpark, Apache Spark, and SQL is required, along with expertise in automating ETL processes.
  • The contract duration is 6 months, and the start date is ASAP.

Job Title: PySpark Engineer - Data Specialist

Engagement: Contract

Rate: £700 - £800pd

Client: Tier 1 Investment Bank

Duration: 6 months

Start Date: ASAP

Project:

A PySpark/SQL Developer with investment banking, data processing, and automation experience is sought by a Tier 1 investment bank based in London. Hybrid working. Contract.

Inside IR35 - Umbrella

Key Responsibilities:

  • Develop, maintain, and optimize PySpark data processing pipelines in a fast-paced investment banking environment (an illustrative pipeline sketch follows this list).
  • Automate ETL processes (data extraction, transformation, and loading) to ensure seamless data flow across systems.
  • Collaborate with cross-functional teams, including data engineers and analysts, to implement data-driven solutions tailored for investment banking needs.
  • Leverage PySpark and Apache Spark to handle large datasets and improve processing efficiency.
  • Optimize SQL queries for faster data retrieval and integration across banking systems.
  • Ensure data integrity, quality, and security throughout the data pipeline lifecycle.
  • Troubleshoot and resolve data-related issues to maintain seamless reporting and analytics workflows.
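
For illustration only, below is a minimal sketch of the kind of PySpark ETL pipeline the responsibilities above describe: extract raw data, apply simple cleansing transformations, and load the result partitioned for downstream reporting. The paths, column names, and filter rule are assumptions made up for the example, not details taken from the role.

    # Minimal PySpark ETL sketch. Paths, column names, and the filter rule
    # are illustrative assumptions, not details from the role description.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("trade-etl-sketch").getOrCreate()

    # Extract: read raw data from a hypothetical landing area.
    raw_trades = spark.read.parquet("/data/raw/trades")

    # Transform: deduplicate, drop obviously bad records, derive a partition column.
    cleaned = (
        raw_trades
        .dropDuplicates(["trade_id"])                      # assumed business key
        .filter(F.col("notional") > 0)                     # assumed sanity check
        .withColumn("trade_date", F.to_date("trade_ts"))   # assumed timestamp column
    )

    # Load: write partitioned Parquet for downstream reporting and analytics.
    (
        cleaned.write
        .mode("overwrite")
        .partitionBy("trade_date")
        .parquet("/data/curated/trades")
    )

    spark.stop()

Partitioning the output by a date column, as above, is one common way such pipelines keep downstream queries scanning only the data they need.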

Qualifications:

  • Proven experience in data processing and automation within an investment banking environment.
  • Strong proficiency in PySpark and Apache Spark for data pipeline development.
  • Solid understanding of SQL and experience optimizing complex queries (an illustrative join-tuning sketch follows this list).
  • Expertise in automating ETL processes to improve data flow and efficiency.
  • Excellent problem-solving skills, attention to detail, and ability to manage complex datasets.
  • Strong communication skills with the ability to work in a collaborative, fast-paced team environment.
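
As a companion to the SQL optimization point above, here is a small sketch of one common Spark SQL tuning pattern: pruning columns before a join and broadcasting a small reference table so the large table is not shuffled. Table and column names are illustrative assumptions, not details from the role.

    # Spark SQL tuning sketch: column pruning plus a broadcast join.
    # Table and column names are illustrative assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("sql-tuning-sketch").getOrCreate()

    trades = spark.read.parquet("/data/curated/trades")    # large fact table (assumed)
    books = spark.read.parquet("/data/reference/books")    # small dimension table (assumed)

    enriched = (
        trades.select("trade_id", "book_id", "notional")        # keep only needed columns
        .join(broadcast(books.select("book_id", "desk")),        # broadcast the small side
              on="book_id", how="left")
    )

    # Print the physical plan to confirm Spark chose a broadcast hash join.
    enriched.explain()

Calling explain() prints the physical plan, which is the usual way to verify that the intended join strategy was actually selected.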

Apply now in a few quick clicks

In order to submit this application, a TotalJobs account will be created for you. As such, in addition to applying for this job, you will be signed up to all TotalJobs’ services as part of the process. By submitting this application, you agree to TotalJobs’ Terms and Conditions and acknowledge that your personal data will be transferred to TotalJobs and processed by them in accordance with their Privacy Policy.