Job Title: PySpark Engineer - Data Specialist
Engagement: Contract
Rate: £700 - £800pd
Client: Tier 1 Investment Bank
Duration: 6 months
Start Date: ASAP
Project:
PySpark/SQL Developer - Investment Banking/Data Processing/Automation - A Tier 1 investment bank based in London is seeking a PySpark/SQL Developer. Hybrid working - Contract.
Inside IR35 - Umbrella
Key Responsibilities:
- Develop, maintain, and optimize PySpark data processing pipelines in a fast-paced investment banking environment.
- Automate ETL processes (data extraction, transformation, and loading) to ensure seamless data flow across systems.
- Collaborate with cross-functional teams, including data engineers and analysts, to implement data-driven solutions tailored for investment banking needs.
- Leverage PySpark and Apache Spark to process large datasets efficiently and improve pipeline throughput.
- Optimize SQL queries for faster data retrieval and integration across banking systems.
- Ensure data integrity, quality, and security throughout the data pipeline lifecycle.
- Troubleshoot and resolve data-related issues to keep reporting and analytics workflows running smoothly.
Qualifications:
- Proven experience in data processing and automation within an investment banking environment.
- Strong proficiency in PySpark and Apache Spark for data pipeline development.
- Solid understanding of SQL and experience optimizing complex queries.
- Expertise in automating ETL processes to improve data flow and efficiency.
- Excellent problem-solving skills, attention to detail, and ability to manage complex datasets.
- Strong communication skills with the ability to work in a collaborative, fast-paced team environment.