Job Title: PySpark Engineer - Data Specialist
Engagement: Contract
Rate: £350 - £500pd
Client: Tier 1 Investment Bank
Duration: 10 months, with potential to convert to permanent
Start Date: ASAP (must start by 31st March)
Project:
A PySpark/SQL Developer with investment banking, data processing, and automation experience is sought by a Tier 1 investment bank based in London. Hybrid working - Contract.
Inside IR35 - Umbrella
Key Responsibilities:
- Develop, maintain, and optimize PySpark data processing pipelines in a fast-paced investment banking environment; hands-on experience with DataFrames, Spark Streaming, Python, Spark, and PySpark is essential.
- Build and maintain CI/CD pipelines (Jenkins, Git).
- Collaborate with cross-functional teams, including data engineers and analysts, to implement data-driven solutions tailored for investment banking needs.
- Leverage PySpark and Apache Spark to handle large datasets and improve processing efficiency.
Qualifications:
- Proven experience in data processing and automation.
- Strong proficiency in PySpark and Apache Spark for data pipeline development.
- Expertise in CI/CD pipelines (Jenkins, Git).
- Excellent problem-solving skills, attention to detail, and ability to manage complex datasets.
- Strong communication skills with the ability to work in a collaborative, fast-paced team environment.