Confluent Engineer - Apache Kafka, PySpark, Python

Henderson Scott
Posted 2 days ago, valid for a day
Location

London, Greater London EC1R 0WX

Contract type

Full Time

Sonic Summary

  • We are looking for a Confluent Engineer with 4-7 years of hands-on experience in Kafka and PySpark for a contract role.
  • The position offers a day rate of £500 - £550 outside IR35, which is negotiable.
  • This hybrid role requires you to work 3 days a week in our City of London office, with a contract duration of 6-12 months and potential for extension.
  • Key responsibilities include designing scalable data pipelines, managing Confluent Kafka clusters, and collaborating with cross-functional teams.
  • Ideal candidates should have expertise in distributed systems, real-time data streaming, and strong analytical skills.

Job Title: Confluent Engineer - SME

Day rate: £500 - £550 outside IR35 (negotiable)

Primary Skills: Apache Kafka, PySpark

Secondary Skills: Python

Work Mode: Hybrid - 3 days a week in City of London office

Contract Type: 6-12 months (with potential extension)

About the Role

We are seeking an experienced Confluent Engineer to join our team on a contract basis. In this role, you will design, develop, and maintain high-performance, scalable real-time data pipelines using Kafka and PySpark. You will work collaboratively with cross-functional teams to integrate diverse data sources and ensure reliable, low-latency data streaming solutions.
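
To give a flavour of the Kafka-to-PySpark pipeline work this role involves, here is a minimal sketch. It is illustrative only: the broker address, topic name, event schema, and sink paths are hypothetical placeholders, and it assumes Spark's spark-sql-kafka connector is available on the classpath.

```python
# Minimal sketch of a real-time pipeline of the kind described above.
# All names (brokers, topic, schema, paths) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-sketch").getOrCreate()

# Assumed event schema; a real pipeline would typically source this
# from a Schema Registry rather than hard-coding it.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the raw stream from Kafka.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")  # placeholder broker
    .option("subscribe", "events")                       # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers values as bytes; parse the JSON payload into columns.
parsed = raw.select(
    from_json(col("value").cast("string"), event_schema).alias("event")
).select("event.*")

# Write downstream with checkpointing so the job recovers after failure.
query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/events")              # placeholder sink
    .option("checkpointLocation", "/chk/events") # placeholder checkpoint
    .start()
)
query.awaitTermination()
```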

Key Responsibilities:

  • Design and implement scalable, fault-tolerant data pipelines using Kafka and PySpark for real-time and batch processing.
  • Configure, manage, and optimize Confluent Kafka clusters to support high-throughput, low-latency environments.
  • Build and deploy custom Kafka Connectors, Kafka Streams, and producer/consumer applications (a minimal producer/consumer sketch follows this list).
  • Integrate multiple data sources into the pipeline while ensuring data quality and consistency.
  • Collaborate with data scientists, analysts, and developers to deliver solutions aligned with business needs.
  • Monitor, debug, and optimize Kafka and PySpark applications for high availability and performance.
  • Implement security, fault tolerance, and disaster recovery best practices for Kafka ecosystems.
  • Create and maintain clear documentation for system design, processes, and operational guidelines.
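
As a rough illustration of the producer/consumer work listed above, the sketch below uses the confluent-kafka Python client. The broker, topic, and consumer group names are hypothetical, and a production application would add retries, metrics, and fuller error handling.

```python
# Minimal producer/consumer sketch using the confluent-kafka client.
# Broker address, topic, and group id are hypothetical placeholders.
from confluent_kafka import Producer, Consumer

conf = {"bootstrap.servers": "broker-1:9092"}  # placeholder broker

# --- Producer: asynchronous produce with a delivery callback ---
producer = Producer(conf)

def on_delivery(err, msg):
    # Called once per message to confirm delivery or surface an error.
    if err is not None:
        print(f"delivery failed: {err}")

producer.produce("events", key="order-1", value=b'{"amount": 42}',
                 callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered

# --- Consumer: poll loop with manual commit for at-least-once processing ---
consumer = Consumer({
    **conf,
    "group.id": "events-processor",  # placeholder consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,     # commit only after successful processing
})
consumer.subscribe(["events"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        # Process the record, then commit its offset.
        print(msg.key(), msg.value())
        consumer.commit(message=msg)
finally:
    consumer.close()
```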

Must-Have Skills:

  • 4-7 years of hands-on experience with Kafka and PySpark in production environments.
  • Deep understanding of distributed systems, real-time data streaming, and processing frameworks.
  • Expertise with Confluent Kafka, including Schema Registry, KSQL, and REST Proxy (a Schema Registry example follows this list).
  • Proven track record in building scalable, resilient data pipelines.
  • Strong analytical and problem-solving skills.
  • Excellent communication and documentation abilities.
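
To give a flavour of the Schema Registry integration mentioned above, here is a minimal, assumed sketch using the confluent-kafka client's Avro serializer. The registry URL, topic, and schema are placeholders, not details from this posting.

```python
# Sketch of producing an Avro-encoded record via a Schema Registry.
# Registry URL, broker, topic, and schema are hypothetical placeholders.
from confluent_kafka import Producer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroSerializer
from confluent_kafka.serialization import SerializationContext, MessageField

schema_str = """
{
  "type": "record",
  "name": "Order",
  "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "amount", "type": "double"}
  ]
}
"""

sr_client = SchemaRegistryClient({"url": "http://schema-registry:8081"})
serializer = AvroSerializer(sr_client, schema_str)

producer = Producer({"bootstrap.servers": "broker-1:9092"})

# Serialize against the registered schema, then produce the raw bytes.
value = serializer(
    {"order_id": "o-1", "amount": 99.5},
    SerializationContext("orders", MessageField.VALUE),
)
producer.produce("orders", value=value)
producer.flush()
```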

Nice-to-Have Skills:

  • Experience in Python for scripting and automation.
  • Familiarity with DevOps practices and tools for CI/CD pipelines.

Why Join Us as a Contractor?

  • Competitive daily rates.
  • Opportunity to work on cutting-edge technologies in real-time data streaming.
  • Be part of a collaborative and fast-paced team environment.

If you are a highly skilled Confluent Engineer ready for your next challenge, we'd love to hear from you!
