- Design, develop, and maintain scalable and efficient data pipelines using Microsoft Fabric components (Data Factory, Synapse Data Engineering, Dataflows Gen2).
- Extract, transform, and load (ETL/ELT) data from diverse sources, including ERP systems, databases, APIs, and flat files, into the Fabric Lakehouse.
- Implement data quality checks and validation processes to ensure data accuracy and reliability.
- Design and implement data models and schemas within the Fabric Lakehouse to support analytical and reporting requirements.
- Manage and maintain data security and access controls within the Fabric environment.
- Develop data transformations within Fabric Synapse Data Engineering using technologies such as Spark and SQL.
- Collaborate with data analysts and business stakeholders to understand data requirements and translate them into technical specifications.
- Monitor data pipeline performance, identify areas for optimisation, and troubleshoot and resolve data-related issues.
- Implement monitoring and alerting solutions to ensure data pipeline reliability.
- Work closely with other data engineers, data analysts, and business stakeholders.
- Document data pipelines, data models, and other technical specifications.
- Communicate effectively with team members and stakeholders.
- 2+ years of proven experience as a Data Engineer, preferably with the Microsoft stack.
- Strong understanding of data warehousing concepts and principles.
- Hands-on experience with Microsoft Fabric, including Data Factory, Synapse Data Engineering, and Dataflows Gen2, would be highly beneficial.
- Proficiency in SQL and Spark (using Scala, Python, or Spark SQL), as well as ETL/ELT processes and tools.
- Good knowledge of data quality and data governance principles.
- Experience with Power BI for data visualisation and reporting is a plus.