Lead Data Engineer required to lead the design and development of end-to-end data pipelines for a leading financial services business.

Responsibilities:
- design and development of pipelines using Databricks and the Lakehouse architecture
- optimisation and maintenance of data workflows, ensuring quality and integrity
- performance tuning and monitoring
- notebook development using PySpark and Spark SQL
- implementation of best practices for data engineering, including governance and security
- mentoring and coaching of junior Data Engineers

Requirements:
- extensive experience building data pipelines in a Databricks Lakehouse environment
- well versed in Spark and other big data technologies
- excellent coding skills in SQL and Python
- expertise working on the Azure platform

Click on the 'Apply' button to submit your CV.