Senior Data Engineer required to support the development of an Azure Databricks platform for a leading financial services business.

Responsibilities:
- Design and development of pipelines using Databricks and the Lakehouse
- Optimisation and maintenance of data workflows, ensuring quality and integrity
- Performance tuning and monitoring
- Notebook development using Python and/or PySpark
- Implementation of best practices for data engineering, including governance and security
- Mentoring and coaching of junior Data Engineers

Requirements:
- Extensive experience building data pipelines in a Databricks Lakehouse environment
- Well-versed in Spark and other big data technologies
- Excellent coding skills in SQL and Python
- Expertise working on the Azure platform

Click on the 'Apply' button to submit your CV.