The ideal candidate must be an Australian Citizen and must be able to obtain a Baseline Security Clearance.

Responsibilities:
- Build, optimise, and manage data pipelines using Python and SQL within Azure Databricks Notebooks.
- Develop and implement ETL/ELT workflows in Azure Data Factory to streamline data transformation and loading.
- Apply best practices in Kimball dimensional modelling and Medallion architecture for scalable, well-structured data solutions.
- Work closely with team members and stakeholders to gather data requirements and translate them into effective technical solutions.
- Set up and maintain CI/CD pipelines in Azure DevOps, ensuring seamless deployments and version control using Git.
- Monitor, debug, and enhance Databricks jobs and queries for optimal performance and efficiency.
- Partner with data analysts and business intelligence teams to deliver well-structured, high-quality datasets for reporting and analytics.
- Uphold data governance, security, and privacy standards to ensure compliance.
- Contribute to code quality through peer reviews, best practices, and knowledge sharing.

Key Skills:
- Expertise in Python for data transformation, automation, and pipeline development.
- Advanced SQL skills for query optimisation and performance tuning within Databricks Notebooks.
- Extensive hands-on experience with Azure Databricks for large-scale data processing.
- Proficiency in Azure Data Factory for orchestrating and automating data workflows.
- Strong experience with Azure DevOps, including CI/CD pipeline setup and Git-based code repository management.
- In-depth knowledge of Kimball dimensional modelling, including fact and dimension tables and star and snowflake schemas, for enterprise data warehousing.
- Familiarity with Medallion architecture for structuring data lakes using bronze, silver, and gold layers.
- Strong understanding of data modelling best practices for analytics and business intelligence.
- Excellent analytical and problem-solving skills, with a proactive approach to identifying and resolving issues.
- Strong collaboration and communication skills, engaging effectively with both technical and business stakeholders.

Essential Criteria:
- Experience preparing data optimised for query performance in cloud compute engines (e.g. distributed compute engines such as Spark, plus Azure, SQL, Python, R).
- Deep understanding of Kimball dimensional modelling and Medallion architecture (illustrative sketches of the kind of work involved appear below).
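To illustrate the Medallion-style pipeline work described above, here is a minimal PySpark sketch of a bronze/silver/gold flow on Databricks with Delta Lake. It is not part of the role's codebase; the source path, table names, and columns (raw sales data, order_id, amount, and so on) are hypothetical.

```python
# Minimal Medallion (bronze/silver/gold) sketch for Databricks; names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw source data as-is, with ingestion metadata.
bronze = (
    spark.read.format("json").load("/mnt/raw/sales/")
    .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.format("delta").mode("append").saveAsTable("bronze.sales_raw")

# Silver: deduplicate, conform types, and drop unusable rows.
silver = (
    spark.table("bronze.sales_raw")
    .dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales")

# Gold: business-level aggregate ready for reporting and analytics.
gold = (
    spark.table("silver.sales")
    .groupBy("order_date", "store_id")
    .agg(F.sum("amount").alias("total_sales"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.daily_store_sales")
```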
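The Kimball dimensional modelling requirement can be pictured with a simple star-schema load: one dimension table with a generated surrogate key and one fact table at order-line grain that references it. Again, table and column names are assumptions for illustration only.

```python
# Illustrative Kimball-style star schema load; names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

# Dimension: one row per customer, with a generated surrogate key.
# (A global row_number is fine for a sketch; production loads would manage keys more carefully.)
dim_customer = (
    spark.table("silver.customers")
    .select("customer_id", "customer_name", "segment")
    .dropDuplicates(["customer_id"])
    .withColumn("customer_key", F.row_number().over(Window.orderBy("customer_id")))
)
dim_customer.write.format("delta").mode("overwrite").saveAsTable("gold.dim_customer")

# Fact: measures at order-line grain, carrying the dimension's surrogate key.
fact_sales = (
    spark.table("silver.sales")
    .join(dim_customer.select("customer_id", "customer_key"), "customer_id", "left")
    .select("customer_key", "order_date", "order_id", "quantity", "amount")
)
fact_sales.write.format("delta").mode("overwrite").saveAsTable("gold.fact_sales")
```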
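Finally, the essential criterion about preparing data optimised for query performance typically means choosing a sensible physical layout. A small sketch under the same assumptions: partition a Delta table by a commonly filtered date column and Z-order by a high-cardinality lookup key (OPTIMIZE ... ZORDER BY is Databricks-specific).

```python
# Sketch of laying out a Delta table for query performance; names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

(
    spark.table("silver.sales")
    .write.format("delta")
    .mode("overwrite")
    .partitionBy("order_date")  # lets date filters prune whole partitions
    .saveAsTable("gold.sales_by_date")
)

# Co-locate related rows within files so lookups on customer_key can skip data.
spark.sql("OPTIMIZE gold.sales_by_date ZORDER BY (customer_key)")
```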