- Initial 6-month contract with potential extension of up to 1–2 years
- Competitive daily rate of $800 to $906.68 plus super
- Hybrid work arrangement; Brisbane or Melbourne based candidates can apply
- Work alongside a highly collaborative team of 25 Data Engineers on exciting projects

Purpose of the Role
You will be responsible for designing and delivering robust data warehouse solutions that support business needs, following best-practice ETL and DataOps principles to ensure the scalability and trustworthiness of data across the organisation. This is a hands-on role focused on building modern cloud-based data pipelines using tools such as Databricks, Spark, ADF, Azure DevOps, SQL, and Python.

Key Responsibilities
- Design, develop, and manage scalable data pipelines supporting transformation, data modelling, schemas, metadata, integration, and workload management
- Build and optimise data pipelines, pipeline architectures, and integrated datasets from transactional (and occasionally streaming) data sources and heterogeneous datasets, using traditional data integration technologies including ETL/ELT, data replication/CDC, message-oriented data movement, and API design and access, as well as emerging ingestion and integration technologies such as stream data integration, CEP, and data virtualisation
- Build pipelines using Databricks on Azure, integrating services such as ADF, Function Apps, Cosmos DB, and CI/CD via Azure DevOps
- Translate business and technical requirements into functional pipeline components
- Collaborate closely with solution designers, analysts, project managers, and data teams
- Interpret business requirements to determine the solution required to implement new functionality within the data warehouse
- Support the delivery of a new data solution from analysis through to implementation
- Apply Agile and DataOps principles to data pipelines to streamline and automate development processes

Essential Skills
- 3 to 5 years in a senior data engineering or similar role
- Bachelor's or Master's degree in Computer Science, Data Management, or a related field
- Hands-on experience with:
  - Databricks and Spark
  - Python
  - SQL Server / SSIS
  - Azure Data Factory and Azure DevOps
  - Azure cloud services such as Cosmos DB and Function Apps
- Sound understanding of data warehousing and cloud data architecture
- 3 years of experience across the design, development, and maintenance of complex data warehouses/data lakes
- Experience building data solutions on Azure using modern pipeline tools

Nice to Have
- Familiarity with DBT, Data Vault, or framework development
- Exposure to real-time data pipelines and streaming ingestion tools
- Understanding of data integration in banking or financial domains
- Certifications in cloud and data technologies will be advantageous

If this sounds like you, please submit your resume by clicking the 'Apply Now' button.

About Us
At easyA, we connect skilled professionals with opportunities that make an impact. As authorised suppliers to multiple government and corporate organisations across NSW, ACT, QLD, and the Federal Government, we specialise in providing expert talent for critical projects. When you work with easyA, you benefit from our strong relationships with contractors and clients alike, ensuring smooth and transparent recruitment processes tailored to your needs.