Data Engineer to Build Efficient Pipelines
Job Overview
We are seeking an experienced Data Engineer to join our team in Melbourne, with a strong focus on developing data warehouse solutions using PySpark. In this role, you will work closely with IT and non-technical stakeholders to understand and define requirements for building efficient data pipelines.
Key Responsibilities
* Partner with cross-functional teams to gather requirements and define project scope
* Develop, test, and implement data pipelines to ingest data from various source systems into Snowflake (a brief illustrative sketch follows this list)
* Gather, transform, and present structured and unstructured data in an easy-to-use format for the relevant audience
* Refine processes, and develop and troubleshoot queries to ensure optimal performance
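To give a flavour of the pipeline work described above, here is a minimal PySpark sketch of a batch job that reads source files, applies a light transformation, and loads the result into Snowflake through the Snowflake Spark connector. All paths, credentials, and table names are illustrative placeholders, not details of this role.

```python
# Minimal PySpark ingestion sketch. Assumes the Snowflake Spark
# connector is on the classpath; every path, credential, and table
# name below is a hypothetical placeholder.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

# Ingest structured source data (illustrative landing path).
orders = spark.read.option("header", True).csv("/landing/orders/*.csv")

# Light transformation before loading: deduplicate on the business key.
orders = orders.dropDuplicates(["order_id"])

# Connection options for the Snowflake Spark connector (placeholders).
sf_options = {
    "sfURL": "<account>.snowflakecomputing.com",
    "sfUser": "<user>",
    "sfPassword": "<password>",
    "sfDatabase": "ANALYTICS",
    "sfSchema": "RAW",
    "sfWarehouse": "LOAD_WH",
}

# Append the batch into a Snowflake table.
(orders.write
    .format("net.snowflake.spark.snowflake")
    .options(**sf_options)
    .option("dbtable", "ORDERS")
    .mode("append")
    .save())
```

In practice, credentials would come from a secrets manager and the job would run under an orchestrator rather than ad hoc, but the shape of the work is the same: read, transform, load, verify.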
Requirements
* Extensive experience working in on-prem and cloud-based data warehousing environments
* Strong experience working with MS Fabric and expertise in developing SQL scripts and stored procedures
* Excellent PySpark coding skills; any experience with Snowflake is highly desirable (a short Spark SQL sketch of this kind of query work follows this list)
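As a rough illustration of the SQL and PySpark skills listed above, the sketch below shapes a curated dataset into a presentation-ready summary using Spark SQL. The path, table, and column names are hypothetical.

```python
# Short Spark SQL sketch: aggregate curated data into an
# easy-to-consume daily summary. All names are illustrative.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_summary").getOrCreate()

# Expose a curated dataset (hypothetical path) as a temporary view.
spark.read.parquet("/curated/orders").createOrReplaceTempView("orders")

# Aggregate into a presentation-ready daily summary.
daily_summary = spark.sql("""
    SELECT order_date,
           COUNT(*)          AS order_count,
           SUM(order_amount) AS total_amount
    FROM orders
    GROUP BY order_date
    ORDER BY order_date
""")

daily_summary.show()
```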