**Senior Data Engineer (AWS Cloud)**
You stay ahead of the latest AWS Cloud and Data Lake technologies.
We are one of the largest and most advanced Data Engineering teams in the country.
Together we build state-of-the-art data solutions that power seamless experiences for millions of customers.
You do work that matters: as a Senior Data Engineer with expertise in software development and programming and a passion for building data-driven solutions, you're at the forefront of AWS Cloud and data warehouse technologies.
This is why we're a great fit for you.
You'll be part of a team of engineers who go above and beyond to improve the standard of digital banking.
We use the latest tech to solve our customers' most complex data-centric problems.
Data is everything. It powers our innovative features and provides seamless experiences for millions of customers from app to branch.
We are responsible for CommBank's key analytics capabilities and work to create world-leading solutions for analytics, information management and decisioning.
We seek people who are:
* Passionate about building next-generation data platforms and data pipeline solutions across the bank.
* Enthusiastic and able to contribute to, and learn from, the wider engineering talent in the team.
* Ready to apply state-of-the-art coding practices, driving high-quality outcomes that meet core business objectives and minimise risk.
* Capable of creating both technology blueprints and engineering roadmaps for a multi-year data transformation journey.
* Able to take the lead and drive a culture where quality, excellence and openness are championed.
* Constantly thinking creatively and breaking boundaries to solve complex data problems.
We are also interested in hearing from people who:
* Are enthusiastic about providing solutions that source data from various enterprise data platforms into the data lake using technologies like Scala, Python and PySpark; transform and process the source data to produce data products; and egress them to other data platforms such as SQL Server, Oracle, Teradata and other cloud platforms.
* Are practised in building effective and efficient Data Lake frameworks, capabilities and features using a common programming language (Scala, PySpark or Python), with proper data quality assurance and security controls.
* Have demonstrated experience creating Python/Scala functions and libraries and using them for config-driven pipeline generation, delivering optimised enterprise-wide data ingestion, data integration and data pipeline solutions for Data Lake and warehouse platforms (see the sketch after this list).
* Able to build group data products or data assets from scratch by integrating large data sets derived from hundreds of internal and external sources.
* Can collaborate, co-create and contribute to existing Data Engineering practices in the team.
* Have experience with, and take responsibility for, data security and data management.
* Have a natural drive to educate, communicate and coordinate with different internal stakeholders.
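To make the config-driven pipeline pattern above concrete, here is a minimal PySpark sketch. The YAML schema, bucket paths and table names are illustrative assumptions, not an actual CommBank framework.

```python
# Minimal sketch of config-driven pipeline generation in PySpark.
# The config schema, S3 paths and pipeline names below are hypothetical.
import yaml
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("config-driven-ingest").getOrCreate()

# Hypothetical pipeline config: one entry per source-to-product mapping.
config = yaml.safe_load("""
pipelines:
  - name: customer_daily
    source:
      format: parquet
      path: s3://example-bucket/raw/customer/
    transform_sql: |
      SELECT customer_id, MAX(updated_at) AS last_seen
      FROM src
      GROUP BY customer_id
    target:
      format: parquet
      path: s3://example-bucket/products/customer_daily/
""")

def run_pipeline(p):
    """Read the source, apply the configured SQL transform, write the product."""
    df = spark.read.format(p["source"]["format"]).load(p["source"]["path"])
    df.createOrReplaceTempView("src")
    product = spark.sql(p["transform_sql"])
    product.write.mode("overwrite").format(p["target"]["format"]).save(p["target"]["path"])

for pipeline in config["pipelines"]:
    run_pipeline(pipeline)
```

Keeping the transform in configuration rather than code is what lets one generic runner generate many ingestion pipelines: adding a pipeline is a config change, not a new deployment.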
**Technical Skills:**
* AWS Services: Airflow, Redshift, Glue, ETL, DMS, EMR, KMS, MSK (Kafka), S3, CFN/CDK
* Developed knowledge of SQL/NoSQL data structures, the Parquet file format and Iceberg; deep understanding of handling complex file formats and structures; deep understanding of traditional data warehouse concepts (facts/dimensions, normalisation) and event-driven data architecture (illustrated in the sketch below).
* Proficient in programming (essential): Python, PySpark, advanced SQL
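As one concrete illustration of the Parquet and Iceberg skills listed above, here is a hedged PySpark sketch that loads a Parquet extract into a partitioned Iceberg table. It assumes the `iceberg-spark-runtime` jar is on the classpath; the catalog name, warehouse path and table identifiers are placeholders.

```python
# Sketch: writing a Parquet source into an Iceberg table from PySpark.
# Catalog name, warehouse path and table names are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("iceberg-demo")
    # Register a Hadoop-backed Iceberg catalog named "demo".
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "s3://example-bucket/warehouse/")
    .getOrCreate()
)

# Load a raw Parquet extract (path is hypothetical).
src = spark.read.parquet("s3://example-bucket/raw/transactions/")

# Create or replace an Iceberg table partitioned by business date,
# a common layout for event-driven warehouse loads.
(
    src.writeTo("demo.analytics.transactions")
    .partitionedBy(F.col("txn_date"))
    .createOrReplace()
)
```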
**Working with us:**
We offer a respectful, inclusive and flexible workplace, with the freedom to work from any of our engineering hubs in Sydney.
We are driven by our values and support you to share ideas, initiatives, and energy.
We make a positive impact for customers, communities and each other every day.
We empower you to do your best work and give you choice on when and where that happens.
We really love working here, and we think you will too.