About Us
At ANZ, we're exploring new ways to harness technology and data as we work towards a common goal: improving the financial wellbeing and sustainability of our millions of customers.
About the Role
As a Machine Learning Engineer/Data Scientist at ANZ, you are accountable for building advanced models to understand and solve complex data problems.
You will work with Data Engineers and Data Analysts to identify the internal and external data sources needed to develop predictive and descriptive models.
You will apply your analytical skills to a broad range of data points to develop customer-centric solutions.
You will communicate insights and models through impactful data visualisation and storytelling.
With data and analytics at the heart of everything you do, you uncover insights and enable data-driven decision making.
What will your day look like?
Design, train and implement machine learning models using Python and libraries such as scikit-learn, TensorFlow and PyTorch to optimise business processes and automate decision-making.
Deploy and scale machine learning solutions leveraging MLOps practices and CI/CD pipelines.
Monitor model performance and implement corrective action to address any degradation or issues.
Build and automate data pipelines using Airflow, Docker and Kubernetes.
Address and solve complex business issues using large amounts of data.
Develop tools and methods to scientifically profile customers, customer segments, products and channels, along with their associated costs, revenues, risks and opportunities.
Combine, synthesise and analyse data from a variety of sources to support campaigns, pricing, propositions and other decisions.
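The model-building responsibilities above can be pictured as a minimal supervised-learning workflow. The sketch below uses scikit-learn, one of the libraries named in this posting, on a synthetic dataset; the dataset, model choice and parameters are illustrative only, not ANZ code.

```python
# Minimal sketch: train and evaluate a classifier with scikit-learn.
# The synthetic dataset stands in for real customer data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Generate an illustrative binary-classification problem.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

# Fit a baseline model; hyperparameters here are placeholders.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Evaluate with ROC AUC, a common metric for propensity-style models.
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"test ROC AUC: {auc:.3f}")
```

In practice a workflow like this would sit inside an Airflow-orchestrated pipeline with data-quality checks before the training step, as the role description outlines.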
What will you bring?
* Proficiency in programming with Python (including data science libraries such as scikit-learn, TensorFlow and PyTorch).
* Expertise in data query languages such as SQL (Trino, Teradata and ANSI SQL flavours).
* Strong expertise in predictive modelling, pattern recognition, clustering, supervised and unsupervised learning techniques.
* Extensive experience in building and deploying end-to-end pipelines for training, deployment and monitoring using Airflow, with integrated data quality checks to ensure reliability and performance.
* Experience with containerisation using Docker, Kubernetes for orchestrating and scaling ML models, and MLflow for model tracking and versioning.
* Experience using Evidently or similar packages to monitor model performance in production.
* Good understanding of generative AI, the NLP domain and retrieval-augmented generation (RAG) architectures.
* Exposure to LangChain and Hugging Face.
* Strong ability to translate data insights into practical business recommendations.
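The monitoring requirement above (Evidently or similar) boils down to comparing a live feature or score distribution against a training-time reference. The sketch below illustrates the idea with the Population Stability Index (PSI) in plain Python; the data, bin count and the commonly cited 0.1/0.25 thresholds are illustrative assumptions, not part of this posting.

```python
# Sketch of drift monitoring, the idea behind tools like Evidently:
# compute the Population Stability Index (PSI) between a reference
# distribution and a live one. Higher PSI means more drift.
import math
import random

def psi(reference, live, bins=10):
    """Population Stability Index between two samples of one feature."""
    lo = min(min(reference), min(live))
    hi = max(max(reference), max(live))
    width = (hi - lo) / bins or 1.0

    def frac(sample, b):
        count = sum(1 for x in sample
                    if lo + b * width <= x < lo + (b + 1) * width)
        if b == bins - 1:       # fold the top edge into the last bin
            count += sum(1 for x in sample if x == hi)
        return max(count / len(sample), 1e-6)  # avoid log(0)

    return sum(
        (frac(live, b) - frac(reference, b))
        * math.log(frac(live, b) / frac(reference, b))
        for b in range(bins)
    )

random.seed(0)
ref = [random.gauss(0.0, 1.0) for _ in range(2000)]      # training-time data
same = [random.gauss(0.0, 1.0) for _ in range(2000)]     # stable production data
shifted = [random.gauss(0.8, 1.0) for _ in range(2000)]  # simulated drift

print(f"PSI, no drift: {psi(ref, same):.3f}")
print(f"PSI, drifted:  {psi(ref, shifted):.3f}")
```

A monitoring job would run a check like this on a schedule and trigger the corrective actions the role describes (retraining, alerting) once PSI crosses an agreed threshold.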