About Us
Honeysuckle Health exists to help people lead healthier lives. As a healthcare services company, we develop and deliver digital and telephonic health programs, health services contracting, and healthcare analytics. We have an exciting opportunity for a mid-level Data Engineer to join Honeysuckle Health.
A career at Honeysuckle Health presents a unique and exciting opportunity to support health consumers, health funds, and providers in applying health care services as effectively as possible.
The Role
As a Data Engineer, you will play a crucial role in designing, building, and maintaining data warehousing solutions and ELT pipelines. You will work closely with both technical and non-technical stakeholders to ensure data alignment with business needs. There are future opportunities to grow into our machine learning workflow and support our machine learning operations (MLOps) processes within the business.
Your responsibilities may include, but are not limited to:

Core Responsibilities
- Take ownership of business-critical data pipelines, identify opportunities, refactor them, and improve their design.
- Improve and design data ingestion patterns, and build reproducible data pipelines that are reliable and observable.
- Contribute to our data quality framework to deliver data of acceptable quality for business use cases (e.g., data quality test cases ensuring data is fresh, accurate, and complete).
- Review and improve designs, practices, and tooling on an ongoing basis.

You must be willing to:
- Lead by example in defining best practices for Data Engineering and DataOps (Data Operations).
- Increase awareness among team members of data concepts such as data lakes, data warehousing, and ingestion patterns.

Other Responsibilities
- Co-design an MLOps (Machine Learning Operations) framework with data scientists and help the team deliver ROI with machine learning and AI (e.g., monitoring ML pipelines and automating retraining when data degrades).
- Design practices around model packaging and serving via REST APIs on the cloud (e.g., choosing Django or FastAPI for model serving).

About Your Skills
Essential (commercial experience not necessary; we appreciate personal experience):
- Experience with data pipelines (ETL/ELT) within AWS (Amazon Web Services).
- Familiarity with data partitioning and clustering concepts.
- Experience orchestrating pipelines with open-source tools such as Airflow, Prefect, Luigi, GitLab, or Oozie.
- Experience with data transformation using Spark or dbt.
- Ability to communicate technical ideas clearly to other team members and business stakeholders.
- Experience defining complex workflows for CI/CD.
- Previous commercial experience on any of the three major clouds (GCP, AWS, Azure).

Ideal but not essential:
- Adding monitoring and alerting to data pipelines.
- Building and consuming REST or GraphQL APIs.
- Streaming data concepts and technologies (e.g., Kafka, Kinesis, Pub/Sub, or any other cloud streaming service).

Bonus points for:
- A degree or diploma in technology, data science, or analytics.
- Building containerized applications on the cloud.
- Knowledge of and passion for serverless architectures.
- Experience with infrastructure as code.
- Previous software engineering or DevOps work experience.
- Fundamental knowledge of machine learning.

Our Stack (100% Cloud)
- Snowflake for data warehousing and data sharing.
- dbt to interact with and deploy our code on Snowflake.
- AWS CodeBuild and CodePipeline with Bitbucket for CI/CD.
- AWS serverless offerings for all our applications.
- Databricks for machine learning.
- Terraform, CloudFormation, and Pulumi for infrastructure as code.
- QuickSight and Tableau for data visualization.

We Offer
- A training budget and time during the week for learning and development.
- Personal development through business books and one-on-one sessions.
- Work with amazing people who have a passion for health care and improving the lives of the community.
- The opportunity to participate in a short-term incentive program.
- Great support for work-life balance, including near-daily fitness activities with the team around our Newcastle office site, ranging from swims and exercise sessions to runs and bike rides.
- A mix of remote and on-site work.
We are currently resourced to work from home; however, we may occasionally encourage you to travel.

Who, Why, and How to Apply
We strongly encourage applications from women, Aboriginal and Torres Strait Islander peoples, people from culturally and linguistically diverse backgrounds, and people with a disability, as we recognize that these groups are underrepresented throughout the technology industry.
We also actively support accessibility requirements if needed.
If you are seeking professional growth and enjoy working on large, distributed, cloud-based applications, coffee runs, and techy conversations, then apply now.
Finally, to apply you must possess the right to work in Australia: we are not set up to offer sponsorship at this time.
Your application will include the following questions:
- Which of the following statements best describes your right to work in Australia?
- How many years' experience do you have as a Data Engineer?
- Which of the following programming languages are you experienced in?
- Which of the following data analytics tools are you experienced with?
- How many years' experience do you have in a DevOps role?
- Have you worked in a role that requires experience with machine learning techniques?
- What's your expected annual base salary?
- How much notice are you required to give your current employer?