Role : Big Data Engineer
Position : Permanent
Location : Sydney, Australia

Required Skills : Scala, PySpark, Data Pipelines & Databricks

Job Description :
• Design, build, and maintain scalable data pipelines and workflows on the Databricks platform.
• Collaborate with data engineers, data scientists, and analysts to optimize data architecture and performance.
• Implement monitoring, alerting, and automation solutions to ensure the reliability and efficiency of Databricks clusters and jobs.
• Provide technical expertise and support for troubleshooting issues and optimizing data processing and analysis tasks.

Requirements :
• Proficiency in Databricks, Apache Spark, and related big data technologies.
• Experience designing and implementing ETL processes, data modeling, and data warehousing concepts.
• Strong programming skills in languages such as Python, Scala, or SQL.
• Familiarity with cloud platforms such as AWS.
• Excellent problem-solving skills and the ability to work effectively in a collaborative team environment.
• Certifications in Databricks or related technologies are desirable.

Interested candidates can share their resume at rahul.singh@carecone.com.au