Role: Big Data Engineer / Big Data Lead Engineer
Location: Sydney
Employment Type: Full-time (Permanent)
Required Skills: Scala, PySpark, Data Pipelines & Databricks

Job Description:
- Design, build, and maintain scalable data pipelines and workflows on the Databricks platform.
- Collaborate with data engineers, data scientists, and analysts to optimize data architecture and performance.
- Implement monitoring, alerting, and automation solutions to ensure the reliability and efficiency of Databricks clusters and jobs.
- Provide technical expertise and support for troubleshooting issues and optimizing data processing and analysis tasks.

Requirements:
- Proficiency in Databricks, Apache Spark, and related big data technologies.
- Experience designing and implementing ETL processes, and familiarity with data modeling and data warehousing concepts.
- Strong programming skills in languages such as Python, Scala, or SQL.
- Familiarity with cloud platforms such as AWS.
- Excellent problem-solving skills and the ability to work effectively in a collaborative team environment.
- Certifications in Databricks or related technologies are desirable.

Interested candidates can share their updated resumes at sourabh.sood@carecone.com.au or reach me on +61 290 559 949.