Job Requirements:
- Strong data engineering skills using Azure and PySpark
- Good knowledge of SQL
- Preferred: experience in big data/Hadoop technologies such as Spark, Hive, HBase, and Kafka
- Preferred: experience in ETL processes
- Good communication skills

Desired Experience Range: 5 – 10 years
Location of Requirement: Perth, Australia
Desired Competencies (Technical/Behavioral Competency):

Must-Have:
- Strong data engineering skills using Azure and PySpark (or Databricks, or Hadoop/Spark using Java/Scala)
- Experience in Azure Data Factory and other Azure services
- Experience loading and transforming data using Spark or other big data technologies (Hive, Kafka, HBase, or Storm)
- Very good SQL knowledge

Good-to-Have:
- ETL process experience on any cloud or on-premises big data platform