Senior Data Engineer – PayTech Group

12 month contract opportunity
Start: ASAP
Location: Sydney (hybrid)

As a Senior Data Engineer with expertise in software development/programming and a passion for building data-driven solutions, you're ahead of trends and work at the forefront of Big Data and data warehouse technologies.

We are seeking people who are:

- Passionate about building next-generation data platforms and data pipeline solutions across the bank.
- Ready to execute state-of-the-art coding practices, driving high-quality outcomes to solve core business objectives and minimize risks.
- Capable of creating both technology blueprints and engineering roadmaps for a multi-year data transformation journey.
- Experienced in providing data-driven solutions that source data from various enterprise data platforms into a Cloudera Hadoop Big Data environment using technologies such as Spark, MapReduce, Hive, Sqoop, and Kafka; transform and process the source data to produce data assets; and transform and egress data to other platforms such as Teradata or other RDBMS systems.
- Experienced in building effective and efficient Big Data and data warehouse frameworks, capabilities, and features using common programming languages (Scala, Java, or Python), with proper data quality assurance and security controls.
- Experienced in designing, building, and delivering optimized enterprise-wide data ingestion, data integration, and data pipeline solutions for Big Data and data warehouse platforms.
- Confident in building group data products or data assets from scratch by integrating large sets of data derived from hundreds of internal and external sources.
- Able to lead and mentor other data engineers in project work or initiatives.
- Responsible for data security and data management.

Technical Skills

- Experience in designing, building, and delivering enterprise-wide data ingestion, data integration, and data pipeline solutions using common programming languages (Scala, Java, or Python) on a Big Data and data warehouse platform; preferably at least 5 years of hands-on experience in a data engineering role.
- Experience in building data solutions on the Hadoop platform using Spark, MapReduce, Sqoop, Kafka, and various ETL frameworks for distributed data storage and processing; preferably at least 5 years of hands-on experience.
- Strong Unix/Linux shell scripting and programming skills in Scala, Java, or Python.
- Proficient in SQL scripting, including writing complex SQL for building data pipelines.
- Experience in leading and mentoring data engineers, including ownership of internal business stakeholder relationships and working with consultants.
- Experience working in Agile teams, including working closely with internal business stakeholders.
- Familiarity with data warehousing and/or data mart builds in Teradata, Oracle, or another RDBMS system is a plus.
- Certification in Cloudera CDP, Hadoop, Spark, Teradata, AWS, or Ab Initio is a plus.
- Experience with Ab Initio software products (GDE, Co>Operating System, Express>It, etc.) is a plus.
- Experience with AWS technologies (EMR, Redshift, DocumentDB, S3, etc.) is a plus.

Seniority level: Not Applicable
Employment type: Part-time
Job function: Information Technology
Industries: Business Consulting and Services