Who we are and how we are different

Nuvento is a Canberra-based specialist technical data consultancy providing consultancy services to both private and public organisations throughout Australia. Our team consists of passionate and highly skilled professionals dedicated to delivering the right solutions with the right tech in the right way, on time, every time. We're a business run by techs, for techs, that challenges the mould and strives to be different.

Our driving principles:
- being generous with our knowledge, spirit, and time
- being accountable and transparent
- dedication to growth, knowledge sharing, and development
- valuing work-life balance
- rewarding our staff for the value they provide

Who we are looking for

We are looking for a Data Engineer with experience in Spark and Kafka to work with one of our large Federal Government clients. As a Data Engineer, you will have the opportunity to work with cutting-edge technologies and collaborate with cross-functional teams to ensure successful data integration.

Your responsibilities:
- Maintain and develop Spark pipelines to process large-scale data sets.
- Ingest tables into our data lake from Kafka using batch, micro-batch, or real-time processing techniques.
- Run Spark jobs for ingestion, curation, and extraction of data.
- Alter schemas and configurations to meet business requirements and ensure data quality.
- Attend meetings regarding data ingestion and collaborate with cross-functional teams to ensure successful data integration.
- Troubleshoot and resolve bugs in the data processing pipelines.
- Monitor performance and troubleshoot issues using logging and monitoring tools.
- Document best practices and develop standards for data processing pipelines.
- Stay up to date with emerging technologies and industry trends in big data processing and analytics.

Requirements:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 3+ years of experience in data engineering, software development, or a related field.
- Strong programming skills in Scala, Python, or Java.
- Expertise in Spark, Kafka, and big data processing concepts.
- Experience with data storage solutions, such as Hadoop HDFS or Amazon S3, and with data serialisation formats.
- Knowledge of database concepts, data modelling, schema design, and normalisation.
- Familiarity with monitoring and logging tools, and with workflow management tools such as Apache Airflow.
- Experience with version control systems, such as Git.

Your attributes and aptitude:
- Strong communication, problem-solving, and collaboration abilities.
- Ability to work both independently and in a team-oriented, collaborative environment.

Security clearance required: all applicants must hold a Baseline clearance (NV1 preferred).

Why work with Nuvento?

We offer competitive compensation packages and benefits, along with opportunities for growth and advancement within the company. As a member of our team, you will have the chance to work on challenging projects, develop your skills, and make a real impact on our business.

If you are a self-motivated and results-oriented data engineer with a passion for big data processing, we encourage you to apply for this exciting opportunity.

How to apply

This role is available to candidates based in Melbourne, Canberra, and Brisbane. To apply for this position, please complete your profile and apply via our LiveHire application portal located with this ad. New positions regularly become available, with flexibility around commencement dates. All new opportunities will be posted via our talent community, as this is our preferred space to source exceptional top talent.