Big Data Engineer

Job Description:
Primary Skills: Big Data (Hadoop), Hive, Spark, Scala, SQL
Secondary Skills: DW/ETL, Unix, Git, Bitbucket, Control-M
- 3-5 years of experience developing end-to-end big data pipelines
- Must have experience delivering at least one big data project using Hive, Spark, and Scala
- Thorough understanding of Hadoop and Spark architecture
- Strong SQL skills
- Basic understanding of Unix scripting
- Good understanding of data warehouse/ETL concepts
- Experience with any version control tool (e.g., Git, Bitbucket)
- Experience with any job scheduling tool (e.g., Control-M)
- Good to have: knowledge of Azure or any other cloud platform
- Good to have: experience with Hive query or Spark code optimization