Design, develop, and support data pipelines in a hybrid cloud environment to enable advanced analytics. Design, develop, and support CI/CD for data pipelines and microservices.
Develop new services in AWS using serverless and container-based services. Work with Spark clusters and big data ecosystem tools on-premises and in the cloud.
Minimum Qualifications:
Proficient in Python and Spark
Hands-on experience with Azure/AWS/GCP.
Hands-on experience with Data Lake or Data Warehouse
Intermediate to advanced SQL skills
Experience with serverless development
Ability to work and contribute beyond defined responsibilities
Excellent communication and interpersonal skills a must
Aptitude for learning new technologies quickly
Effective problem-solving skills
Ability to work in a fast-paced environment with a "can do" attitude
Preferred Qualifications:
2-5+ years of experience with relevant technologies.
In addition to the minimum qualifications, the following will be considered added advantages:
- Working experience with Python, Airflow, Apache Spark, Apache Beam, Apache Flink, Kubernetes, etc.
- Experience with CI/CD and DevOps
- Working knowledge of OpenShift
- Familiarity with AIOps platforms such as MLflow and AutoML
- Knowledge of Kafka
Bachelor's degree in Computer Engineering, Computer Science, or Information Technology.