We are sorry!

This job has been closed and is no longer accepting applications. The job description is provided below for reference.

Location: Noida
Salary: Open
Employment Type: Permanent
Industry: Technology/Online
Sub-industry: Software Company
Function: Internal IT

Company Overview

The client is a global technology company that provides shipping & mailing solutions, powering billions of physical and digital transactions in the connected and borderless world of commerce.

Job Description

The Job
• Develop a framework for the data science model lifecycle (model training, scoring, and monitoring) and its deployment in production, working closely with data scientists and SMEs across geographies on data science projects.
• Develop and execute frameworks for data science model automation.
• Deploy applications via automated CI/CD using Terraform/Ansible on AWS services such as EKS, Elastic Beanstalk, SageMaker, and ECS.
• Build ETL jobs in Spark/Python on AWS EMR and ELT pipelines on Snowflake (see the sketch after this list).
• Work with a highly integrated, cross-discipline agile team, building and supporting modern big data engineering solutions that leverage the enterprise data lake across the Commerce Services and Sendtech business units.
• Oversee the development, execution, and continuation of multiple large, complex data analytics projects.
• Facilitate communication and understanding between business users and the data team.
• Take ownership of all assigned tasks and project-related assignments.
• Organize and lead meetings with the business and project teams.
• Ensure timely completion of tasks in line with project objectives.
• Provide off-hours support for critical production applications when necessary.
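
For illustration only, a minimal sketch of the kind of Spark ETL job this role involves (bucket names, paths, and columns are hypothetical placeholders, not the company's actual pipeline):

    # Minimal PySpark ETL sketch: read raw JSON from S3, clean and
    # aggregate it, and write curated Parquet for downstream ELT into Snowflake.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("example-daily-etl").getOrCreate()

    # Read raw transaction events from S3 (path is illustrative).
    raw = spark.read.json("s3://example-raw-bucket/transactions/")

    # Drop invalid rows and aggregate totals per day.
    daily = (
        raw.filter(F.col("amount") > 0)
           .withColumn("day", F.to_date("event_ts"))
           .groupBy("day")
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("txn_count"))
    )

    # Write curated output back to S3 as Parquet.
    daily.write.mode("overwrite").parquet("s3://example-curated-bucket/daily_txns/")

    spark.stop()

On EMR, a job like this would typically be submitted with spark-submit as an EMR step.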

Requirements

Must have
• Software engineering experience in Python, Scala, Java, or similar programming languages, with the ability to contribute to a Python codebase.
• Scripting experience with shell, the AWS CLI, etc.
• Experience building and deploying scalable data and data science applications in the AWS cloud using EC2, ECS, ECR, EKS, Elastic Beanstalk, Lambda, API Gateway, SageMaker, DynamoDB, and S3 (see the sketch after this list).
• Any experience with CI/CD frameworks such as Ansible and Terraform.
• Experience working with Docker and with CI/CD pipelines using GitLab CI and Jenkins, plus familiarity with infrastructure-as-code principles.
• Work with distributed teams to design and develop frameworks, solution accelerators, proofs of concept, and external customer-facing products
• Evaluate and incorporate new technologies into new and existing frameworks and solutions as applicable
• Collaborate with and mentor members of the team and other coworkers
• Experience, knowledge, and/or training with the Agile/Scrum methodology
• Ability to collaborate effectively and work as part of a team
• Strong attention to detail
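
For illustration only, a minimal sketch of scoring against a deployed SageMaker model from Python via boto3 (the region, endpoint name, and payload shape are hypothetical placeholders):

    # Minimal boto3 sketch: invoke a deployed SageMaker endpoint for scoring.
    import json

    import boto3

    runtime = boto3.client("sagemaker-runtime", region_name="us-east-1")

    payload = {"features": [0.3, 1.2, 5.0]}  # illustrative input record

    response = runtime.invoke_endpoint(
        EndpointName="example-model-endpoint",  # hypothetical endpoint name
        ContentType="application/json",
        Body=json.dumps(payload),
    )

    # The endpoint returns a streaming body; parse it as JSON.
    prediction = json.loads(response["Body"].read())
    print(prediction)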

Additional Information

• Automation and orchestration of ETL jobs using scripts, AWS Data Pipeline, and workflows.
• AWS data streaming and big data technologies such as Amazon Kinesis, Amazon Managed Streaming for Apache Kafka (MSK), Amazon Simple Queue Service (SQS), and Amazon Simple Notification Service (SNS); see the sketch below.
• UG: B.Tech/B.E., or PG: M.S./M.Tech, from an REC or another reputed institute.
• 1 year of experience with any AWS services such as EC2, EMR, SageMaker, S3, or Lambda.
• 1 year of experience with the big data ecosystem.
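
For illustration only, a minimal sketch of publishing an event to Amazon Kinesis from Python via boto3 (the region, stream name, and record shape are hypothetical placeholders):

    # Minimal boto3 sketch: put a single record onto a Kinesis data stream.
    import json

    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    event = {"order_id": "12345", "amount": 42.0}  # illustrative event

    kinesis.put_record(
        StreamName="example-event-stream",  # hypothetical stream name
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["order_id"],  # keys records to shards
    )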