Location: Noida
Salary: Open
Employment Type: Permanent
Industry: Technology/Online
Sub-industry: Enterprise Software
Function: Technology

Company Overview

A global technology company providing shipping and mailing solutions that power billions of physical and digital transactions in the connected, borderless world of commerce.

Job Description

The Job
• Develop frameworks for the data science model lifecycle (model training, scoring, and monitoring) and its deployment to production, working closely with data scientists and SMEs across geographies on data science projects.
• Develop and execute frameworks for data science model automation.
• Deploy applications via automated CI/CD using Terraform/Ansible on AWS services such as EKS, Elastic Beanstalk, SageMaker, and ECS.
• Work with a highly integrated, cross-discipline agile team, building and supporting modern big data engineering solutions that leverage the enterprise data lake across the Commerce Services and Sendtech business units.
• Oversee the development, execution, and continuation of large, complex data analytics projects.
• Facilitate communication and understanding between business users and data team.
• Take ownership of all assigned tasks and project related assignments.
• Organize and lead meetings with the business and project teams.
• Ensure timely completion of tasks in line with project objectives.
• Provide off-hours support for critical production applications when necessary.
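To make the first responsibility concrete, the train → score → monitor lifecycle can be sketched as a tiny Python skeleton. This is purely illustrative and not the company's actual framework: the class, its methods, and the trivial mean-predictor "model" are all invented for the example.

```python
from statistics import mean

class ModelLifecycle:
    """Minimal, hypothetical sketch of a train -> score -> monitor lifecycle."""

    def __init__(self):
        self.model = None          # learned parameter (a mean predictor)
        self.training_mean = None  # baseline used for drift monitoring

    def train(self, values):
        # "Training": fit a trivial mean predictor to the training data.
        self.model = mean(values)
        self.training_mean = self.model
        return self.model

    def score(self, _record):
        # "Scoring": the mean predictor returns the same value for any input.
        if self.model is None:
            raise RuntimeError("train() must be called before score()")
        return self.model

    def monitor(self, new_values, threshold=1.0):
        # "Monitoring": flag drift when the live mean moves past a threshold.
        drift = abs(mean(new_values) - self.training_mean)
        return {"drift": drift, "alert": drift > threshold}
```

A real framework would swap the mean predictor for an actual estimator and push the monitor output to an alerting system; the three-phase shape is the point here.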


Required Qualifications & Skills
This role requires a talented self-directed individual with a strong work ethic and the following skills:

Must have
• Software engineering experience in Python, Scala, Java, or a similar programming language, with the ability to contribute to a Python code base.
• Scripting experience with shell and the AWS CLI.
• Experience architecting, building, and deploying scalable data/data science applications on AWS using ECS, ECR, EKS, Elastic Beanstalk, Lambda, API Gateway, SageMaker, DynamoDB, and S3.
• Experience with the Ansible and Terraform frameworks.
• Experience with Docker and Kubernetes (EKS).
• Operational knowledge of setting up and managing notebook environments (Jupyter/Zeppelin).
• Experience designing internet-scale public APIs.
• Operational experience setting up and maintaining a big data ecosystem on AWS EMR and AWS Data Pipeline.
• Experience working with Docker and with CI/CD pipelines using GitLab CI and Jenkins, and familiarity with infrastructure-as-code principles.
• Hands-on experience delivering production solutions, with an understanding of scalability, reliability, uptime, and cost optimization.
• Participate in design review sessions and ensure all solutions are aligned to pre-defined architectural specifications
• Work with distributed teams to design and develop frameworks, solution accelerators, proofs of concept, and external customer facing products
• Evaluate and incorporate new technologies into new and existing frameworks and solutions as applicable
• Collaborate with and mentor team members and other coworkers
• Experience, knowledge, and/or training with the Agile/Scrum methodology
• Ability to collaborate effectively and work as part of a team
• Strong attention to detail
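As an illustration of the SageMaker deployment work listed above, a pipeline step typically assembles the parameters for SageMaker's CreateEndpointConfig API before handing them to boto3. The helper below is a hedged sketch: the names and defaults are placeholders that would normally come from Terraform/Ansible variables, and the boto3 call itself is shown only as a comment so the snippet stays self-contained.

```python
def build_endpoint_config(config_name, model_name,
                          instance_type="ml.m5.large", instance_count=1):
    """Build the parameter dict for SageMaker's CreateEndpointConfig call.

    All values here are illustrative placeholders, not real deployment settings.
    """
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": instance_type,
                "InitialInstanceCount": instance_count,
                "InitialVariantWeight": 1.0,
            }
        ],
    }

# In a real pipeline the dict would be passed to the SageMaker client, e.g.:
#   sagemaker = boto3.client("sagemaker")
#   sagemaker.create_endpoint_config(**build_endpoint_config("cfg", "model"))
```

Keeping request construction separate from the API call makes the config easy to unit-test and to drive from infrastructure-as-code variables.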

Good to have
• Project management and Stakeholder management experience
• Automation and orchestration of ETL jobs using scripts, pipelines, and workflows.
• Experience with AWS data streaming and big data technologies such as Amazon Kinesis, Amazon Managed Streaming for Apache Kafka (Amazon MSK), Amazon Simple Queue Service (Amazon SQS), and Amazon Simple Notification Service (Amazon SNS)
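At its core, the ETL orchestration mentioned above means running tasks in dependency order. A toy sketch using Python's standard-library graphlib — the task names and pipeline shape are invented for illustration, standing in for a real workflow engine:

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

def run_pipeline(tasks, dependencies):
    """Run ETL tasks in dependency order (a toy stand-in for a workflow engine).

    `tasks` maps task name -> callable; `dependencies` maps task name ->
    set of upstream task names. Returns the execution order.
    """
    order = list(TopologicalSorter(dependencies).static_order())
    for name in order:
        tasks[name]()  # in a real engine this would be retried/monitored
    return order

# Hypothetical three-stage pipeline: extract -> transform -> load.
stages_run = []
tasks = {
    "extract": lambda: stages_run.append("raw"),
    "transform": lambda: stages_run.append("clean"),
    "load": lambda: stages_run.append("warehouse"),
}
deps = {"transform": {"extract"}, "load": {"transform"}}
```

Production orchestrators add scheduling, retries, and monitoring on top of this same dependency-ordering idea.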
Qualification & work experience
• UG - B.Tech/B.E. OR PG – M.S. / M.Tech from REC or any other reputed institute
• 1+ years of experience with AWS SageMaker
• 1+ years of experience in an AWS (EMR, Data Pipeline) development environment, with hands-on experience designing and implementing solutions
• 1+ years of experience with Ansible/Terraform
• 1+ years of experience with Docker/Kubernetes

Job reference: JO-210507-255781

Position: Software Engineer – ML Ops
