Company Overview
US-based software services company
Role and responsibilities
• Project Management (50%)
• Front Door (requirements, metadata collection, classification, and security clearance)
• Data pipeline template development
• Data pipeline monitoring: development and operational support
• Design, develop, deploy, and maintain production-grade, scalable data transformation, machine learning, and deep learning code and pipelines; manage data and model versioning, training, tuning, serving, and experiment/evaluation tracking dashboards.
• Manage ETL and machine learning model lifecycle: develop, deploy, monitor, maintain, and update data and models in production.
• Build and maintain tools and infrastructure for data processing for AI/ML development initiatives.
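The pipeline-template and monitoring responsibilities above can be sketched in a minimal, stdlib-only Python example. All class and function names here are illustrative assumptions for the pattern (a reusable template that chains steps and records run metrics), not the company's actual framework or stack.

```python
import json
import time
from dataclasses import dataclass, field
from typing import Any, Callable

# Hypothetical sketch: a reusable pipeline template with built-in run
# monitoring (per-step timings plus an output-record count).

@dataclass
class PipelineRun:
    name: str
    metrics: dict = field(default_factory=dict)

class PipelineTemplate:
    """Chains extract -> transform -> load steps and records per-step timings."""

    def __init__(self, name: str):
        self.name = name
        self.steps: list[tuple[str, Callable[[Any], Any]]] = []

    def add_step(self, label: str, fn: Callable[[Any], Any]) -> "PipelineTemplate":
        self.steps.append((label, fn))
        return self

    def run(self, payload: Any) -> PipelineRun:
        run = PipelineRun(self.name)
        for label, fn in self.steps:
            start = time.perf_counter()
            payload = fn(payload)
            run.metrics[label] = time.perf_counter() - start
        run.metrics["records_out"] = len(payload)
        return run

# Usage: a toy ETL flow built from the template; the second record is
# dropped by the transform step because its amount is not numeric.
pipeline = (
    PipelineTemplate("orders_daily")
    .add_step("extract", lambda _: [{"id": 1, "amt": "10"}, {"id": 2, "amt": "x"}])
    .add_step("transform", lambda rows: [r for r in rows if r["amt"].isdigit()])
)
result = pipeline.run(None)
print(json.dumps(result.metrics))  # step timings plus output record count
```

In a real deployment the recorded metrics would be shipped to a monitoring backend rather than printed, which is what the "monitoring development & support" bullet implies.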
Technical skills requirements
The candidate must demonstrate proficiency in the following:
• Experience deploying machine learning models into production environments
• Strong DevOps, data engineering, and ML background on cloud platforms
• Experience with containerization and orchestration (e.g., Docker and Kubernetes)
• Experience with ML training/retraining, model registries, and ML model performance measurement using open-source MLOps frameworks
• Experience building/operating systems for data extraction, ingestion and processing of large data sets
• Experience with MLOps tools such as MLflow and Kubeflow
• Experience in Python scripting
• Experience with CI/CD
• Fluency in Python data tools, e.g., pandas, Dask, or PySpark
• Experience working on large scale, distributed systems
• Python/Scala for data pipelines
• Scala/Java/Python for micro-services and APIs
• HDP, Oracle, and SQL skills; Spark, Scala, Hive, and Oozie; DataOps (DevOps, CDC)
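As a rough illustration of the "extraction, ingestion and processing of large data sets" requirement above, here is a stdlib-only Python sketch of chunked, bounded-memory processing with generators; the function names and sample data are illustrative assumptions, not part of the listing.

```python
import csv
import io
from typing import Iterable, Iterator

# Hypothetical sketch: stream records lazily and process them in
# fixed-size batches so memory stays bounded regardless of input size.

def read_records(csv_text: str) -> Iterator[dict]:
    """Lazily yield rows from CSV text (stands in for a file or object store)."""
    yield from csv.DictReader(io.StringIO(csv_text))

def batched(records: Iterable[dict], size: int) -> Iterator[list[dict]]:
    """Group an iterator into lists of at most `size` records."""
    batch: list[dict] = []
    for rec in records:
        batch.append(rec)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Usage: sum the `amount` column per batch of two records.
raw = "id,amount\n1,10\n2,20\n3,30\n4,40\n5,50\n"
totals = [sum(int(r["amount"]) for r in b) for b in batched(read_records(raw), 2)]
print(totals)  # per-batch sums: [30, 70, 50]
```

The same batching idea is what distributed engines such as Spark apply at cluster scale; this sketch only shows the single-process shape of the pattern.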
Nice-to-have skills
• Jenkins, Kubernetes (K8s)
• Google Cloud certification
• Unix/shell scripting