MLOps: applying the DevOps philosophy to Machine Learning projects
September 2, 2020
What is MLOps?
MLOps is the application of DevOps principles to Machine Learning (ML) systems and projects. MLOps is a culture and practice that aims to combine Machine Learning system development and Machine Learning system operations. The goal of incorporating MLOps into an ML project life cycle is to achieve continuous integration (CI), continuous delivery (CD) and continuous training (CT), and to automate as much of the process as possible.
Training a machine learning model that solves a business problem and deploying that model into production for the business to use are two very different processes, and they require different skills and tools. Furthermore, machine learning projects differ from traditional software development projects, and deploying ML projects into a production environment is still quite a new discipline. Deeplearning.ai has reported that only 22 percent of companies using machine learning have successfully deployed a model (read more here).
With success rates this low, any organisation that wants to get value from its machine learning models should lay the foundation for a successful MLOps practice.
Azure Implementation of MLOps level 1: ML pipeline automation
Once the data scientist has conducted an experiment, they push their code into a Git repository.
Pushing the code triggers a build pipeline that runs several tasks, including setting up infrastructure and running tests. One of the tasks in this pipeline runs a Machine Learning pipeline.
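The build stage can be pictured as an ordered list of tasks that halts on the first failure, with the ML pipeline as just another task. This is a minimal sketch only; the task names and the `run_build_pipeline` helper are illustrative, not Azure DevOps APIs:

```python
from typing import Callable, List, Tuple

def run_build_pipeline(tasks: List[Tuple[str, Callable[[], bool]]]) -> bool:
    """Run each build task in order, stopping at the first failure."""
    for name, task in tasks:
        ok = task()
        print(f"{name}: {'ok' if ok else 'FAILED'}")
        if not ok:
            return False
    return True

# Hypothetical tasks standing in for real CI steps.
tasks = [
    ("provision infrastructure", lambda: True),
    ("run unit tests", lambda: True),
    ("run ML pipeline", lambda: True),  # kicks off the training pipeline
]

succeeded = run_build_pipeline(tasks)
```

In a real Azure DevOps build, each entry would be a pipeline task rather than a Python callable, but the fail-fast ordering is the same.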
The Machine Learning pipeline covers the typical steps in an experiment, which can include data preparation, model training, model tuning and model validation. The final step of this pipeline must register the model artifacts.
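The train → validate → register flow can be sketched end to end with a deliberately toy model. Everything here is illustrative (the threshold classifier, the accuracy bar, the file-based "registry"); a real pipeline would train a proper model and register it with a model registry service:

```python
import json
import pickle
import statistics
import tempfile
from pathlib import Path

# Toy prepared data: (feature, label) pairs.
train = [(0.1, 0), (0.3, 0), (0.4, 0), (0.7, 1), (0.8, 1), (0.9, 1)]
valid = [(0.2, 0), (0.85, 1)]

# "Training": place a threshold midway between the two class means.
mean0 = statistics.mean(x for x, y in train if y == 0)
mean1 = statistics.mean(x for x, y in train if y == 1)
model = {"threshold": (mean0 + mean1) / 2}

def predict(model, x):
    return 1 if x >= model["threshold"] else 0

# "Validation": measure accuracy on held-out data.
accuracy = sum(predict(model, x) == y for x, y in valid) / len(valid)

# "Registration": persist the artifact plus metadata only if it clears the bar.
registry = Path(tempfile.mkdtemp())
if accuracy >= 0.9:
    (registry / "model.pkl").write_bytes(pickle.dumps(model))
    (registry / "metadata.json").write_text(
        json.dumps({"version": 1, "accuracy": accuracy}))
```

Gating registration on the validation metric is what makes the downstream release trigger safe: only models that pass validation ever reach the registry.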
Once a model has been registered, this triggers a release pipeline.
The release pipeline runs tasks to deploy the model artifacts into an environment where they can be consumed by applications or users.
A web app running in a container instance exposes the model's predictions via an API endpoint.
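The heart of such a web app is a small scoring module that loads the registered model once at startup and then answers each request. The sketch below follows a common init/run scoring-script pattern; the model values and request shape are made up for illustration:

```python
import json

MODEL = None

def init():
    """Called once when the container starts; load the model artifact here."""
    global MODEL
    # A real deployment would unpickle the registered model file instead.
    MODEL = {"threshold": 0.55}

def run(raw_data: str) -> str:
    """Score a JSON request body and return JSON predictions."""
    inputs = json.loads(raw_data)["data"]
    preds = [1 if x >= MODEL["threshold"] else 0 for x in inputs]
    return json.dumps({"predictions": preds})

init()
response = run(json.dumps({"data": [0.2, 0.9]}))
```

The web framework wrapping this module handles HTTP concerns; keeping scoring logic in plain functions like these makes it easy to unit test in the build pipeline as well.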
The above process is one such implementation and is an example of online prediction. Not all models need to be served online; some are better suited to being consumed as batch predictions.
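In the batch case there is no endpoint at all: the same registered model is loaded offline and run over a whole extract in one pass. A minimal sketch, with the CSV input and threshold model invented for illustration:

```python
import csv
import io

model = {"threshold": 0.55}  # the same registered model, loaded offline

def batch_score(rows, model):
    """Score an iterable of feature values in a single offline pass."""
    return [(x, 1 if x >= model["threshold"] else 0) for x in rows]

# Input arriving as a CSV extract rather than as API calls.
raw = io.StringIO("feature\n0.2\n0.9\n0.6\n")
rows = [float(r["feature"]) for r in csv.DictReader(raw)]
scored = batch_score(rows, model)
```

A scheduled job would typically write `scored` back to a table or file for downstream systems, trading request latency for much higher throughput per run.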
Furthermore, there needs to be a model monitoring process that evaluates the performance of the deployed model on live production data. This evaluation determines whether the model needs to be retrained, which triggers the entire pipeline again.
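One simple monitoring signal is feature drift: compare the live feature distribution against the training baseline and flag retraining when the shift is large relative to the training spread. The data, the `max_shift` tolerance and the mean-shift rule below are illustrative assumptions; production monitors usually track several statistics and the model's own accuracy too:

```python
import statistics

def needs_retraining(baseline, live, max_shift=0.5):
    """Flag retraining when the live feature mean drifts too far from training."""
    shift = abs(statistics.mean(live) - statistics.mean(baseline))
    scale = statistics.stdev(baseline)
    return shift > max_shift * scale

baseline = [0.1, 0.3, 0.4, 0.7, 0.8, 0.9]  # features seen at training time
live = [1.2, 1.4, 1.1, 1.5, 1.3]           # features arriving in production

retrain = needs_retraining(baseline, live)
```

When `needs_retraining` fires, the monitor would kick off the same training pipeline described above, closing the continuous-training loop.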
If you would like to learn more about how we can help create an MLOps practice for your business, or would like to see a demo, please reach out and contact us.