Machine Learning Using Containerization Technology
Technology is not what it was a few years ago. It has changed and continues to evolve.
In today's world, Machine Learning is among the most trending technologies. Almost every big company is trying to automate things using Artificial Intelligence.
But the challenge they face is that the ML models they create somehow never get deployed to production.
The percentage of AI models created but never put into production in large enterprises has been estimated to be as much as 90% or more.
With massive investments in data science teams, platforms, and infrastructure, the number of AI projects is dramatically increasing, and so is the number of missed opportunities. Unfortunately, most projects do not show the value that business leaders expect, and they introduce new risks that need to be managed.
The solution to this problem is MLOps.
But what exactly is MLOps?
Simply put, MLOps is DevOps applied to machine learning. It can be seen as the communication between the data scientists and the production or operations team of an organization or business. Collaborative in nature, MLOps aims to eliminate unnecessary work and waste, automate processes as much as possible, and help produce consistent, richer business or organizational insights with machine learning.
The first thing we need for MLOps is Containerization Technology.
Containerization is a form of operating-system-level virtualization in which applications run in isolated user spaces called containers, all sharing the same operating system (OS) kernel.
There are many containerization tools available, such as Docker and Podman.
Now let's get hands-on and implement the topic practically.
Before that, certain prerequisites need to be taken care of. I will be using Red Hat Enterprise Linux (RHEL) 8 for this practical.
Make sure yum is configured on your system.
Now, to install Docker Community Edition on the system, create a repo file in the location /etc/yum.repos.d/ with the name docker-ce.repo:
touch /etc/yum.repos.d/docker-ce.repo
Note: The repo file can have any name of your choice, but the extension must be ".repo".
Now open the repo file with your favorite editor and add the lines that configure the Docker repository.
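The repo file contents are not reproduced in this extract. As a reference, a typical configuration, mirroring the stable section of Docker's official CentOS repo file, looks like this (verify the URLs against Docker's documentation for your setup):

```ini
[docker-ce-stable]
name=Docker CE Stable - $basearch
baseurl=https://download.docker.com/linux/centos/$releasever/$basearch/stable
enabled=1
gpgcheck=1
gpgkey=https://download.docker.com/linux/centos/gpg
```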
The Docker repository is now configured on your system.
Run the following command to install Docker.
yum install docker-ce --nobest -y
This installs Docker Community Edition along with all the dependencies it requires.
Here I will use a Dockerfile to create the container image. The image will contain the machine learning code and the dataset.
The goal is that when we run a container from this image, the ML code will build the model and then start the app for prediction.
This will become clearer as we go through the practical.
First, create a directory containing all the required files.
My directory structure looks as follows:
[root@localhost ML]# tree
.
├── Dockerfile
├── Salary_Data.csv
├── salaryLR.py
└── salary_predictor.py

0 directories, 4 files
- "Salary_Data.csv" is the dataset file.
- "salaryLR.py" is the file that contains the Linear Regression code.
- "salary_predictor.py" is the app that will be used by the user for predicting the salary.
You can get the salary dataset from the following link.
Now let's come to the Dockerfile.
FROM centos:latest
MAINTAINER RAHUL SIL <rahul.official.150@gmail.com>
RUN dnf install python3 ncurses net-tools -y && \
    pip3 install numpy pandas joblib scikit-learn
WORKDIR /root/ML
COPY * /root/ML/
RUN python3 salaryLR.py
CMD ["python3", "salary_predictor.py"]
Let's build the image from the Dockerfile.
docker build -t linear_regression:v1.0 .
After the build completes, you can confirm that a new linear_regression:v1.0 image has been created and stored in the local repository using the following command.
docker images
Now let's run a container from the image to use the ML model.
docker run -it linear_regression:v1.0
This way, we have successfully run a Machine Learning model inside a Docker container.
I will soon come up with a blog explaining the basics of Docker and the Docker commands and Dockerfile instructions I have used here. Stay tuned for that!
You can get all the code for this task from the below link.
I hope you liked this article.
I would love to hear your views and feedback so that I can improve in future articles. Comment your views below.
You can also check my LinkedIn profile and connect with me.
Follow me on Medium as I will come up with articles on various technologies like Cloud Computing, DevOps, Automation, and their integration.
That's all for now. Thank you!