<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Machine-Learning on Izzuddin Ahsanujunda</title><link>https://iahsanujunda.me/tags/machine-learning/</link><description>Recent content in Machine-Learning on Izzuddin Ahsanujunda</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Thu, 21 Oct 2021 11:57:21 +0900</lastBuildDate><atom:link href="https://iahsanujunda.me/tags/machine-learning/index.xml" rel="self" type="application/rss+xml"/><item><title>MLOps Part 3 - Evaluation and Optimization</title><link>https://iahsanujunda.me/2021/10/21/mlops-part-3-evaluation-and-optimization/</link><pubDate>Thu, 21 Oct 2021 11:57:21 +0900</pubDate><guid>https://iahsanujunda.me/2021/10/21/mlops-part-3-evaluation-and-optimization/</guid><description>&lt;p>Now that we have trained a model and stored it as a reusable artifact, we are ready to evaluate the model on unseen data. As in usual training practice, we are going to pull out the test portion of our split data, run it through the trained model, and record the resulting score. For good measure, we will also re-run the training process with an mlflow-powered hyperparameter sweep and find the hyperparameters that give us the best generalization between training and testing data.&lt;/p></description></item><item><title>MLOps Part 2 - Feature Engineering and Training</title><link>https://iahsanujunda.me/2021/09/14/mlops-part-2-feature-engineering-and-training/</link><pubDate>Tue, 14 Sep 2021 23:00:18 +0900</pubDate><guid>https://iahsanujunda.me/2021/09/14/mlops-part-2-feature-engineering-and-training/</guid><description>&lt;p>Previously, we set up the main skeleton of our training pipeline using mlflow project and implemented a &lt;code>download&lt;/code> step component. 
Now let&amp;rsquo;s continue building the training pipeline.&lt;/p>
&lt;p>&lt;img alt="image" loading="lazy" src="https://drive.google.com/uc?export=view&amp;id=1KVqCU7TUzuln1CPufvR60X9_a4Evqf1r">&lt;/p>
&lt;p>Now we are going to develop the feature engineering and training steps. For the sake of simplicity, we will implement only bare-minimum feature engineering for our model, since our focus here is on MLOps. It is entirely possible to develop a more rigorous feature engineering step that yields much better model performance.&lt;/p></description></item><item><title>MLOps Part 1 - Intro to MLflow Project and Setting-up Our First Component</title><link>https://iahsanujunda.me/2021/08/02/mlops-part-1-intro-to-mlflow-project-and-setting-up-our-first-component/</link><pubDate>Mon, 02 Aug 2021 21:49:38 +0900</pubDate><guid>https://iahsanujunda.me/2021/08/02/mlops-part-1-intro-to-mlflow-project-and-setting-up-our-first-component/</guid><description>&lt;p>MLflow is a very nice tool for handling our MLOps needs. It covers several important MLOps features, namely a tracking server, a model registry, and source code packaging. Here we are going to focus on &lt;a href="https://mlflow.org/docs/latest/projects.html">MLflow Projects&lt;/a>, the source code packaging feature that can help us develop a reproducible machine learning pipeline.&lt;/p>
&lt;p>MLflow Projects enables us to run source code in a consistent way by encapsulating the runtime environment together with the source code, so that we can develop our code on macOS and have it run on Linux with the same reproducible result, if we so need.&lt;/p></description></item></channel></rss>