Download Training Materials
Overview:
This exercise introduces you to the ML Action, which lets you integrate trained ML models to add intelligence to your tasks. In the previous exercise we used the OCR Action to recognize text; the OCR Action uses trained Tesseract character recognition models to recognize and extract text from documents. The ML Action allows you to plug other trained ML models into Automate. Note that, currently, Automate directly supports only models created using the ML.NET framework.
For this exercise we provide you with an ML model trained to recognize five types of flowers: daisies, dandelions, roses, sunflowers, and tulips. After completing this challenge, you will have learned how to:
- Load an ML model
- Set up the inputs to the model
- Run the model and get its prediction
General Instructions:
Note: A trained ML model package consists of an MLModel.zip file, which is the trained model itself, and ModelInput.cs and ModelOutput.cs files, which define the model's input and output types. All three files must be in a single folder that Automate can access.
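For reference, the input and output classes generated for an ML.NET image classification model typically look something like the sketch below. This is illustrative only and is not the exact content of the provided files; the actual ModelInput.cs and ModelOutput.cs in ImageClassification_x64only may differ in property and attribute details.

// Illustrative sketch of ML.NET-style input/output classes for image
// classification; the actual generated files may differ.
using Microsoft.ML.Data;

public class ModelInput
{
    // Full path to the image file to classify.
    public string ImageSource { get; set; }

    // Training label; it can be left empty when requesting a prediction.
    public string Label { get; set; }
}

public class ModelOutput
{
    // Predicted flower type (e.g., "roses"), typically mapped from the
    // pipeline's "PredictedLabel" column.
    [ColumnName("PredictedLabel")]
    public string Prediction { get; set; }

    // Per-class confidence scores for the prediction.
    public float[] Score { get; set; }
}

The Prediction and Score properties are what the “Run model” Activity exposes as the model's outputs.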
- Download and save the provided model folder (i.e., ImageClassification_x64only). You can rename the folder.
- Download an image of a flower and save it in a folder of your choice. This is your test case. The image must show one of the flower types the model has been trained to recognize.
- Create a new Task (e.g., Machine Learning Challenge).
- Create a Variable (e.g., PredictionResult). Note: we won't use this variable to evaluate the result; we will use a JSON object instead.
- Create a Variable (e.g., Prediction_JSON_String) to capture the result in JSON format.
- Load the trained model. Create a “Load model” Activity from the Machine Learning Action. (A conceptual ML.NET sketch of the Load model, Run model, and Close model steps appears after this list.)
  - For “Model name”, type a name for this execution session (e.g., MachineLearningSession1).
  - For “Model folder location”, provide the location of the folder (e.g., C:\Automate\ImageClassification_x64only).
  - Click “Show model inputs & outputs” to see the inputs and outputs for this model.
  - For “Model output”, note that it provides a Prediction and a Score. In this case the Prediction is the type of flower, and the Score represents the confidence level, that is, how confident the model is of its prediction.
- Run the model. Create a new “Run model” Activity from the Machine Learning Action.
  - For “Model to run”, choose the name of the session created in the previous step.
  - For “ImageSource” under Inputs, provide the full path to the test image you downloaded. You can leave the Label field empty.
  - For “Output object name”, provide the name of the variable created for this purpose (e.g., PredictionResult).
  - For “Output object as JSON string”, provide the variable created for this purpose (e.g., Prediction_JSON_String).
- (Optional) Create a JSON Object to see the result as a formatted, more human-readable JSON object. Create a “Create” Activity from the JSON Object Action.
  - For “JSON Object name”, provide a name for the JSON Object (e.g., Prediction_JSON).
  - For “JSON String”, provide the name of the variable created for this purpose (e.g., Prediction_JSON_String). Note: the variable name must be enclosed in percent signs, that is, %Prediction_JSON_String%.
- Close the session. Create a new “Close model” Activity from the Machine Learning Action.
- Save and close the task.
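For readers who want to see what the Load model, Run model, and Close model activities roughly correspond to in code, here is a minimal conceptual sketch in ML.NET (C#), reusing the ModelInput and ModelOutput classes sketched earlier. It assumes the Microsoft.ML and System.Text.Json packages; the test image path is hypothetical, and Automate's activities may do additional work internally.

using System;
using System.Text.Json;
using Microsoft.ML;

public static class Program
{
    public static void Main()
    {
        // "Load model": open the trained model from the model folder.
        var mlContext = new MLContext();
        ITransformer model = mlContext.Model.Load(
            @"C:\Automate\ImageClassification_x64only\MLModel.zip",
            out DataViewSchema inputSchema);

        // "Run model": set ImageSource to the full path of the test image
        // (hypothetical path below), leave Label empty, and predict.
        var engine = mlContext.Model.CreatePredictionEngine<ModelInput, ModelOutput>(model);
        var input = new ModelInput { ImageSource = @"C:\Flowers\test.jpg" };
        ModelOutput result = engine.Predict(input);

        // "Output object as JSON string": roughly what Prediction_JSON_String
        // will contain, e.g. {"Prediction":"roses","Score":[...]} (values hypothetical).
        Console.WriteLine(JsonSerializer.Serialize(result));

        // "Close model": plain ML.NET has no explicit close call; the Activity
        // closes the execution session created by "Load model".
    }
}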
You can run the task from the Management Console or from within the Task Builder. When the task completes, you can evaluate the prediction by inspecting the values of the output variables (e.g., Prediction_JSON or Prediction_JSON_String). Note that the JSON Object is presented in a more human-readable format.