Intelligent Automation

Learn how to bring intelligence to tasks where more critical thinking is needed.

Experiential Learning

Download Training Materials

Overview:

This exercise introduces you to Automate's OCR and OpenDocument Spreadsheet actions. You will create a task that performs OCR on files in a folder, extracting the needed fields from each file based on the vendor’s template, and writes the “Vendor Name,” “Payer,” and “Balance Due” values to a spreadsheet. Either the Excel action or the OpenDocument Spreadsheet action can be used for this exercise. After completing this exercise, you will have learned to:

  • Extract text from images
  • Specify regions from which to extract needed text
  • Manipulate text
  • Use functions to modularize a task

General Instructions:

  • Create variables for any values you need to extract from the images.
    • Task variables can be shared between functions; local variables can only be used in the function in which they are created.
  • When creating the Excel spreadsheet to store the captured values, you can use a list to define the columns and write the list to the first row of your worksheet.
  • Utilize the Loop action to move through all the image files you would like to extract data from.
  • Create a specific function for each image file variation. These functions can then be called from the main function within the task builder using the Task > Call function sub action.
  • Utilize conditional logic actions like IF in the main function of your task to determine which vendor template is being utilized.
  • Your main function can be used for:
    • Setting any needed variables
    • Creating the initial Excel file and saving it after extracting the required data from the image(s).
    • Looping through the folder that contains the image source file(s).
    • Extracting the vendor text to be used in the IF conditional logic.
    • Calling the vendor specific functions to perform the required field extraction.
  • Your vendor specific functions can be used for:
    • Setting the OCR regions to extract the required information based on the vendor template.
    • Writing the extracted data into the appropriate worksheet column and row.
  • Tips:
    • Utilize a task variable (number) that holds the row at which you want to begin writing Excel data, then increment that variable each time you write a row to the worksheet.
    • Use text actions to remove/trim whitespace from the extracted values before writing them to the Excel worksheet.
    • Use best practices when creating any variable (var_variablename) or dataset (ds_datasetname) so you can easily recognize these objects.
    • Use the Automate expression builder to help find and insert these object values.
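The structure described above (a main function that loops over the image files, plus vendor-specific extraction functions) can be sketched in plain Python. This is only an illustration of the control flow, not Automate syntax: the ocr_extract stub, the vendor name, and the field values are all hypothetical.

```python
def ocr_extract(image_path, region):
    # Stand-in for the OCR action: returns the raw text found in a region.
    # The sample data below is made up for illustration.
    sample = {
        ("invoice_a.png", "vendor"): "  Acme Corp ",
        ("invoice_a.png", "payer"): " Jane Doe ",
        ("invoice_a.png", "balance"): " 120.50 ",
    }
    return sample.get((image_path, region), "")

def extract_vendor_a(image_path):
    # Vendor-specific function: OCR regions match this vendor's template.
    # Whitespace is trimmed before the values are written out.
    return {
        "Vendor Name": ocr_extract(image_path, "vendor").strip(),
        "Payer": ocr_extract(image_path, "payer").strip(),
        "Balance Due": ocr_extract(image_path, "balance").strip(),
    }

def main(image_files):
    # First worksheet row holds the column headers, as the instructions suggest.
    rows = [["Vendor Name", "Payer", "Balance Due"]]
    row_index = 2  # task variable tracking the next worksheet row to write
    for image in image_files:
        vendor = ocr_extract(image, "vendor").strip()
        if vendor == "Acme Corp":  # IF conditional on the extracted vendor text
            fields = extract_vendor_a(image)
        else:
            continue  # unknown template; other vendor functions would go here
        rows.append([fields["Vendor Name"], fields["Payer"], fields["Balance Due"]])
        row_index += 1
    return rows
```

In the real task, each append corresponds to an Excel/OpenDocument "write cell" step at the current row index.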

Download Training Materials

Overview:

This exercise introduces you to the ML action. You can integrate trained ML models to add intelligence to your tasks. In the previous exercise we used the OCR action to recognize text; the OCR action uses trained Tesseract character-recognition models to recognize and extract text from documents. The ML action lets you plug other trained ML models into Automate. Note that, currently, Automate directly supports only models created using the ML.NET framework.

For this exercise we provide you with an ML model trained to recognize five types of flowers: daisies, dandelions, roses, sunflowers, and tulips. After completing this challenge, you will have learned to:

  • Load an ML model
  • Set up the inputs to the model
  • Run the model and get its prediction

General Instructions:

Note: a trained ML model consists of an MLModel.zip file (the trained model itself) plus ModelInput.cs and ModelOutput.cs files, which define the input and output types. All three files should be in a single folder that Automate can access.
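Under that convention, the model folder provided for this exercise would look like this (the folder name is the one supplied; you may rename it):

```
ImageClassification_x64only/
├── MLModel.zip       (the trained model)
├── ModelInput.cs     (defines the model's input type)
└── ModelOutput.cs    (defines the model's output type)
```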

  • Download and save the provided Model Folder (i.e., ImageClassification_x64only). You can rename the folder.
  • Download an image of a flower and place it in a folder of your choice. This is your test case. The image must be of one of the flowers the model has been trained to recognize.
  • Create a new Task (e.g., Machine Learning Challenge)
  • Create a Variable (e.g., PredictionResult). Note: we won’t inspect this variable directly – we will use a JSON object instead.
  • Create a Variable (e.g., Prediction_JSON_String) to capture the result in JSON format.
  • Load the trained model. Create a “Load model” activity from the Machine Learning Action.
    • For “Model name” type in a name for this execution session (e.g., MachineLearningSession1)
    • For “Model folder location” provide the location of the folder (e.g., C:\Automate\ImageClassification_x64only)
    • Click “Show model inputs & outputs” to see the inputs and outputs for this model.
      • For the “Model output” note that it provides a Prediction and Score. In this case the Prediction would be the type of flower. The Score would represent the confidence level, that is, how confident the model is of its prediction.
  • Create an Activity to run the model. Create a new “Run model” Activity from the Machine Learning Action.
    • For “Model to run” choose the name of the session created in the previous step.
    • For “ImageSource” under Inputs, provide the full path to the image that was downloaded. You can keep the Label field empty.
    • For “Output object name” provide the name of the variable created for this purpose (i.e., PredictionResult)
    • For “Output object as JSON string” provide the variable created for this purpose (e.g., Prediction_JSON_String)
  • (Optional) Create a JSON object to see result as a formatted JSON object that is more human readable. Create a “Create” Activity from the JSON Object Action.
    • For “JSON Object name” provide the name for the JSON Object (e.g., Prediction_JSON)
    • For “JSON string,” provide the name of the variable created for this purpose (e.g., Prediction_JSON_String). (Note: the variable name must be wrapped in percent signs, that is, %Prediction_JSON_String%.)
  • Close the session. Create a new “Close model” Activity from the Machine Learning Action.
  • Save and close the task.

You can run the task from the Management Console or from within the Task Builder. When the task completes, you can evaluate the prediction by inspecting the values of the output variables (e.g., Prediction_JSON or Prediction_JSON_String). Note that the JSON object is presented in a more human-readable format.
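To make the shape of the result concrete, here is a short Python sketch of parsing such a JSON string outside of Automate. The key names (Prediction, Score) come from the model output described above; the flower name and score values are made up for illustration, and the exact Score layout depends on the model's ModelOutput.cs.

```python
import json

# Hypothetical contents of Prediction_JSON_String; real values come from
# the ML.NET model's output (keys per the ModelOutput described above).
prediction_json_string = (
    '{"Prediction": "sunflowers",'
    ' "Score": [0.01, 0.02, 0.03, 0.90, 0.04]}'
)

result = json.loads(prediction_json_string)
flower = result["Prediction"]        # the predicted flower type
confidence = max(result["Score"])    # highest score = the model's confidence
print(f"{flower} ({confidence:.0%} confidence)")
```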