How to use the Amazon Machine Learning API

Machine learning, AI, deep learning – all of these technologies are popular and the hype cycle is intense.  When one digs into the topic of machine learning, it turns out to be a lot of math, data wrangling, and writing code, which means one can quickly become overwhelmed by the details and lose sight of the practicality of machine learning.

Take a step back from all the complexity and ask a fundamental question: why should we even look at machine learning?  Simple: we want our applications to get smarter and make better decisions.  Ignore the complexities of machine learning and think about the end result instead.  Machine learning is a powerful tool that can be used to our advantage when building applications.

Now consider how we would implement machine learning in our applications.  We could write lots of code within our app to pull historical data, use this data to create and train models, evaluate and tweak our models, and finally make predictions.  Or we could offload this work to public cloud services like Amazon Machine Learning.  All it takes is some code within our application to make calls to the Amazon Machine Learning APIs.

Machine learning via APIs?

I’ve been an infrastructure guy for most of my career.  While I have worked for software companies and have worked closely with software developers – I’m not a developer and have not made my living writing code.

That said, machine learning requires writing code.  One wouldn't go out and buy machine learning from a vendor; rather, one writes code to build and use machine learning models.  Furthermore, machine learning models don't provide much value by themselves but are valuable when built into an application.  So coding is important for using machine learning.

Think along the same lines regarding consumption of public cloud.  Cloud isn't just about cost, it's about applications consuming infrastructure and services with code.  Anything that can be created, configured, or consumed in AWS can (and should?) be done with code.  So coding is also important for consuming public cloud.

Fortunately, writing code to use Amazon Machine Learning isn't very hard at all.  Amazon publishes a machine learning developer guide and API reference documentation that show how to write code to add machine learning to your application.  That, combined with some machine learning examples on GitHub, makes the process easy to follow.

Overview of an application using Amazon Machine Learning

What is the high-level architecture of an application using Amazon Machine Learning?  Let's use an example as we walk through the architecture.  Think of a hypothetical customer loyalty application that sends rebates to customers we think may stop using our theoretical company's services.  We could incorporate Amazon Machine Learning into the app to flag at-risk customers for rebates.

Raw data – first we need data.  This could be in a database, data warehouse, or even flat files in a filesystem.  In our example, these are the historical records of all customers that have used our service and have either remained loyal to our company or stopped using our services (the historical data is labeled with the outcomes).

Data processed and uploaded to S3 – our historical data needs to be extracted from our original source (above), cleaned up, converted into a CSV file, and then uploaded to an AWS S3 bucket (a minimal upload sketch follows these steps).

Training datasource – our application needs to invoke Amazon Machine Learning via an API and create an ML datasource out of our CSV file.

Model – our application needs to create and train a model via an AWS API using the datasource we created above.

Model evaluation – our app will need to measure how well our model performs against some of the data reserved from the datasource.  Our evaluation score gives us an idea of how well our model makes predictions on new data.

Prediction Datasource –  if we want to make predictions with machine learning, we need new customer data that matches our training data schema but does not contain the customer outcome.    So we need to collect this new data into a separate CSV file and upload it to S3.

Batch predictions – now for the fun part: we can call our model via AWS APIs to predict the target value of the new data we uploaded to S3.  The output will be written to a flat file in an S3 bucket we specify, which our app can read in order to take appropriate action (do nothing or send a rebate).
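
To make the upload step concrete, here is a minimal sketch using Boto3.  The local file, bucket, and key names are placeholders I've invented for the loyalty example:

import boto3

# Upload the cleaned-up historical customer data (CSV) to S3 so Amazon Machine
# Learning can read it. File, bucket, and key names below are placeholders.
s3 = boto3.client("s3")
s3.upload_file("customer_history.csv", "my-ml-bucket", "training/customer_history.csv")

# The training datasource created later will point at
# s3://my-ml-bucket/training/customer_history.csv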

How to get started?

First, pick a programming language.  Python is popular both for machine learning and for interacting with AWS, so we'll assume Python is our language.  Next, read through the AWS Python SDK Quickstart guide to get a feel for how to use AWS with Python.  The AWS SDK for Python is Boto3, which needs to be installed and configured with your AWS account credentials.
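
As a quick sanity check of that setup, the following sketch creates the service client.  The region here is an assumption on my part; the service was only offered in a couple of regions, us-east-1 among them.

# Boto3 is installed with "pip install boto3" and picks up credentials from
# ~/.aws/credentials (written by "aws configure") or from environment variables.
import boto3

# Client for the Amazon Machine Learning service; region is an assumption.
ml = boto3.client("machinelearning", region_name="us-east-1")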

Lastly, read through the machine learning section of the Boto3 SDK guide to get a feel for the logical flow of how to consume Amazon Machine Learning via APIs.  Within this guide are the Python methods below, which we call using the recommended syntax given (a combined sketch of these calls follows the list).

create_data_source_from_s3 – how we create a datasource from a CSV file

create_ml_model – how we create an ML model from a datasource

create_evaluation – how we create an evaluation score for the performance of our model

create_batch_prediction –  how we predict target values for new datasources
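
Pulling those calls together, here is a minimal, hedged sketch of the training half of the flow.  The client, IDs, names, and S3 URL are placeholders for our loyalty example; the DataRearrangement split is one way to reserve 30% of the rows for evaluation; and the schema and recipe files are covered later in the post.

import json
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")  # region is an assumption

TRAINING_DATA_S3_URL = "s3://my-ml-bucket/training/customer_history.csv"  # placeholder
schema = open("customer_history.csv.schema").read()   # JSON schema string (see later)
recipe = open("recipe.json").read()                   # JSON recipe string (see later)

# 1. Training datasource: use the first 70% of the CSV rows for training.
ml.create_data_source_from_s3(
    DataSourceId="ds-training-001",
    DataSourceName="customer history (training)",
    DataSpec={
        "DataLocationS3": TRAINING_DATA_S3_URL,
        "DataSchema": schema,
        "DataRearrangement": json.dumps({"splitting": {"percentBegin": 0, "percentEnd": 70}}),
    },
    ComputeStatistics=True,
)

# 2. Evaluation datasource: reserve the remaining 30% of rows.
ml.create_data_source_from_s3(
    DataSourceId="ds-eval-001",
    DataSourceName="customer history (evaluation)",
    DataSpec={
        "DataLocationS3": TRAINING_DATA_S3_URL,
        "DataSchema": schema,
        "DataRearrangement": json.dumps({"splitting": {"percentBegin": 70, "percentEnd": 100}}),
    },
    ComputeStatistics=True,
)

# 3. Train a binary classification model (loyal vs. at-risk) from the training datasource.
ml.create_ml_model(
    MLModelId="ml-loyalty-001",
    MLModelName="customer loyalty model",
    MLModelType="BINARY",
    TrainingDataSourceId="ds-training-001",
    Recipe=recipe,
)

# 4. Evaluate the model against the held-out datasource.
ml.create_evaluation(
    EvaluationId="ev-loyalty-001",
    EvaluationName="customer loyalty evaluation",
    MLModelId="ml-loyalty-001",
    EvaluationDataSourceId="ds-eval-001",
)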

Getting started with an example

At this point we have a good idea of what we need to do to incorporate Amazon Machine Learning into a Python app.  Those already proficient in Python could start writing functions to call each of these Boto3 ML methods, following the architecture we described above.

What about an example for those of us who aren't as proficient in Python?  Fortunately there are plenty of examples online of how to write apps that call AWS APIs.  Even better, AWS released some in-depth ML examples on GitHub that walk through examples in several languages, including Python.

Specifically, let's look at the targeted marketing example for Python.  First, check out the README.md.  We see that we need our AWS credentials set up locally where we will run the code so that we can authenticate to our AWS account.  We'll also need Python installed locally (as well as Boto3) so that we can run Python executables (.py files).

Next take a look at the Python code to build an ML model.  A few things jump out in the program:

—  The code imports a few libraries (boto3, base64, json, os, sys).

—  The code then defines the S3 bucket and CSV file that we will use to train our model (“TRAINING_DATA_S3_URL”).

—  Functions are written for each subtask that needs to be performed for our machine learning application.  We have functions to create a datasource (def create_data_sources), build a model (def build_model), train a model (def create_model), and perform an evaluation of the model (def create_evaluation).

—  The main body of the code (line 123 – ‘if __name__‘) ties everything together, using the CSV file, a user-defined schema file, and a user-defined recipe file to call each function, and results in an ML model with a performance measurement.
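
One detail worth calling out is that these resources are created asynchronously, so an application needs to wait for them to finish before using them.  A minimal sketch of polling for completion and then reading the evaluation score, using the placeholder IDs from the earlier sketch, might look like this (the BinaryAUC metric name applies to binary models):

import time
import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")  # region is an assumption

def wait_until_completed(describe, **kwargs):
    # Poll the given describe call until the resource leaves its in-progress states.
    while True:
        status = describe(**kwargs)["Status"]
        if status in ("COMPLETED", "FAILED"):
            return status
        time.sleep(30)

# Wait for the model and its evaluation (placeholder IDs from the earlier sketch).
wait_until_completed(ml.get_ml_model, MLModelId="ml-loyalty-001")
wait_until_completed(ml.get_evaluation, EvaluationId="ev-loyalty-001")

# For a BINARY model the evaluation reports an AUC score between 0 and 1.
evaluation = ml.get_evaluation(EvaluationId="ev-loyalty-001")
print(evaluation["PerformanceMetrics"]["Properties"].get("BinaryAUC"))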

Schema and recipe

Along with the Python code we find a .schema file and recipe.json file.  We explored these in previous posts when using Amazon Machine Learning through the AWS console wizard.  When using machine learning with code through the AWS API, the schema and recipe must be created and formatted manually and passed as arguments when calling the Boto3 ML functions.

Data pulled out of a database probably already has a defined schema – these are just the rows and columns that describe the data fields and define the types of data inside them.  The ML schema needs to be defined in the correct format with our attributes and our target value along with some other information.
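
For illustration, here is roughly the shape of that schema file for our hypothetical loyalty data, written out from Python.  The attribute names are invented, and the authoritative field list is in the Amazon ML data schema documentation:

import json

# Hypothetical schema for our customer loyalty CSV; attribute names are invented
# for illustration, and "churned" is the labeled outcome we want to predict.
schema = {
    "version": "1.0",
    "dataFormat": "CSV",
    "dataFileContainsHeader": True,
    "targetAttributeName": "churned",
    "attributes": [
        {"attributeName": "customer_id", "attributeType": "CATEGORICAL"},
        {"attributeName": "months_active", "attributeType": "NUMERIC"},
        {"attributeName": "monthly_spend", "attributeType": "NUMERIC"},
        {"attributeName": "support_notes", "attributeType": "TEXT"},
        {"attributeName": "churned", "attributeType": "BINARY"},
    ],
}

with open("customer_history.csv.schema", "w") as f:
    json.dump(schema, f, indent=2)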

A recipe is a bit more complicated; the format for creating a recipe file gives the details.  The recipe simply defines how our model treats the input and output data defined in our schema.  Our example recipe file only defines outputs using some predefined groups (ALL_TEXT, ALL_NUMERIC, ALL_CATEGORICAL, ALL_BINARY) and does a few transformations on the data (quantile bin the numbers and convert text to lower case).
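
A recipe along those lines might look like the sketch below.  The group names and transformation functions come from the Amazon ML recipe reference, while the bin count is an arbitrary choice for illustration:

import json

# Minimal recipe: no custom groups or intermediate assignments, just outputs
# that quantile-bin every numeric attribute and lowercase every text attribute.
recipe = {
    "groups": {},
    "assignments": {},
    "outputs": [
        "ALL_BINARY",
        "ALL_CATEGORICAL",
        "quantile_bin(ALL_NUMERIC,200)",
        "lowercase(ALL_TEXT)",
    ],
}

with open("recipe.json", "w") as f:
    json.dump(recipe, f, indent=2)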

Making predictions in our application 

Assuming everything worked when we ran the ‘build_model.py’ executable, we would have a model available to make predictions on unlabeled data.  The ‘use_model.py’ executable does just that.  We just need to get this new unlabeled data into a CSV file and into an S3 bucket (“UNSCORED_DATA_S3_URL”) and let the ‘use_model.py’ code create a new datasource to use for predictions.  The main section of the ‘use_model.py’ code ties everything together and calls the functions with the input/output parameters required.
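
A hedged sketch of that prediction half, reusing the placeholder names and the model from the earlier sketches, might look like this:

import boto3

ml = boto3.client("machinelearning", region_name="us-east-1")  # region is an assumption

UNSCORED_DATA_S3_URL = "s3://my-ml-bucket/unscored/new_customers.csv"  # placeholder

# Datasource for the new, unlabeled customer records (same schema as training,
# minus the outcome column). The schema file name is a placeholder.
ml.create_data_source_from_s3(
    DataSourceId="ds-unscored-001",
    DataSourceName="new customers (unscored)",
    DataSpec={
        "DataLocationS3": UNSCORED_DATA_S3_URL,
        "DataSchema": open("new_customers.csv.schema").read(),
    },
    ComputeStatistics=False,
)

# Ask the trained model to score every record; results land as a flat file under OutputUri.
ml.create_batch_prediction(
    BatchPredictionId="bp-loyalty-001",
    BatchPredictionName="loyalty batch prediction",
    MLModelId="ml-loyalty-001",
    BatchPredictionDataSourceId="ds-unscored-001",
    OutputUri="s3://my-ml-bucket/predictions/",
)

# Once the batch prediction completes, the application reads the result file from
# s3://my-ml-bucket/predictions/ and decides which customers get a rebate.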

Think about it: we have all the code syntax, formatting, and steps necessary to add predictive ability to our application simply by calling AWS APIs.  No complex rules in our application, no need to write an entire machine learning framework into our application, just offload the work to Amazon Machine Learning via APIs.

Thanks for reading!