ユニファ開発者ブログ

A blog by members of the Product Development division of ユニファ株式会社 (Unifa Inc.).

AWS and Machine Learning

By Matthew Millar, R&D Scientist at ユニファ

Purpose

This blog will discuss best practices for developing and deploying Machine Learning models on AWS.
It will cover the AWS services that can be used with Machine Learning models, what each of them does, and when and where you should use them.

AWS Data Stores for Machine Learning:

S3 Buckets:

S3 is the standard object storage for AWS. You can think of it as your computer's disk: it stores files (objects) organized into buckets and folder-like prefixes. There are 5 different types of S3 storage classes; a short upload sketch follows the list.

  • S3 General Purpose (GP) Standard: the standard storage class, easy to access in real time.
  • S3 Standard Infrequent Access (IA): ideal for objects that you do not access very often. It is just as fast and available as GP storage but keeps rarely used files separate, which makes it well suited for long-term storage and backup files.
  • S3 One Zone Infrequent Access: for long-term storage of infrequently accessed data. Unlike IA and GP, the data is stored in only a single Availability Zone, so if that Availability Zone is destroyed the data will be lost.
  • S3 Intelligent-Tiering: a smart system that automatically moves data to the optimal storage tier based on access patterns, without performance impact or operational burden. It moves data between the frequent and infrequent access tiers automatically, which can save money and reduce management time and cost.
  • S3 Glacier comes in two flavors: S3 Glacier and S3 Glacier Deep Archive. Glacier is very secure and durable and can be less expensive than a local solution (buying your own storage system). It also makes data geographically available far more easily than if you set up the access points yourself. Data can be transitioned into Glacier using the S3 Lifecycle, which is the preferred way rather than migrating the data manually. Deep Archive is the cheapest of all storage classes. It is ideal for data that is accessed only once or twice a year and for data that must meet strict regulations, such as FinTech, Healthcare, or Government data. Deep Archive targets retention periods of 7 to 10 years or more, to help satisfy regulatory requirements such as MiFID II.
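As a quick illustration, here is a minimal boto3 sketch (the bucket and key names are hypothetical placeholders) of uploading an object directly into a chosen storage class:

```python
import boto3

s3 = boto3.client("s3")

# Upload an archived training file straight into the Standard-IA storage class.
# "my-ml-bucket" and the key are hypothetical placeholders.
with open("training_2020.csv", "rb") as f:
    s3.put_object(
        Bucket="my-ml-bucket",
        Key="datasets/archive/training_2020.csv",
        Body=f,
        # Other values include "ONEZONE_IA", "INTELLIGENT_TIERING",
        # "GLACIER", and "DEEP_ARCHIVE".
        StorageClass="STANDARD_IA",
    )
```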

S3 Lifecycle:

S3 Lifecycles are very important to establish and use; otherwise you will have to manage your data manually, which is nearly impossible with Big Data and other complex datasets.
Using rules to move data from one storage class to another is important because it can not only save money but also keep your data well organized and safe. For example: after an object is created, move it from GP storage to IA; after 6 months, move it from IA to Glacier for storage.
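As a rough sketch (bucket name and prefix are hypothetical placeholders), the example rule above could be set up with boto3 like this:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical lifecycle rule: 30 days after creation move objects under
# "datasets/" to Standard-IA, and after 180 days move them on to Glacier.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-ml-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-training-data",
                "Filter": {"Prefix": "datasets/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
            }
        ]
    },
)
```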

S3 Encryption:

Data encryption is very important for security, especially if you are working with Personally Identifiable Information (PII). There are 4 types of encryption for S3; a small SSE-KMS upload sketch follows the list.

  1. SSE-S3 – AWS-handled keys for encryption.
  2. SSE-KMS – AWS Key Management Service (KMS) manages the keys for encryption + additional security + an audit trail of KMS usage.
  3. SSE-C – Server-side encryption with self-managed (customer-provided) keys.
  4. Client-side Encryption – Encrypt the data before it is sent to AWS.
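A minimal boto3 sketch of option 2 (SSE-KMS); the bucket name and KMS key alias are hypothetical placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload a model artifact encrypted server-side with a KMS-managed key.
with open("model.tar.gz", "rb") as f:
    s3.put_object(
        Bucket="my-ml-bucket",
        Key="models/model.tar.gz",
        Body=f,
        ServerSideEncryption="aws:kms",
        SSEKMSKeyId="alias/my-ml-key",  # hypothetical KMS key alias
    )
```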

S3 Access Control:

Managing who and what can access your data is extremely important. There are two main ways to manage access to an S3 bucket; a minimal bucket policy sketch follows the list.

  • User-Based:
    • IAM policies – Control which API calls can be made and what a user can do.
  • Resource-Based:
    • Bucket Policy – Bucket-wide rules for what can be done, which data can be accessed, and how.
    • Object Access Control List (ACL) – fine-grained access control for individual objects.
    • Bucket Access Control List – ACL for bucket-wide access control. Less common than bucket policies and object ACLs.
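As a resource-based example, here is a rough sketch of a bucket policy applied with boto3 (the account ID, role, and bucket names are hypothetical placeholders):

```python
import json

import boto3

s3 = boto3.client("s3")

# Hypothetical policy: allow a single IAM role read-only access to objects
# under the "datasets/" prefix of the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowMLTrainingRoleRead",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/ml-training-role"},
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-ml-bucket/datasets/*",
        }
    ],
}

s3.put_bucket_policy(Bucket="my-ml-bucket", Policy=json.dumps(policy))
```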

Redshift:

Redshift's primary use is analytics, not Machine Learning. Redshift is AWS's main data warehousing service and primarily uses SQL analytics to analyze the data. Redshift is built for Online Analytical Processing (OLAP). Data can be moved from S3 into Redshift for storage.
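One common pattern is loading files from S3 into a Redshift table with a COPY statement. A hedged sketch using the Redshift Data API (cluster, database, table, bucket, and role names are all hypothetical placeholders):

```python
import boto3

redshift_data = boto3.client("redshift-data")

# Issue a COPY statement that loads a CSV file from S3 into an existing table.
redshift_data.execute_statement(
    ClusterIdentifier="ml-analytics-cluster",
    Database="analytics",
    DbUser="analyst",
    Sql=(
        "COPY training_events "
        "FROM 's3://my-ml-bucket/datasets/training_events.csv' "
        "IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-s3-read' "
        "FORMAT AS CSV;"
    ),
)
```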

RDS/Aurora:

RDS/Aurora is another data storage service. It is relational storage that uses SQL queries to find data. This service is built for Online Transaction Processing (OLTP) and must be provisioned before use.

DynamoDB:

DynamoDB is a NoSQL data storage solution that is serverless and scales as needed, so there is no need for provisioning as with RDS or Redshift. You do, however, have to provision the read and write capacity. This is a very good place to store saved Machine Learning models.
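For instance, a trained model could be registered by storing its S3 location and metadata as an item. A rough boto3 sketch (table and attribute names are hypothetical):

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Store a pointer to a trained model artifact in S3 together with its metadata.
dynamodb.put_item(
    TableName="ml-models",
    Item={
        "model_name": {"S": "churn-classifier"},
        "version": {"S": "2020-06-01"},
        "s3_uri": {"S": "s3://my-ml-bucket/models/churn-classifier/model.tar.gz"},
        "validation_auc": {"N": "0.91"},
    },
)
```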

Streams:

AWS Kinesis:

Kinesis is ideal for streaming data in real time to support real-time analytics and insight, helping decision making and improving response times. It is a process/replay alternative to Apache Kafka. It is ideal for use with logs, IoT, clickstreams, Big Data, and real-time data applications. Kinesis goes hand in hand with Spark and other stream processing frameworks.
There are 4 types of data streams for Kinesis:

  1. Kinesis Streams – Low-latency streaming for consuming data at scale.
  2. Kinesis Analytics – Real-time analytics on streams using SQL.
  3. Kinesis Firehose – Delivers stream data to storage services such as S3, Redshift, Elasticsearch, Splunk, etc…
  4. Kinesis Video Streams – For real-time video analysis.

The basic flow of Kinesis is to send

input data -> Kinesis Stream -> Kinesis Analytics -> Kinesis Firehose -> Storage

Kinesis Streams are made up of shards, which control the amount of input that can go through each stream. These shards must be provisioned beforehand, which requires capacity planning and knowledge of the input.
Data retention is 24 hours by default but can be extended up to 7 days by configuring each stream. This gives the ability to reprocess/replay the data that is in the stream without reloading it. Multiple applications/analysis systems can also use the same data from the same stream/shard. However, the data that is in the stream is immutable and cannot be removed manually. The ingestion limit is up to 1 MB of data per second per shard.
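A minimal producer sketch with boto3 (stream name and event fields are hypothetical); the partition key determines which shard a record lands on:

```python
import json

import boto3

kinesis = boto3.client("kinesis")

# Push a single click event into a Kinesis stream.
event = {"user_id": "u-123", "action": "page_view", "ts": "2020-06-01T12:00:00Z"}

kinesis.put_record(
    StreamName="clickstream",
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user_id"],  # records with the same key go to the same shard
)
```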

Kinesis Firehose:

This stream is a fully managed service that does not need configuration or admin intervention/setup. Firehose is not real-time but near real-time processing, as the latency is about 60 seconds for a non-full batch. The primary purpose of Firehose is data ingestion into S3, Redshift, Elasticsearch, and Splunk. Firehose auto-scales to meet the needs of data transmission. It can do some limited data conversions for S3 using AWS Lambda, for example converting between CSV, JSON, and Parquet (CSV<->JSON<->Parquet). Firehose also allows data to be compressed with ZIP, GZIP, or Snappy, which is very good for long-term storage.
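A rough sketch of sending a record into an existing delivery stream that is configured to deliver into S3 (the delivery stream name and record fields are hypothetical):

```python
import json

import boto3

firehose = boto3.client("firehose")

# Send one record; Firehose buffers records and delivers them in batches.
record = {"sensor_id": "cam-01", "temperature": 24.5}

firehose.put_record(
    DeliveryStreamName="iot-to-s3",
    Record={"Data": (json.dumps(record) + "\n").encode("utf-8")},
)
```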

Kinesis Analytics:

This stream is for real-time analytics of data. Analytics has two types of input, Kinesis Stream and Firehose.

  • Use Cases:
    • Streaming ETL: You can use Analytics to transform columns of streaming data on the fly.
    • Continuous Metric Generation: Live updates on data streams.
    • Responsive Analytics: Set up alerts in real-time.

Analytics streams are serverless and scale automatically to meet traffic flow. You will have to set up IAM permissions to stream to certain sources and destinations like S3. You will also need to use Flink or SQL for all computations. You can also use Lambdas for preprocessing and schema discovery.
There are two types of built-in Machine Learning algorithms in Analytics: Random Cut Forest (RCF) and Hotspot analysis. RCF uses a SQL function for anomaly detection on numerical column data. This model gets updated as new data comes into the stream, which is a big benefit as it keeps the model accurate as your data changes over time. The Hotspot algorithm is used for finding relatively dense regions in the data, very similar to KNN or other clustering algorithms. Unlike RCF, this model is not dynamically trained and must be retrained for each dataset.

Kinesis Video Stream:

This stream is intended for video analysis and processing. The inputs, or producers, for this stream come from security cameras, body cams, IoT cameras, and other video capturing devices. There is a restriction of 1 producer per stream (1 to 1). Data can last from 1 hour (the default) up to 10 years after configuration. Video Streams have video playback capability as well. The consumers are limited compared to other streams; there are 3 types of consumers for this stream.

  1. Build your own custom consumer (Pytorch models, Tensorflow models)
  2. AWS SageMaker
  3. Amazon Rekognition Video.

With these 3 approaches, you can apply Machine learning or Deep Learning models to video streams.
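For option 1 (a custom consumer), here is a hedged boto3 sketch of reading raw media from a hypothetical video stream, which could then be decoded and fed into a PyTorch or TensorFlow model:

```python
import boto3

kvs = boto3.client("kinesisvideo")

# Look up the media endpoint for the (hypothetical) stream.
endpoint = kvs.get_data_endpoint(
    StreamName="office-camera-01",
    APIName="GET_MEDIA",
)["DataEndpoint"]

media = boto3.client("kinesis-video-media", endpoint_url=endpoint)

# Start reading fragments from the live position of the stream.
stream = media.get_media(
    StreamName="office-camera-01",
    StartSelector={"StartSelectorType": "NOW"},
)

# stream["Payload"] is a streaming MKV payload; in practice each chunk would be
# parsed and decoded into frames before being passed to a model.
first_chunk = stream["Payload"].read(1024)
```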
These are the types of streams that you can use in Kinesis.

Processors:

Glue Data Catalog:

Glue is often overlooked, as its main purpose is to be a metadata repository for all your table data. Glue can generate the schema for your dataset automatically by looking over the data. Glue Crawlers go through your datasets and build out the Data Catalog; they can help infer schemas and partitions in the data. This works with JSON, CSV, and Parquet data. Glue can be used with all the storage systems above (S3, Redshift, RDS, and Glacier). You can set up a schedule to run Glue or run it manually to create/update the Glue catalog. Glue will have to be given IAM permissions to access each storage service.
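A rough sketch of defining and running a crawler over an S3 prefix with boto3 (role, database, crawler, and path names are hypothetical placeholders):

```python
import boto3

glue = boto3.client("glue")

# Define a crawler that infers the schema of data under an S3 prefix and
# registers the resulting tables in the Glue Data Catalog.
glue.create_crawler(
    Name="training-data-crawler",
    Role="arn:aws:iam::123456789012:role/glue-crawler-role",
    DatabaseName="ml_catalog",
    Targets={"S3Targets": [{"Path": "s3://my-ml-bucket/datasets/"}]},
    Schedule="cron(0 3 * * ? *)",  # optional: run every day at 03:00 UTC
)

# Run it once immediately instead of waiting for the schedule.
glue.start_crawler(Name="training-data-crawler")
```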

Glue ETL (Extract Transform Load):

This is one of the most important aspects of Glue: one of its main uses is to preprocess and manage your data on AWS. Glue can transform, clean, and even enrich data before sending it on for analysis. ETL code can be written in either Python or Scala, and for Big Data, Spark/PySpark can be used. S3, JDBC, RDS, Redshift, and the Glue Data Catalog can be the targets for ETL.
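A minimal PySpark sketch of a Glue ETL job, assuming a catalog database and table like those a crawler might have created (all names and the output path are hypothetical):

```python
# Runs inside a Glue job, where the awsglue libraries are available.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())

# Read a table registered in the Glue Data Catalog.
frame = glue_context.create_dynamic_frame.from_catalog(
    database="ml_catalog",
    table_name="training_events",
)

# Simple cleaning step: drop an unneeded column and remove rows without a label.
cleaned = frame.drop_fields(["debug_info"]).filter(lambda row: row["label"] is not None)

# Write the cleaned data back to S3 as Parquet, ready for training.
glue_context.write_dynamic_frame.from_options(
    frame=cleaned,
    connection_type="s3",
    connection_options={"path": "s3://my-ml-bucket/cleaned/"},
    format="parquet",
)
```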

AWS Data Pipeline:

AWS Data Pipeline is exactly what it sounds like: its main goal is to aid in the movement of data from source to destination throughout all parts of an AWS architecture. Typical destinations are S3, RDS, DynamoDB, Redshift, and EMR. It can also manage task dependencies, and it can handle local or on-premises data and push that into AWS systems. The pipeline orchestrates services and manages everything.

AWS Batch:

AWS Batch allows batch jobs to run on AWS as Docker images. This allows for the dynamic provisioning of instances (EC2 or Spot Instances), automatically adjusting to the optimal quantity and type based on the volume and requirements of the input/task. It is serverless, so no managing of clusters is needed. CloudWatch Events can automatically run batch jobs as needed, and batch jobs can be managed using AWS Step Functions.
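A hedged sketch of submitting a containerized training job with boto3, assuming a job queue and job definition already exist (all names are hypothetical):

```python
import boto3

batch = boto3.client("batch")

# Submit a training container to an existing job queue and job definition.
batch.submit_job(
    jobName="nightly-model-training",
    jobQueue="ml-training-queue",
    jobDefinition="train-model:3",  # hypothetical job definition name:revision
    containerOverrides={
        "command": ["python", "train.py", "--epochs", "20"],
    },
)
```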

Database Migration Service (DMS):

This allows for quick and easy migrations from a local database to AWS. It is also resilient and self-healing, which makes it a far more secure method for data transfer. The source database remains available for use during the migration. It supports both homogeneous and heterogeneous database migrations. DMS needs an EC2 instance (the replication instance) started before the transfer can happen; this instance is responsible for moving the source database to the target database.

Step Function:

Step Functions are used to orchestrate steps and processes in workflows. Step Functions has advanced error handling and sophisticated retry mechanisms, and it is simple to audit workflows and their history. A step function can be put on hold for an arbitrary amount of time until a function/task is complete, but the maximum execution time of a step function is 1 year. A step function consists of the steps needed to achieve the desired outcome. For example, training a Machine Learning model would look like this:

Start -> Generate Training dataset -> Hyperparameter training (XGBoost) -> Extract Model Path -> Hyperparameter testing -> Extract Model Name -> Batch Transfer -> End

Step Functions are ideal for flow design and for ensuring that one step happens after another step in a certain order.
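To make that concrete, here is a hedged sketch of a tiny state machine in the Amazon States Language that mirrors the first part of the flow above, created and started with boto3 (the Lambda ARNs and role ARN are hypothetical placeholders):

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# A two-state workflow: generate the training dataset, then train the model.
definition = {
    "StartAt": "GenerateTrainingDataset",
    "States": {
        "GenerateTrainingDataset": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-northeast-1:123456789012:function:generate-dataset",
            "Next": "TrainModel",
        },
        "TrainModel": {
            "Type": "Task",
            "Resource": "arn:aws:lambda:ap-northeast-1:123456789012:function:train-model",
            # Step Functions' retry mechanism, mentioned above.
            "Retry": [{"ErrorEquals": ["States.TaskFailed"], "MaxAttempts": 2}],
            "End": True,
        },
    },
}

response = sfn.create_state_machine(
    name="ml-training-workflow",
    definition=json.dumps(definition),
    roleArn="arn:aws:iam::123456789012:role/stepfunctions-execution-role",
)

sfn.start_execution(stateMachineArn=response["stateMachineArn"])
```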