Deep Learning on AWS

Code, Data, and Model Versioning

Version Code with Git

The training, preprocessing, and inference scripts are the smallest components of the overall deep learning stack. In addition to these scripts, you can also script your deep learning pipeline for model retraining and model deployment using services such as AWS Step Functions. All of these scripts can be version controlled using any Git-based repository, or using AWS CodeCommit, a fully managed source control service that hosts secure Git-based repositories.

Version Data in Amazon S3

In deep learning, the copy of the data that was used to train or retrain a model is important for explaining and troubleshooting bias and drift in the model. You can use Amazon S3 to version your training data: create a new Amazon S3 object, or a new version of an existing object, for every new or updated training dataset. You can use the object naming convention or object tags to track training dataset versions. Optionally, you can record each dataset's S3 object location and metadata in an Amazon DynamoDB table, and index the table to make it searchable for data discovery.

Version Model in Amazon S3

Training a model is costly and time consuming, so it is important to persist each model in order to compare the performance and accuracy of new variants. You can treat a trained model as a special kind of data file and persist it in Amazon S3 with version control enabled. Amazon S3 lets you tag and version data files of any type.

Automation of the Deep Learning Process for Retraining and Redeployment

After you demonstrate a functional prototype, it is time to put the model in production and create an endpoint that serves predictions from the trained model. During prototyping, all the steps to build, train, and deploy are performed manually in a Jupyter notebook. Deployment in production, however, requires precision, consistency, and reliability; manual interventions in the production pipeline often lead to human errors that can cause downtime. You can address human errors by automating all the steps of the build, train, and deploy process.
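To make these practices concrete, the sketches that follow use boto3; every repository, bucket, table, and ARN name in them is a hypothetical placeholder, not something prescribed by this paper. First, code versioning: creating a CodeCommit repository to host the pipeline scripts.

import boto3

# Create a Git repository in CodeCommit for the pipeline scripts.
# The repository name is an assumed placeholder.
codecommit = boto3.client("codecommit")
response = codecommit.create_repository(
    repositoryName="dl-pipeline-scripts",
    repositoryDescription="Training, preprocessing, inference, and pipeline scripts",
)

# Clone URL to use with ordinary git (clone, commit, push).
print(response["repositoryMetadata"]["cloneUrlHttp"])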
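Next, a minimal sketch of the data-versioning approach: enable versioning on the bucket, upload a dataset revision with a version tag, and record its location and metadata in a DynamoDB table so datasets are searchable. Storing the S3 VersionId lets you later retrain against exactly the bytes a given model saw.

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")

BUCKET = "my-training-data"           # hypothetical bucket
KEY = "datasets/images/train.tar.gz"  # hypothetical object key

# Every subsequent upload of the same key now creates a new object version.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)

# Upload the new dataset revision and tag it with its version label.
with open("train.tar.gz", "rb") as body:
    response = s3.put_object(Bucket=BUCKET, Key=KEY, Body=body,
                             Tagging="dataset-version=v2")
version_id = response["VersionId"]  # returned because versioning is enabled

# Record the location and metadata for data discovery.
table = dynamodb.Table("dataset-catalog")  # hypothetical table, partition key "dataset_id"
table.put_item(Item={
    "dataset_id": "images",
    "dataset_version": "v2",
    "s3_uri": f"s3://{BUCKET}/{KEY}",
    "s3_version_id": version_id,
})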
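The same pattern covers model versioning: treat the trained model artifact as a data file in a versioned bucket, tagged with both the model version and the dataset version it was trained on.

import boto3

s3 = boto3.client("s3")

# Persist the trained model; the bucket is assumed to have versioning enabled,
# so every variant is kept and can be compared later.
with open("model.tar.gz", "rb") as body:
    response = s3.put_object(
        Bucket="my-model-artifacts",                 # hypothetical bucket
        Key="models/image-classifier/model.tar.gz",
        Body=body,
        Tagging="model-version=v7&dataset-version=v2",
    )
print("Stored model version:", response["VersionId"])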
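Finally, automation: assuming you have defined an AWS Step Functions state machine whose states run the build, train, and deploy steps, a retrain-and-redeploy run can be started without manual intervention. The state machine ARN and input fields below are placeholders.

import json
import boto3

sfn = boto3.client("stepfunctions")

# Start one execution of a (hypothetical) retrain-and-redeploy state machine,
# passing the dataset version so the run is reproducible.
response = sfn.start_execution(
    stateMachineArn="arn:aws:states:eu-west-1:123456789012:stateMachine:retrain-redeploy",
    input=json.dumps({"dataset_version": "v2", "model_name": "image-classifier"}),
)
print("Execution started:", response["executionArn"])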
