Deep Learning on AWS


…a GPU core to an inference endpoint allows you to meet the demands of your application without overprovisioning capacity.

Fourth, implementing tools to audit the performance of a model over time is the last step in moving a model into production. The auditing solution must be able to accurately observe an objective evaluation metric over time, detect failures, and provide a feedback loop should the model's accuracy deteriorate. Note that we do not cover auditing solutions in this guide.

Lastly, we discuss the model deployment and the model and data version control approaches available to you on AWS in more detail in the next two sections, Code, Data and Model Versioning and Patterns for Deep Learning at Scale.

Step 6. Scale and Manage the Production Environment

Building and deploying effective machine learning systems is an iterative process, and the speed at which changes can be made to the system directly affects how your architecture scales, while also influencing the productivity of your team. Three of the most important considerations for achieving scale and manageability in your deep learning implementations are modularity, tiered offerings, and support for a multitude of frameworks. This decoupled, framework-agnostic approach provides your deep learning engineers and scientists with the tools they want, while also catering to the specific use cases and skill sets in your organization.

AWS provides a broad portfolio of services that covers all the common machine learning (ML) workflows while also giving you the flexibility to support less common and custom use cases. Before discussing the breadth and depth of the AWS machine learning stack, let us look at the most common challenges encountered by machine learning practitioners today.

Challenges with Deep Learning Projects

Software Management

Deep learning projects depend on machine learning frameworks.
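Because a project is tied to specific framework versions, teams commonly pin exact versions and fail fast when the installed environment drifts from the pinned manifest. The following is a minimal sketch using only the Python standard library; the package names and version strings in the manifest are illustrative assumptions, not recommendations:

```python
# Sketch: detect drift between installed packages and a pinned manifest.
# Real projects would pin their deep learning framework here as well
# (for example torch or tensorflow); these entries are illustrative.
from importlib import metadata

PINNED = {
    "numpy": "1.26.4",
}

def check_pins(pinned):
    """Return {package: (wanted, installed)} for every pin that does not match.

    An uninstalled package is reported with an installed version of None.
    """
    drift = {}
    for pkg, want in pinned.items():
        try:
            have = metadata.version(pkg)
        except metadata.PackageNotFoundError:
            have = None
        if have != want:
            drift[pkg] = (want, have)
    return drift
```

Running such a check at container or job start-up turns a silent framework upgrade into an explicit, early failure rather than a subtle accuracy or performance regression.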
Many deep learning frameworks are open source and supported by communities that actively contribute to the framework code. Changes are frequent and sometimes breaking. In some cases, you need to customize a framework to meet your immediate performance needs by writing custom operators.
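As a sketch of what writing a custom operator involves, independent of any particular framework: the author supplies a forward computation and a hand-derived gradient, which the framework's operator-registration API (for example, a custom autograd function) then wraps. The fused "swish" activation below is an illustrative example of the mathematics such an operator must provide, not code from any framework:

```python
import math

# A custom operator must define a forward pass and a matching backward pass.
# Illustrated here for swish(x) = x * sigmoid(x), written framework-free.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def swish_forward(x):
    return x * sigmoid(x)

def swish_backward(x, grad_out):
    # d/dx [x * s(x)] = s(x) + x * s(x) * (1 - s(x)),  where s = sigmoid
    s = sigmoid(x)
    return grad_out * (s + x * s * (1.0 - s))
```

Frameworks typically verify such hand-written gradients against a finite-difference approximation; fusing the forward and backward math into a single operator like this is one common motivation for customizing a framework.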
