Deep Learning on AWS

Amazon Web Services

workloads with a single query. For this setup, you build two Docker images: one for Amazon SageMaker training, and another that orchestrates training across multiple Regions using the Amazon SageMaker APIs. The orchestrator image, run by AWS Batch, contains the logic to spawn multiple child jobs in different AWS Regions with different parameters, while using the same job configuration in all four Regions.

Figure 14: Reference architecture to orchestrate Amazon SageMaker jobs in multiple Regions

Use Amazon S3 and Amazon DynamoDB to Build a Feature Store for Batch and Real-Time Inference and Training

Many organizations that want to become data-centric companies, or already are, are either in the process of building a data lake solution or already have one in place to democratize their data for analytics and AI/ML. Data lake creation is a critical step in the machine learning process because your entire organization's data is managed and shared from a single repository. However, the question that arises is how deep learning engineers and scientists, who are not data engineers, can easily acquire new features to solve new problems. How do deep
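The multi-Region orchestration pattern described above can be sketched in Python with boto3. This is a minimal illustration, not the whitepaper's implementation: the Region list, image URI, bucket names, role ARN, and function names are all hypothetical assumptions. The orchestrator clones one shared job configuration, specializes only the Region-bound fields, and submits one child training job per Region.

```python
# Hedged sketch of an AWS Batch orchestrator container that launches the
# same Amazon SageMaker training job configuration in several Regions.
# All resource names below (image URI, bucket, role ARN) are placeholders.
import copy

# Hypothetical set of four target Regions, matching the "four Regions" setup.
REGIONS = ["us-east-1", "us-west-2", "eu-west-1", "ap-southeast-2"]

# Shared job configuration; Region-bound fields hold {region} placeholders.
BASE_JOB = {
    "TrainingJobName": "dl-training",
    "AlgorithmSpecification": {
        "TrainingImage": "<account>.dkr.ecr.{region}.amazonaws.com/dl-train:latest",
        "TrainingInputMode": "File",
    },
    "RoleArn": "arn:aws:iam::<account>:role/SageMakerTrainingRole",
    "OutputDataConfig": {"S3OutputPath": "s3://my-bucket-{region}/output/"},
    "ResourceConfig": {
        "InstanceType": "ml.p3.2xlarge",
        "InstanceCount": 1,
        "VolumeSizeInGB": 50,
    },
    "StoppingCondition": {"MaxRuntimeInSeconds": 86400},
}


def job_request_for_region(region: str) -> dict:
    """Clone the shared config and specialize the Region-bound fields."""
    job = copy.deepcopy(BASE_JOB)
    job["TrainingJobName"] = f"{BASE_JOB['TrainingJobName']}-{region}"
    job["AlgorithmSpecification"]["TrainingImage"] = BASE_JOB[
        "AlgorithmSpecification"
    ]["TrainingImage"].format(region=region)
    job["OutputDataConfig"]["S3OutputPath"] = BASE_JOB["OutputDataConfig"][
        "S3OutputPath"
    ].format(region=region)
    return job


def launch_all(dry_run: bool = True) -> list:
    """Spawn one child SageMaker training job per Region from the orchestrator."""
    launched = []
    for region in REGIONS:
        request = job_request_for_region(region)
        if not dry_run:
            import boto3  # one SageMaker client per target Region

            boto3.client("sagemaker", region_name=region).create_training_job(
                **request
            )
        launched.append(request["TrainingJobName"])
    return launched
```

With `dry_run=True` (the default) the sketch only builds and names the per-Region requests, which makes the config logic easy to verify before granting the orchestrator container real SageMaker permissions.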
