Deep Learning on AWS

endpoint can grow on demand to petabytes without disrupting applications, expanding and shrinking automatically as you add and remove files.

Figure 5: Multiple EC2 instances connected to a file system

Compute

Amazon EC2 P3 Instances

The computationally intensive part of a neural network consists of many matrix and vector operations. Training can be made faster by performing these operations simultaneously rather than one after another. This is why GPUs, which are better at handling many simple calculations in parallel, are used instead of CPUs for training neural networks. Adding more layers to a neural network (up to a point) and training on more and more data has been shown to improve the performance of deep learning models. A GPU has thousands of simple cores and can run thousands of concurrent threads, which has dramatically reduced the time required to train complex neural networks. Access to high-performance, cost-effective GPU infrastructure is therefore the primary requirement for a project that uses neural network architectures to build complex models. GPU-based Amazon EC2 P3 instances offer the best price/performance compared to other GPU alternatives in the cloud today. Amazon EC2 P3 instances, the next generation of EC2 compute-optimized GPU instances, are powered by up to eight of the latest-generation NVIDIA Tesla V100 GPUs and are ideal for deep learning applications.
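The parallelism described above can be illustrated with a minimal sketch in Python using NumPy. The example below (names and shapes are illustrative, not from the whitepaper) computes a dense layer's forward pass two ways: an element-by-element loop, and a single vectorized matrix-vector product. Every output element is an independent multiply-accumulate, which is exactly the kind of work a GPU's thousands of cores execute concurrently.

```python
import numpy as np

# Illustrative dense-layer forward pass: y = W @ x + b.
# Each output element y[i] is an independent dot product, so all of
# them can be computed in parallel -- the workload GPUs accelerate.
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # weights: 4 outputs, 3 inputs
x = rng.standard_normal(3)       # input vector
b = np.zeros(4)                  # bias

# Sequential version: one output element at a time (loop-style).
y_loop = np.array([W[i] @ x + b[i] for i in range(W.shape[0])])

# Vectorized version: the whole product expressed at once; a GPU (or
# an optimized BLAS on CPU) can run the multiply-adds concurrently.
y_vec = W @ x + b

assert np.allclose(y_loop, y_vec)
```

Both versions produce the same result; the difference is that the vectorized form exposes all of the independent operations to the hardware at once, which is why deep learning frameworks express training as large batched matrix operations.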
