Deep learning influences key aspects of core sectors such as IT, finance, and science. Problems arise, however, when it comes to securing computational resources for your network: training a network to solve a real-world task demands a powerful GPU and plenty of time.
Dynamic neural networks help save training time on your networks. They also reduce the amount of computational resources required. In this course, you'll learn to combine various techniques into a common framework. Then you will use dynamic graph computations to reduce the time spent training a network.
By the end, you'll be ready to use the power of PyTorch to easily train neural networks of varying complexities.
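As a small taste of the dynamic graph computations mentioned above, here is a minimal sketch (not taken from the course material) of PyTorch's define-by-run behavior: the autograd graph is rebuilt on every forward pass, so its shape can depend on run-time values.

```python
# Minimal sketch of PyTorch's define-by-run (dynamic) graph.
# The number of multiplications is chosen at run time, so the
# autograd graph grows differently on each forward pass.
import random

import torch

x = torch.ones(3, requires_grad=True)
w = torch.full((3,), 2.0, requires_grad=True)

steps = random.randint(1, 4)  # run-time decision shapes the graph
y = x
for _ in range(steps):
    y = y * w  # each iteration adds a node to the graph

loss = y.sum()
loss.backward()  # gradients flow through however many nodes were built

# After `steps` multiplications by w (all 2.0), the gradient of x
# is 2.0 ** steps elementwise.
print(steps, x.grad)
```

Because the graph is rebuilt each pass, control flow like this loop needs no special graph-construction API; ordinary Python suffices.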
This course uses Python 3.6, PyTorch 0.4, and CUDA Toolkit 7.5. While these are not the latest versions available, they provide relevant and informative content for legacy users of Python.
About the Author
Anastasia Yanina is a Senior Data Scientist with around 5 years' experience. She is an expert in Deep Learning and Natural Language Processing and constantly develops her skills. She is passionate about human-to-machine interaction and believes that deep neural network architectures may make it possible to bridge that gap.