PyTorch Framework


A PyTorch Framework is a Python tensor-based deep learning framework.



References

2023

2018a

  • "PyTorch developer ecosystem expands, 1.0 stable release now available."
    • … Researchers and engineers can now readily take full advantage of the open source deep learning framework’s new features, including a hybrid front end for transitioning seamlessly between eager and graph execution modes (sketched after this excerpt), revamped distributed training, a pure C++ front end for high-performance research, and deep integration with cloud platforms. ...

      ... PyTorch has been applied to use cases from image recognition to machine translation. As a result, we’ve seen a wide variety of projects from the developer community that extend and support development. A few of these projects include:

      • Horovod — a distributed training framework that makes it easy for developers to take a single-GPU program and quickly train it on multiple GPUs.
      • PyTorch Geometry – a geometric computer vision library for PyTorch that provides a set of routines and differentiable modules.
      • TensorBoardX – a module for logging PyTorch models to TensorBoard, allowing developers to use the visualization tool for model training.
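
As a rough illustration of the hybrid front end mentioned in the excerpt above, the following minimal sketch traces an eager-mode module into a graph (TorchScript) representation with torch.jit.trace; the module, layer sizes, and file name are made up for the example.

    import torch
    import torch.nn as nn

    # A small eager-mode module (hypothetical example model).
    class TwoLayerNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.fc1 = nn.Linear(16, 32)
            self.fc2 = nn.Linear(32, 4)

        def forward(self, x):
            return self.fc2(torch.relu(self.fc1(x)))

    model = TwoLayerNet()
    example_input = torch.randn(8, 16)

    # Trace the eager module into a graph representation that can be
    # saved and later run without the Python front end.
    traced = torch.jit.trace(model, example_input)
    traced.save("two_layer_net.pt")

    # The traced module is called exactly like the original eager one.
    out = traced(torch.randn(8, 16))
    print(out.shape)  # torch.Size([8, 4])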

2018b

2018c

Software Creator Software license Open source Platform Written in Interface OpenMP support OpenCL support CUDA support Automatic differentiation[1] Has pretrained models Recurrent nets Convolutional nets RBM/DBNs Parallel execution (multi node)
PyTorch ... BSD license Yes Linux, macOS Python, C, CUDA Python Yes Yes Yes Yes Yes Yes

2018d

Package 	Description
torch 	a Tensor library like NumPy, with strong GPU support
torch.autograd 	a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.nn 	a neural networks library deeply integrated with autograd designed for maximum flexibility
torch.multiprocessing 	Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training.
torch.utils 	DataLoader, Trainer and other utility functions for convenience (see the DataLoader sketch after this table)
torch.legacy(.nn/.optim) 	legacy code that has been ported over from torch for backward compatibility reasons
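
A minimal sketch of the torch.utils entry above, assuming a hypothetical in-memory dataset of random features and labels; the batch size and worker count are arbitrary choices.

    import torch
    from torch.utils.data import TensorDataset, DataLoader

    # Hypothetical in-memory dataset: 1000 samples of 16 features with class labels.
    features = torch.randn(1000, 16)
    labels = torch.randint(0, 4, (1000,))
    dataset = TensorDataset(features, labels)

    # DataLoader handles batching, shuffling, and (with num_workers > 0)
    # multi-process loading, which relies on shared-memory torch Tensors.
    loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

    for batch_features, batch_labels in loader:
        print(batch_features.shape, batch_labels.shape)  # [32, 16] and [32]
        break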

Usually one uses PyTorch either as:

  • a replacement for NumPy to use the power of GPUs (sketched below);
  • a deep learning research platform that provides maximum flexibility and speed.
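
A minimal sketch of the first use case, assuming an arbitrary NumPy array; the GPU path only runs if CUDA is available.

    import numpy as np
    import torch

    # Start from an ordinary NumPy array ...
    a = np.random.rand(1024, 1024).astype(np.float32)

    # ... convert it to a torch Tensor (on CPU this shares memory with the array),
    t = torch.from_numpy(a)

    # move it to the GPU if one is available,
    device = "cuda" if torch.cuda.is_available() else "cpu"
    t = t.to(device)

    # do NumPy-like math on the chosen device,
    result = (t @ t).sum()

    # and bring the result back to NumPy when needed.
    print(result.cpu().numpy())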

If you use NumPy, then you have used Tensors (a.k.a. ndarray).

PyTorch provides Tensors that can live either on the CPU or the GPU, accelerating computation by a huge amount.

We provide a wide variety of tensor routines to accelerate and fit your scientific computation needs, such as slicing, indexing, math operations, linear algebra, and reductions. And they are fast!
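
A few of the routine families named above (slicing, indexing, math operations, linear algebra, reductions), shown on a small tensor whose shape and values are arbitrary:

    import torch

    x = torch.arange(12, dtype=torch.float32).reshape(3, 4)

    # Slicing and indexing work much as in NumPy.
    first_row = x[0]
    last_column = x[:, -1]
    selected = x[x > 5]          # boolean-mask indexing

    # Elementwise math operations.
    y = torch.sin(x) + 2.0 * x

    # Linear algebra: matrix product of x with its transpose.
    gram = x @ x.T

    # Reductions along one dimension or over the whole tensor.
    col_means = x.mean(dim=0)
    total = x.sum()

    print(gram.shape, col_means, total)
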
PyTorch has a unique way of building neural networks: using and replaying a tape recorder. Most frameworks such as TensorFlow, Theano, Caffe and CNTK have a static view of the world. One has to build a neural network, and reuse the same structure again and again. Changing the way the network behaves means that one has to start from scratch.
With PyTorch, we use a technique called reverse-mode auto-differentiation, which allows you to change the way your network behaves arbitrarily with zero lag or overhead. Our inspiration comes from several research papers on this topic, as well as current and past work such as torch-autograd[8], autograd[9], Chainer[10], etc.
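
The define-by-run behavior described above can be seen in a toy sketch: the graph is recorded anew on every forward pass, so Python control flow that depends on the data is captured exactly as it executes (the weights, shapes, and branching rule here are invented for illustration).

    import torch

    w = torch.randn(3, requires_grad=True)

    def forward(x):
        # Ordinary Python control flow; a different graph is recorded on each
        # call, depending on the data that actually flows through.
        h = w * x
        if h.sum() > 0:
            h = h.relu()
        else:
            h = -h
        return h.sum()

    loss = forward(torch.randn(3))

    # Reverse-mode auto-differentiation replays the recorded operations (the
    # "tape") backwards to compute gradients with respect to w.
    loss.backward()
    print(w.grad)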

2017

Package 	Description
torch 	a Tensor library like NumPy, with strong GPU support
torch.autograd 	a tape-based automatic differentiation library that supports all differentiable Tensor operations in torch
torch.nn 	a neural networks library deeply integrated with autograd designed for maximum flexibility
torch.optim 	an optimization package to be used with torch.nn with standard optimization methods such as SGD, RMSProp, LBFGS, Adam, etc.
torch.multiprocessing 	Python multiprocessing, but with magical memory sharing of torch Tensors across processes. Useful for data loading and Hogwild training.
torch.utils 	DataLoader, Trainer and other utility functions for convenience
torch.legacy(.nn/.optim) 	legacy code that has been ported over from torch for backward compatibility reasons
    • Usually one uses PyTorch either as:
      • a replacement for NumPy to use the power of GPUs;
      • a deep learning research platform that provides maximum flexibility and speed.
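
Tying together torch.nn and torch.optim from the package table above, a minimal training-loop sketch on made-up regression data; the architecture, loss, learning rate, and epoch count are arbitrary.

    import torch
    import torch.nn as nn
    import torch.optim as optim

    # Toy regression data; sizes are arbitrary.
    inputs = torch.randn(64, 10)
    targets = torch.randn(64, 1)

    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
    criterion = nn.MSELoss()
    optimizer = optim.SGD(model.parameters(), lr=0.01)

    for epoch in range(5):
        optimizer.zero_grad()                      # clear old gradients
        loss = criterion(model(inputs), targets)   # forward pass + loss
        loss.backward()                            # autograd computes gradients
        optimizer.step()                           # torch.optim updates parameters
        print(epoch, loss.item())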