An Overview of Deep Learning Frameworks


What is Deep Learning?

Artificial Intelligence (AI) is an exceptionally broad term for a multitude of methods and practices to simulate or mimic human behavior. This includes machine learning (ML), natural language processing (NLP), language synthesis, computer vision, robotics, sensor analysis, and more.

Machine learning (ML) is a popular form of AI designed to enable computer systems to learn from previous experience. These systems can then use what they learn to improve their behavior for a given task. A common example is image recognition: the system is trained on an enormous number of pictures until it learns to recognize a cat, a tree, or a car. As the system runs, it continues to learn new variations of known images.

Under the ML umbrella is deep learning, which uses neural networks (NNs) at its core. Inspired by biological neural networks, artificial NNs mimic the behavior of the human brain. This allows computer programs to recognize patterns and solve common problems in the fields of AI, ML, and deep learning.

A neural network is a series of algorithms that endeavor to recognize underlying relationships in a set of data. This is accomplished through a process that mimics the way the human brain operates. NNs rely on training data to learn and improve their accuracy over time.

NNs can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria. In short, NNs learn as they process data, and become smarter and more accurate over time. NNs accomplish this without any reprogramming or human intervention. Among the best-known and most widely used NNs are the search algorithms behind Google, Bing, and Yahoo.
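To make the idea of training concrete, here is a minimal sketch of that learning loop written in plain Python with NumPy. The toy data, the single-neuron model, and the learning rate are all illustrative choices for this example, not part of any particular framework:

```python
import numpy as np

# Toy data: the label is 1 when the two inputs sum to more than 1.
X = np.array([[0.0, 0.2], [0.9, 0.8], [0.1, 0.4], [0.7, 0.6]])
y = np.array([0.0, 1.0, 0.0, 1.0])

rng = np.random.default_rng(0)
w = rng.normal(size=2)   # weights, started at random values
b = 0.0                  # bias
lr = 0.5                 # learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Repeatedly nudge the weights to reduce the prediction error.
for _ in range(1000):
    pred = sigmoid(X @ w + b)          # forward pass
    error = pred - y                   # how far off the predictions are
    w -= lr * (X.T @ error) / len(y)   # gradient step for the weights
    b -= lr * error.mean()             # gradient step for the bias

print(np.round(sigmoid(X @ w + b), 2))  # predictions move toward [0, 1, 0, 1]
```

A deep learning framework automates exactly this kind of bookkeeping (layers, gradients, and updates) at a much larger scale.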

Deep learning frameworks provide a set of base functionality so you don’t have to reinvent the wheel. There are many to choose from, and using one saves a lot of programming time. Below is a rundown of six of the best-known and most widely used.

An Introduction to Deep Learning Frameworks

While there are many frameworks, only a few see widespread use. The tech industry has largely consolidated around a handful of deep learning frameworks of choice.

TensorFlow

TensorFlow is the most popular and widely used of the deep learning frameworks. It was developed by Google to work with its AI processors, called Tensor Processing Units (TPUs). It supports a multitude of languages, but the most popular is Python. The framework also runs models on mobile platforms (iOS and Android devices). It is free, reliable, and continuously supported by Google.
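As a rough illustration of the style of API TensorFlow exposes, the sketch below fits a single weight to a toy linear relationship using tf.GradientTape for automatic differentiation. The data and variable names are invented for this example:

```python
import tensorflow as tf

# Toy data following y = 3x, used only for illustration.
x = tf.constant([[1.0], [2.0], [3.0], [4.0]])
y = tf.constant([[3.0], [6.0], [9.0], [12.0]])

w = tf.Variable(0.0)  # the single parameter the model learns

optimizer = tf.keras.optimizers.SGD(learning_rate=0.01)

for step in range(200):
    with tf.GradientTape() as tape:
        pred = w * x                                # forward pass
        loss = tf.reduce_mean(tf.square(pred - y))  # mean squared error
    grads = tape.gradient(loss, [w])                # automatic differentiation
    optimizer.apply_gradients(zip(grads, [w]))

print(float(w))  # approaches 3.0
```

In practice most TensorFlow users work at a higher level through the bundled Keras API, covered below.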

Pros

With Google behind it, TensorFlow has the advantage of seamless performance, quick updates, and frequent new releases with new features. The libraries can be deployed on a wide range of hardware, from mobile devices to computers with complex setups. TensorFlow is fairly easy to use, with broad tutorials to get started quickly. And it’s not just Google behind it: community support for TensorFlow is very good, so if you run into a problem, you can quickly get help.

Cons

Setting up data ingestion for TensorFlow used to be painful. Linux remains its best-supported platform, and the framework heavily favors Python over Java.

PyTorch

PyTorch is an open-source deep learning framework developed by Facebook and based on the now-abandoned Torch library. Unlike Torch, PyTorch uses the popular Python language and is known for its flexibility, ease of use, and simplicity. In 2018 it was merged with Caffe2, a framework known for its excellent image processing capabilities.
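A minimal sketch of the PyTorch workflow might look like the following; the layer sizes, toy data, and hyperparameters are arbitrary choices made for illustration:

```python
import torch
from torch import nn

# A tiny two-layer network; the sizes are illustrative only.
model = nn.Sequential(
    nn.Linear(2, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

x = torch.rand(64, 2)                            # 64 random two-feature samples
y = (x.sum(dim=1, keepdim=True) > 1.0).float()   # label: does the row sum exceed 1?

loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # forward pass and loss
    loss.backward()               # autograd computes the gradients
    optimizer.step()              # update the weights

print(loss.item())  # the loss shrinks as the network learns the rule
```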

Pros

Since PyTorch is built on the Python language, anyone with a basic knowledge of Python can adopt this framework and start building deep learning models. It has a strong user community and a rich library of tools.

Cons

PyTorch does not have the same level of support as TensorFlow or Keras, and while it is written in Python, it can be tough to master. Also, it officially supports only NVIDIA GPUs, not AMD Radeon cards.

Keras

Keras is built on top of TensorFlow. It allows you to build NNs that are compatible with TensorFlow. Keras was designed to be a lightweight, easy-to-use framework suited to both beginners and experts.
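The sketch below shows the kind of high-level model definition Keras is known for, using the keras package bundled with TensorFlow. The layer sizes and the dataset shape (28x28 grayscale images, ten classes) are illustrative assumptions:

```python
from tensorflow import keras

# A minimal classifier for 28x28 grayscale images (e.g. MNIST-style digits).
model = keras.Sequential([
    keras.Input(shape=(28, 28)),
    keras.layers.Flatten(),                        # turn each image into a vector
    keras.layers.Dense(128, activation="relu"),    # hidden layer
    keras.layers.Dense(10, activation="softmax"),  # one probability per class
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.summary()
# With real data loaded into x_train and y_train:
# model.fit(x_train, y_train, epochs=5)
```

Compare this with the lower-level TensorFlow sketch above: Keras hides the gradient and update steps behind compile() and fit().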

Pros

Keras provides a low barrier to entry for NN programming and for preparing data before processing.

Cons

Since Keras is a wrapper library, you cannot modify or optimize the underlying libraries for specific needs. It also uses a lot of resources and does not come with any pre-configured models. It only supports the Python language.

Deeplearning4j

Deeplearning4j is a library written and optimized for the Java Virtual Machine (JVM) and all of its supported languages, including Kotlin, Clojure, and Scala. It also supports Python. The framework is released under the Apache license, and distributed parallel versions work with Apache Hadoop and Spark.

Pros

This framework is especially efficient for image recognition tasks that use multiple GPUs, since it is implemented in Java. Deeplearning4j is particularly good for natural language processing, text mining, fraud detection, image recognition, and part-of-speech tagging.

Cons

Java is nowhere near as popular for machine learning as Python. As such, the framework itself does not have the growing community and codebase that TensorFlow or Keras has. For these reasons, it might be more difficult to deploy a Java-based deep learning project.

ONNX

ONNX (Open Neural Network Exchange) was developed as an open-source deep learning ecosystem by Microsoft and Facebook. It is intended to let developers switch easily between platforms. ONNX models are natively supported in the Microsoft Cognitive Toolkit (CNTK), Caffe2, MXNet, and PyTorch.

Pros

ONNX is popular for its flexibility and interoperability between multiple frameworks. Developers can export a pre-trained model to an ONNX file, which can then be bundled with their app. Because of its broad native support, ONNX prevents framework lock-in by providing easier access to hardware optimization and enabling model sharing.
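For example, a small PyTorch model might be exported and then run with ONNX Runtime roughly as follows. The model, the file name model.onnx, and the input name are placeholders chosen for this sketch:

```python
import numpy as np
import torch
from torch import nn
import onnxruntime

# Any trained PyTorch model could be exported; this tiny one stands in for it.
model = nn.Linear(4, 2)
model.eval()

dummy_input = torch.randn(1, 4)
torch.onnx.export(
    model,
    dummy_input,
    "model.onnx",                 # example output path
    input_names=["input"],
    output_names=["output"],
)

# Load the exported file with ONNX Runtime, independently of PyTorch.
session = onnxruntime.InferenceSession(
    "model.onnx", providers=["CPUExecutionProvider"]
)
result = session.run(None, {"input": np.random.rand(1, 4).astype(np.float32)})
print(result[0].shape)  # (1, 2)
```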

Cons

Conversion can be challenging with more complicated models. Depending on the architecture and implementation, you may need to adapt the code to support ONNX. Its runtime also lacks the widespread support of other, more popular libraries.

Apache MXNet

MXNet is an Apache Software Foundation project with the backing and support of Amazon Web Services. MXNet is notable for offering near-linear scaling efficiency, making full use of the available hardware.

Pros

Apache MXNet enables the user to code in a variety of programming languages, including Python, C++, R, Julia, and Scala. The back-end is written in C++ and CUDA, allowing it to scale across GPUs. It is notable for supporting Long Short-Term Memory (LSTM) networks. This deep learning framework is known for its capabilities in imaging, handwriting and speech recognition, forecasting, and NLP.
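A minimal sketch using MXNet's Gluon API in Python might look like the following; the network sizes, toy data, and learning rate are illustrative assumptions:

```python
import mxnet as mx
from mxnet import autograd, gluon, nd

# A small dense network built with the Gluon API; sizes are illustrative.
net = gluon.nn.Sequential()
net.add(gluon.nn.Dense(16, activation="relu"))
net.add(gluon.nn.Dense(1))
net.initialize(mx.init.Xavier())

x = nd.random.uniform(shape=(32, 4))    # 32 random four-feature samples
y = nd.sum(x, axis=1, keepdims=True)    # toy target: the row sum

loss_fn = gluon.loss.L2Loss()
trainer = gluon.Trainer(net.collect_params(), "sgd", {"learning_rate": 0.1})

for epoch in range(100):
    with autograd.record():
        loss = loss_fn(net(x), y)   # forward pass and loss
    loss.backward()                 # compute gradients
    trainer.step(batch_size=32)     # update the parameters

print(loss.mean().asscalar())  # the loss shrinks as training proceeds
```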

Cons

MXNet has a much smaller open source community compared with TensorFlow and PyTorch. Improvements, bug fixes, and new features come more slowly.
