This post is also available as a Python notebook.
From September 2017 to October 2018, I worked on TensorFlow 2.0 alongside many engineers. In this post, I’ll explain what TensorFlow 2.0 is and how it differs from TensorFlow 1.x. Towards the end, I’ll briefly compare TensorFlow 2.0 to PyTorch 1.0. This post represents my own views; it does not represent the views of Google, my former employer.
TensorFlow (TF) 2.0 is a significant, backwards-incompatible update to TF’s execution model and API.
Execution model. In TF 2.0, all operations execute imperatively by default. Graphs and the graph runtime are both abstracted away by a just-in-time tracer that translates Python functions executing TF operations into executable graph functions. This means that in TF 2.0, there is no `Session` and no global graph state. The tracer is exposed as a Python decorator, `tf.function`. This decorator is for advanced users; using it is completely optional.
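To make this concrete, here is a minimal sketch of both modes: an operation that runs imperatively and returns a concrete value immediately, and the same computation optionally traced into a graph function with `tf.function` (the function name `matmul_twice` is mine, for illustration).

```python
import tensorflow as tf

# Imperative by default: this executes immediately, no Session required.
x = tf.constant([[1.0, 2.0], [3.0, 4.0]])
y = tf.matmul(x, x)
print(y.numpy())  # a concrete NumPy array, available right away

# Optional: trace the same computation into a graph function.
@tf.function
def matmul_twice(a):
    return tf.matmul(a, a)

z = matmul_twice(x)  # first call traces the Python function; later calls reuse the graph
```

Both `y` and `z` hold the same values; the only difference is that the second computation ran as a traced graph function.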
API. TF 2.0 makes `tf.keras` the high-level API for constructing and training neural networks. But you don't have to use Keras if you don't want to: you can instead use lower-level operations and automatic differentiation directly.
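As a sketch of the lower-level route, here is a single gradient-descent step on a linear model written with raw operations and `tf.GradientTape`, no Keras involved (the variable names and the learning rate of 0.1 are arbitrary choices of mine):

```python
import tensorflow as tf

# Model parameters as plain TF variables.
w = tf.Variable(tf.random.normal([3, 1]))
b = tf.Variable(tf.zeros([1]))

# Toy data.
x = tf.random.normal([8, 3])
y_true = tf.random.normal([8, 1])

# Record the forward pass so gradients can be computed.
with tf.GradientTape() as tape:
    y_pred = tf.matmul(x, w) + b
    loss = tf.reduce_mean(tf.square(y_pred - y_true))

# Differentiate the loss with respect to the parameters and apply one SGD step.
grad_w, grad_b = tape.gradient(loss, [w, b])
w.assign_sub(0.1 * grad_w)
b.assign_sub(0.1 * grad_b)
```

The same model could instead be a one-line `tf.keras` layer; the point is that both levels of the API are available.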
To follow along with the code examples in this post, install the TF 2.0 alpha.
```
pip install tensorflow==2.0.0-alpha0
```

```python
import tensorflow as tf
tf.__version__
```
- Why TF 2.0?
- Imperative execution
- Automatic differentiation
- Graph functions
- Comparison to other Python libraries
- Domain-specific languages for machine learning