More info here.
- TensorFlow Tutorial 1 – From the basics to slightly more interesting applications of TensorFlow
- TensorFlow Tutorial 2 – Introduction to deep learning based on Google’s TensorFlow framework. These tutorials are direct ports of Newmu’s Theano tutorials.
- TensorFlow Examples – TensorFlow tutorials and code examples for beginners
- Sungjoon’s TensorFlow-101 – TensorFlow tutorials written in Python with Jupyter Notebook
- Terry Um’s TensorFlow Exercises – Re-create the codes from other TensorFlow examples
- Installing TensorFlow on Raspberry Pi 3 – TensorFlow compiled and running properly on the Raspberry Pi
- Classification on time series – Recurrent Neural Network classification in TensorFlow with LSTM on cellphone sensor data
- Getting Started with TensorFlow on Android – Build your first TensorFlow Android app
- Predict time series – Learn to use a seq2seq model on simple datasets as an introduction to the vast array of possibilities that this architecture offers
- Single Image Random Dot Stereograms – SIRDS is a means to present 3D data in a 2D image. It allows for scientific data display of a waterfall type plot with no hidden lines due to perspective.
- Domain Transfer Network – Implementation of Unsupervised Cross-Domain Image Generation
- [Show, Attend and Tell](https://github.com/yunjey/show_attend_and_tell) – Attention Based Image Caption Generator
- Neural Style – Implementation of Neural Style
- Pretty Tensor – Pretty Tensor provides a high level builder API
- Neural Style – An implementation of neural style
- AlexNet3D – An implementation of AlexNet3D: a simple AlexNet model, but with 3D convolutional layers (conv3d).
- TensorFlow White Paper Notes – Annotated notes and summaries of the TensorFlow white paper, along with SVG figures and links to documentation
- NeuralArt – Implementation of A Neural Algorithm of Artistic Style
- Deep-Q learning Pong with TensorFlow and PyGame
- Generative Handwriting Demo using TensorFlow – An attempt to implement the random handwriting generation portion of Alex Graves’ paper
- Neural Turing Machine in TensorFlow – implementation of Neural Turing Machine
- [GoogleNet Convolutional Neural Network Groups Movie Scenes By Setting](https://github.com/agermanidis/thingscoop) – Search, filter, and describe videos based on objects, places, and other things that appear in them
- Neural machine translation between the writings of Shakespeare and modern English using TensorFlow – This performs a monolingual translation, going from modern English to Shakespeare and vice versa.
- Chatbot – Implementation of “A neural conversational model”
- [Colornet – Neural Network to colorize grayscale images](https://github.com/pavelgonchar/colornet)
- Neural Caption Generator – Implementation of “Show and Tell”
- Neural Caption Generator with Attention – Implementation of “Show, Attend and Tell”
- Weakly_detector – Implementation of “Learning Deep Features for Discriminative Localization”
- Dynamic Capacity Networks – Implementation of “Dynamic Capacity Networks”
- HMM in TensorFlow – Implementation of the Viterbi and forward/backward algorithms for HMMs
- DeepOSM – Train TensorFlow neural nets with OpenStreetMap features and satellite imagery.
- DQN-tensorflow – TensorFlow implementation of DeepMind’s ‘Human-Level Control through Deep Reinforcement Learning’ with OpenAI Gym by Devsisters.com
- Highway Network – TensorFlow implementation of “Training Very Deep Networks” with a blog post
- Sentence Classification with CNN – TensorFlow implementation of “Convolutional Neural Networks for Sentence Classification” with a blog post
- End-To-End Memory Networks – Implementation of End-To-End Memory Networks
- Character-Aware Neural Language Models – TensorFlow implementation of Character-Aware Neural Language Models
- YOLO TensorFlow ++ – TensorFlow implementation of ‘YOLO: Real-Time Object Detection’, with training and actual support for real-time running on mobile devices.
- Wavenet – This is a TensorFlow implementation of the WaveNet generative neural network architecture for audio generation.
- Mnemonic Descent Method – TensorFlow implementation of “Mnemonic Descent Method: A recurrent process applied for end-to-end face alignment”
- CNN visualization using TensorFlow – TensorFlow implementation of “Visualizing and Understanding Convolutional Networks”
- YOLO TensorFlow – Implementation of ‘YOLO : Real-Time Object Detection’
- android-yolo – Real-time object detection on Android using the YOLO network, powered by TensorFlow.
- Magenta – Research project to advance the state of the art in machine intelligence for music and art generation
- tf.contrib.learn – Simplified interface for Deep/Machine Learning (now part of TensorFlow)
- tensorflow.rb – TensorFlow native interface for ruby using SWIG
- tflearn – Deep learning library featuring a higher-level API
- TensorFlow-Slim – High-level library for defining models
- TensorFrames – TensorFlow binding for Apache Spark
- TensorFlowOnSpark – Initiative from Yahoo! to enable distributed TensorFlow with Apache Spark.
- caffe-tensorflow – Convert Caffe models to TensorFlow format
- keras – Minimal, modular deep learning library for TensorFlow and Theano
- SyntaxNet: Neural Models of Syntax – A TensorFlow implementation of the models described in Globally Normalized Transition-Based Neural Networks, Andor et al. (2016)
- keras-js – Run Keras models (tensorflow backend) in the browser, with GPU support
- NNFlow – Simple framework for reading ROOT NTuples, converting them to NumPy arrays, and using them with Google TensorFlow.
- Sonnet – Sonnet is DeepMind’s library built on top of TensorFlow for building complex neural networks.
- tensorpack – Neural Network Toolbox on TensorFlow focusing on training speed and on large datasets.
- TensorFlow Guide 1 – A guide to installation and use
- TensorFlow Guide 2 – Continuation of first video
- TensorFlow Basic Usage – A guide going over basic usage
- TensorFlow Deep MNIST for Experts – Goes over Deep MNIST
- TensorFlow Udacity Deep Learning – Basic steps to install TensorFlow for free on the Cloud 9 online service with 1Gb of data
- Why Google wants everyone to have access to TensorFlow
- Videos from TensorFlow Silicon Valley Meet Up 1/19/2016
- Videos from TensorFlow Silicon Valley Meet Up 1/21/2016
- Stanford CS224d Lecture 7 – Introduction to TensorFlow, 19th Apr 2016 – CS224d Deep Learning for Natural Language Processing by Richard Socher
- Diving into Machine Learning through TensorFlow – Pycon 2016 Portland Oregon, Slide & Code by Julia Ferraioli, Amy Unruh, Eli Bixby
- Large Scale Deep Learning with TensorFlow – Spark Summit 2016 Keynote by Jeff Dean
- TensorFlow and deep learning – without a PhD – by Martin Görner
- TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems – This paper describes the TensorFlow interface and an implementation of that interface that we have built at Google
- TF.Learn: TensorFlow’s High-level Module for Distributed Machine Learning
- Comparative Study of Deep Learning Software Frameworks – The study is performed on several types of deep learning architectures and we evaluate the performance of the above frameworks when employed on a single machine for both (multi-threaded) CPU and GPU (Nvidia Titan X) settings
- Distributed TensorFlow with MPI – In this paper, we extend recently proposed Google TensorFlow for execution on large scale clusters using Message Passing Interface (MPI)
- Globally Normalized Transition-Based Neural Networks – This paper describes the models behind SyntaxNet.
- TensorFlow: A system for large-scale machine learning – This paper describes the TensorFlow dataflow model in contrast to existing systems and demonstrates its compelling performance
- TensorFlow: smarter machine learning, for everyone – An introduction to TensorFlow
- Announcing SyntaxNet: The World’s Most Accurate Parser Goes Open Source – Release of SyntaxNet, “an open-source neural network framework implemented in TensorFlow that provides a foundation for Natural Language Understanding systems.”
- Why TensorFlow will change the Game for AI
- TensorFlow for Poets – Goes over the implementation of TensorFlow
- Introduction to Scikit Flow – Simplified Interface to TensorFlow – Key Features Illustrated
- Building Machine Learning Estimator in TensorFlow – Understanding the Internals of TensorFlow Learn Estimators
- TensorFlow – Not Just For Deep Learning
- The indico Machine Learning Team’s take on TensorFlow
- The Good, Bad, & Ugly of TensorFlow – A survey of six months rapid evolution (+ tips/hacks and code to fix the ugly stuff), Dan Kuster at Indico, May 9, 2016
- Fizz Buzz in TensorFlow – A joke by Joel Grus
- RNNs In TensorFlow, A Practical Guide And Undocumented Features – Step-by-step guide with full code examples on GitHub.
- Using TensorBoard to Visualize Image Classification Retraining in TensorFlow
- TFRecords Guide – Semantic segmentation and handling the TFRecord file format.
- TensorFlow Android Guide – Android TensorFlow Machine Learning Example.
- TensorFlow Optimizations on Modern Intel® Architecture – Introduces TensorFlow optimizations on Intel® Xeon® and Intel® Xeon Phi™ processor-based platforms based on an Intel/Google collaboration.
- Machine Learning with TensorFlow by Nishant Shukla, computer vision researcher at UCLA and author of Haskell Data Analysis Cookbook. This book makes the math-heavy topic of ML approachable and practical to a newcomer.
- First Contact with TensorFlow by Jordi Torres, professor at UPC Barcelona Tech and a research manager and senior advisor at Barcelona Supercomputing Center
- Deep Learning with Python – Develop Deep Learning Models on Theano and TensorFlow Using Keras by Jason Brownlee
- TensorFlow for Machine Intelligence – Complete guide to use TensorFlow from the basics of graph computing, to deep learning models to using it in production environments – Bleeding Edge Press
- Getting Started with TensorFlow – Get up and running with the latest numerical computing library by Google and dive deeper into your data, by Giancarlo Zaccone
- Hands-On Machine Learning with Scikit-Learn and TensorFlow – by Aurélien Geron, former lead of the YouTube video classification team. Covers ML fundamentals, training and deploying deep nets across multiple servers and GPUs using TensorFlow, the latest CNN, RNN and Autoencoder architectures, and Reinforcement Learning (Deep Q).
- Building Machine Learning Projects with TensorFlow – by Rodolfo Bonnin. This book covers various projects in TensorFlow that expose what can be done with TensorFlow in different scenarios. The book provides projects on training models, machine learning, deep learning, and working with various neural networks. Each project is an engaging and insightful exercise that will teach you how to use TensorFlow and show you how layers of data can be explored by working with Tensors.
If you want to contribute to this list (please do), send me a pull request or contact me @jtoy. Also, let me know if any of the repositories listed above should be deprecated for any of the following reasons:
- The repository’s owner explicitly says that the library is not maintained.
- It has not been committed to for a long time (2–3 years).
More info on the guidelines
Sometimes, however, we don’t want that, because a hard max would starve the item with the smaller score: it would never be chosen. What we want instead is for the item with the larger score to be chosen often while the item with the smaller score is still chosen occasionally, and softmax gives us exactly that. Returning to a and b with a > b: if we choose between them with probabilities computed by softmax, then since a’s softmax value is larger than b’s, a will be chosen most of the time, while b is still chosen occasionally, with probabilities that reflect their original magnitudes. So it is not a hard max, but a “soft” max. What exactly are those probabilities? Let’s work that out below.
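As a minimal sketch of this idea (using NumPy rather than TensorFlow), the probabilities come from exponentiating the scores and normalizing; sampling with those probabilities picks a most of the time while still occasionally picking b:

```python
import numpy as np

def softmax(x):
    # Subtract the max for numerical stability; the result is unchanged.
    e = np.exp(x - np.max(x))
    return e / e.sum()

# Two scores with a > b: softmax turns them into probabilities,
# so the smaller one is still picked occasionally instead of never.
a, b = 3.0, 1.0
p = softmax(np.array([a, b]))
print(p)  # ≈ [0.881, 0.119]: a gets most of the mass, b keeps some

rng = np.random.default_rng(0)
samples = rng.choice(["a", "b"], size=10_000, p=p)
print((samples == "b").mean())  # roughly p[1], i.e. b is chosen ~12% of the time
```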
When a new (unseen) image is input into the ConvNet, the network would go through the forward propagation step and output a probability for each class (for a new image, the output probabilities are calculated using the weights which have been optimized to correctly classify all the previous training examples). If our training set is large enough, the network will (hopefully) generalize well to new images and classify them into correct categories.
ConvNets derive their name from the “convolution” operator. The primary purpose of Convolution in case of a ConvNet is to extract features from the input image. Convolution preserves the spatial relationship between pixels by learning image features using small squares of input data. We will not go into the mathematical details of Convolution here, but will try to understand how it works over images.
As we discussed above, every image can be considered as a matrix of pixel values. Consider a 5 x 5 image whose pixel values are only 0 and 1 (note that for a grayscale image, pixel values range from 0 to 255, the green matrix below is a special case where pixel values are only 0 and 1):
Figure 5: The Convolution operation. The output matrix is called Convolved Feature or Feature Map. Source 
Take a moment to understand how the computation above is being done. We slide the orange matrix over our original image (green) by 1 pixel (this step size is called the ‘stride’), and for every position we compute an element-wise multiplication (between the two matrices) and add the products to get a single integer, which forms one element of the output matrix (pink). Note that the 3×3 matrix “sees” only a part of the input image at each position.
In CNN terminology, the 3×3 (orange) matrix is called a ‘filter’, ‘kernel’, or ‘feature detector’, and the matrix formed by sliding the filter over the image and computing the dot product is called the ‘Convolved Feature’, ‘Activation Map’, or ‘Feature Map’. It is important to note that filters act as feature detectors on the original input image.
In the table below, we can see the effects of convolution of the above image with different filters. As shown, we can perform operations such as Edge Detection, Sharpen and Blur just by changing the numeric values of our filter matrix before the convolution operation – this means that different filters can detect different features from an image, for example edges, curves etc. More such examples are available in Section 8.2.4 here.
The size of the Feature Map (Convolved Feature) is controlled by three parameters that we need to decide before the convolution step is performed:
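To make the sliding-window computation concrete, here is a minimal NumPy sketch of a valid convolution with stride 1, applied to a 5×5 binary image and a 3×3 filter of the kind described above (the particular values are just for illustration):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid convolution with stride 1: slide the kernel over the image,
    multiply element-wise at each position, and sum the products."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow), dtype=image.dtype)
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 5x5 binary image and a 3x3 filter.
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])
print(conv2d(image, kernel))
# [[4 3 4]
#  [2 4 3]
#  [2 3 4]]  -- the 3x3 feature map
```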
- Depth: Depth corresponds to the number of filters we use for the convolution operation. In the network shown in Figure 7, we are performing convolution of the original boat image using three distinct filters, thus producing three different feature maps as shown. You can think of these three feature maps as stacked 2d matrices, so, the ‘depth’ of the feature map would be three.
- Stride: Stride is the number of pixels by which we slide our filter matrix over the input matrix. When the stride is 1 then we move the filters one pixel at a time. When the stride is 2, then the filters jump 2 pixels at a time as we slide them around. Having a larger stride will produce smaller feature maps.
- Zero-padding: Sometimes, it is convenient to pad the input matrix with zeros around the border, so that we can apply the filter to bordering elements of our input image matrix. A nice feature of zero padding is that it allows us to control the size of the feature maps. Adding zero-padding is also called wide convolution, and not using zero-padding would be a narrow convolution. This has been explained clearly in .
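Putting these parameters together: for input width W, filter size F, stride S, and zero-padding P, the feature map width is (W − F + 2P) / S + 1. A small sketch (the function name is ours, chosen for illustration):

```python
def feature_map_size(w, f, s=1, p=0):
    """Output width of a convolution: (W - F + 2P) / S + 1,
    where W = input size, F = filter size, S = stride, P = zero-padding."""
    assert (w - f + 2 * p) % s == 0, "filter does not tile the input evenly"
    return (w - f + 2 * p) // s + 1

print(feature_map_size(5, 3))       # 3: a 5x5 image with a 3x3 filter, stride 1 -> 3x3 map
print(feature_map_size(5, 3, p=1))  # 5: 'wide' convolution, padding preserves the size
print(feature_map_size(7, 3, s=2))  # 3: a larger stride produces a smaller feature map
```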
In the sparse autoencoder chapter, we introduced a design that “fully connects” the input layer to the hidden layer. Computationally, learning features over the whole image is feasible for the relatively small images used in other chapters (the 8×8 patches in the sparse autoencoder exercise, or the 28×28 images of the MNIST dataset). For larger images, however (say 96×96), learning features over the entire image with such a fully connected network becomes very time-consuming: you would need on the order of 10^4 (= 10,000) input units, and to learn 100 features you would have on the order of 10^6 parameters. Compared with 28×28 images, the forward and backward passes on 96×96 images would also be about 10^2 (= 100) times slower.
Here is a concrete example. Suppose you have learned the features of 8×8 patches sampled from a 96×96 image, using a sparse autoencoder with 100 hidden units. To obtain the convolved features, you run the convolution over every 8×8 region of the 96×96 image: that is, you extract the 8×8 patches starting at coordinates (1, 1), (1, 2), …, up to (89, 89), and run the trained sparse autoencoder on each patch to get its feature activations. In this example, this clearly yields 100 sets of convolved features, each containing 89×89 values.
Having obtained features via convolution, the next step is to use them for classification. In theory, one could train a classifier, such as a softmax classifier, on all of the extracted features, but doing so is computationally challenging. For a 96×96-pixel image, suppose we have learned 400 features, each defined over 8×8 inputs. Convolving each feature with the image yields a (96 − 8 + 1) × (96 − 8 + 1) = 7,921-dimensional convolved feature, and with 400 features, every example produces a 7,921 × 400 = 3,168,400-dimensional convolved feature vector. Training a classifier on more than 3 million input features is very inconvenient and prone to over-fitting.
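The dimension count above can be checked with a few lines of Python:

```python
# Reproducing the arithmetic above: 400 features learned on 8x8 patches,
# convolved over a 96x96 image (valid convolution, stride 1).
img, patch, n_features = 96, 8, 400

per_feature = (img - patch + 1) ** 2  # 89 * 89 convolved values per feature
total = per_feature * n_features      # per-example feature vector length

print(per_feature)  # 7921
print(total)        # 3168400
```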
To solve this problem, first recall that we decided to use convolved features because images have a kind of “stationarity” property: a feature that is useful in one region of an image is very likely to be useful in another region as well. So, to describe a large image, a natural idea is to aggregate statistics of the features at different locations, for example by computing the mean (or maximum) value of a particular feature over a region of the image. These summary statistics not only have much lower dimensionality (compared with using all of the extracted features), they also tend to improve results (they are less prone to over-fitting). This aggregation operation is called pooling, and depending on how the statistic is computed, it is sometimes called mean pooling or max pooling.
If the pooling regions are chosen as contiguous areas of the image, and we only pool features produced by the same (replicated) hidden units, then the pooling units are translation invariant: even after the image undergoes a small translation, the same (pooled) features are produced. In many tasks (such as object detection or audio recognition) we prefer translation-invariant features, because the label of an example (image) stays the same even when the image is translated. For instance, if you take an MNIST digit and shift it to the left or right, you would want your classifier to still classify it accurately as the same digit regardless of its final position.
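A minimal NumPy sketch of max pooling over contiguous 2×2 regions (the feature-map values here are made up for illustration):

```python
import numpy as np

def max_pool(fmap, size=2, stride=2):
    """Max pooling: aggregate each size x size region of the feature map
    into its maximum, producing a smaller summary map."""
    oh = (fmap.shape[0] - size) // stride + 1
    ow = (fmap.shape[1] - size) // stride + 1
    out = np.zeros((oh, ow), dtype=fmap.dtype)
    for i in range(oh):
        for j in range(ow):
            region = fmap[i * stride:i * stride + size,
                          j * stride:j * stride + size]
            out[i, j] = region.max()
    return out

fmap = np.array([[1, 3, 2, 1],
                 [4, 2, 1, 5],
                 [3, 1, 1, 0],
                 [1, 2, 4, 2]])
pooled = max_pool(fmap)
print(pooled)
# [[4 5]
#  [3 4]]
```

Because each pooled value summarizes a whole region, a small shift of the input often leaves the pooled output unchanged, which is the translation invariance described above.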
(MNIST is a database of handwritten digits for recognition: http://yann.lecun.com/exdb/mnist/)