Welcome to the Gutenberg Editor

Of Mountains & Printing Presses

The goal of this new editor is to make adding rich content to WordPress simple and enjoyable. This whole post is composed of pieces of content—somewhat similar to LEGO bricks—that you can move around and interact with. Move your cursor around and you’ll notice the different blocks light up with outlines and arrows. Press the arrows to reposition blocks quickly, without fear of losing things in the process of copying and pasting.

What you are reading now is a text block, the most basic block of all. The text block has its own controls to be moved freely around the post…

… like this one, which is right aligned.

Headings are separate blocks as well, which helps with the outline and organization of your content.

A Picture is Worth a Thousand Words

Handling images and media with the utmost care is a primary focus of the new editor. Hopefully, you’ll find aspects of adding captions or going full-width with your pictures much easier and more robust than before.

Beautiful landscape
If your theme supports it, you’ll see the “wide” button on the image toolbar. Give it a try.

Try selecting and removing or editing the caption; you no longer have to worry about selecting the image or other text by mistake and ruining the presentation.

The Inserter Tool

Imagine everything that WordPress can do is available to you quickly and in the same place on the interface. No need to figure out HTML tags, classes, or remember complicated shortcode syntax. That’s the spirit behind the inserter—the (+) button you’ll see around the editor—which allows you to browse all available content blocks and add them into your post. Plugins and themes are able to register their own, opening up all sorts of possibilities for rich editing and publishing.

Go give it a try; you may discover things WordPress can already add into your posts that you didn’t know about. Here’s a short list of what you can currently find there:

  • Text & Headings
  • Images & Videos
  • Galleries
  • Embeds, like YouTube, Tweets, or other WordPress posts.
  • Layout blocks, like Buttons, Hero Images, Separators, etc.
  • And lists like this one, of course 🙂

Visual Editing

A huge benefit of blocks is that you can edit them in place and manipulate your content directly. Instead of having fields for editing things like the source of a quote, or the text of a button, you can directly change the content. Try editing the following quote:

The editor will endeavor to create a new page and post building experience that makes writing rich posts effortless, and has “blocks” to make it easy what today might take shortcodes, custom HTML, or “mystery meat” embed discovery.

Matt Mullenweg, 2017

The information corresponding to the source of the quote is a separate text field, similar to captions under images, so the structure of the quote is protected even if you select, modify, or remove the source. It’s always easy to add it back.

Blocks can be anything you need. For instance, you may want to add a subdued quote as part of the composition of your text, or you may prefer to display a giant stylized one. All of these options are available in the inserter.

You can change the number of columns in your galleries by dragging a slider in the block inspector in the sidebar.

Media Rich

If you combine the new wide and full-wide alignments with galleries, you can create a very media-rich layout very quickly:

Accessibility is important — don’t forget the image alt attribute

Sure, the full-wide image can be pretty big. But sometimes the image is worth it.

The above is a gallery with just two images. It’s an easier way to create visually appealing layouts, without having to deal with floats. You can also easily convert the gallery back to individual images again, by using the block switcher.

Any block can opt into these alignments. The embed block has them also, and is responsive out of the box:

You can build any block you like, static or dynamic, decorative or plain. Here’s a pullquote block:

Code is Poetry

The WordPress community

If you want to learn more about how to build additional blocks, or if you are interested in helping with the project, head over to the GitHub repository.


Thanks for testing Gutenberg!

👋

HomeAssistant+HomeBridge+Siri+Raspberry Pi+ESP8266+sensors+camera+NAT traversal+VPS

HomeAssistant+HomeBridge+Hassio+Docker-CM+Siri+MQTT+Padavan+Breed+Shairplay+HiFiDAC+Xshell+Xftp+DHT22+Raspberry Pi+ESP8266+sensors+Arduino+relays+camera (mjpg-streamer)+NAT traversal (frp)+VPS+public-network forwarding+domain name+Alibaba Cloud shared hosting+ownCloud+VNC+SSH+OpenSSH for Windows+iTerm2+PlayerXtreme+MSTSC+RD Client+Termius+Xiaomi sensors, gateway, and lights+……

One big system, and all of it works. When I find the time I’ll write it up with text and pictures, filling things in bit by bit.
For now, a few cat photos, taken while nobody was home…



TensorBoard 101

  1. Run the Python script below.
  2. Run tensorboard --logdir=/tmp/tensorflow/mnist/logs/mnist_with_summaries and open TensorBoard in the browser (http://localhost:6006 by default).

#%%
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Hyperparameters and paths
max_steps = 1000
learning_rate = 0.001
dropout = 0.9
data_dir = '/tmp/tensorflow/mnist/input_data'
log_dir = '/tmp/tensorflow/mnist/logs/mnist_with_summaries'

# Import data
mnist = input_data.read_data_sets(data_dir, one_hot=True)

sess = tf.InteractiveSession()

# Create a multilayer model.

# Input placeholders
with tf.name_scope('input'):
    x = tf.placeholder(tf.float32, [None, 784], name='x-input')
    y_ = tf.placeholder(tf.float32, [None, 10], name='y-input')

with tf.name_scope('input_reshape'):
    image_shaped_input = tf.reshape(x, [-1, 28, 28, 1])
    tf.summary.image('input', image_shaped_input, 10)

# We can't initialize these variables to 0 - the network will get stuck.
def weight_variable(shape):
    """Create a weight variable with appropriate initialization."""
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    """Create a bias variable with appropriate initialization."""
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def variable_summaries(var):
    """Attach a lot of summaries to a Tensor (for TensorBoard visualization)."""
    with tf.name_scope('summaries'):
        mean = tf.reduce_mean(var)
        tf.summary.scalar('mean', mean)
        with tf.name_scope('stddev'):
            stddev = tf.sqrt(tf.reduce_mean(tf.square(var - mean)))
        tf.summary.scalar('stddev', stddev)
        tf.summary.scalar('max', tf.reduce_max(var))
        tf.summary.scalar('min', tf.reduce_min(var))
        tf.summary.histogram('histogram', var)

def nn_layer(input_tensor, input_dim, output_dim, layer_name, act=tf.nn.relu):
    """Reusable code for making a simple neural net layer.

    It does a matrix multiply, bias add, and then uses relu to nonlinearize.
    It also sets up name scoping so that the resultant graph is easy to read,
    and adds a number of summary ops.
    """
    # Adding a name scope ensures logical grouping of the layers in the graph.
    with tf.name_scope(layer_name):
        # This Variable will hold the state of the weights for the layer
        with tf.name_scope('weights'):
            weights = weight_variable([input_dim, output_dim])
            variable_summaries(weights)
        with tf.name_scope('biases'):
            biases = bias_variable([output_dim])
            variable_summaries(biases)
        with tf.name_scope('Wx_plus_b'):
            preactivate = tf.matmul(input_tensor, weights) + biases
            tf.summary.histogram('pre_activations', preactivate)
        activations = act(preactivate, name='activation')
        tf.summary.histogram('activations', activations)
        return activations

hidden1 = nn_layer(x, 784, 500, 'layer1')

with tf.name_scope('dropout'):
    keep_prob = tf.placeholder(tf.float32)
    tf.summary.scalar('dropout_keep_probability', keep_prob)
    dropped = tf.nn.dropout(hidden1, keep_prob)

# Do not apply softmax activation yet, see below.
y = nn_layer(dropped, 500, 10, 'layer2', act=tf.identity)

with tf.name_scope('cross_entropy'):
    # The raw formulation of cross-entropy,
    #
    #   tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(tf.softmax(y)),
    #                                 reduction_indices=[1]))
    #
    # can be numerically unstable.
    #
    # So here we use tf.nn.softmax_cross_entropy_with_logits on the
    # raw outputs of the nn_layer above, and then average across
    # the batch.
    diff = tf.nn.softmax_cross_entropy_with_logits(logits=y, labels=y_)
    with tf.name_scope('total'):
        cross_entropy = tf.reduce_mean(diff)
tf.summary.scalar('cross_entropy', cross_entropy)

with tf.name_scope('train'):
    train_step = tf.train.AdamOptimizer(learning_rate).minimize(
        cross_entropy)

with tf.name_scope('accuracy'):
    with tf.name_scope('correct_prediction'):
        correct_prediction = tf.equal(tf.argmax(y, 1), tf.argmax(y_, 1))
    with tf.name_scope('accuracy'):
        accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
tf.summary.scalar('accuracy', accuracy)

# Merge all the summaries and write them out to log_dir
merged = tf.summary.merge_all()
train_writer = tf.summary.FileWriter(log_dir + '/train', sess.graph)
test_writer = tf.summary.FileWriter(log_dir + '/test')
tf.global_variables_initializer().run()

# Train the model, and also write summaries.
# Every 10th step, measure test-set accuracy, and write test summaries.
# All other steps, run train_step on training data, and add training summaries.

def feed_dict(train):
    """Make a TensorFlow feed_dict: maps data onto Tensor placeholders."""
    if train:
        xs, ys = mnist.train.next_batch(100)
        k = dropout
    else:
        xs, ys = mnist.test.images, mnist.test.labels
        k = 1.0
    return {x: xs, y_: ys, keep_prob: k}

saver = tf.train.Saver()
for i in range(max_steps):
    if i % 10 == 0:  # Record summaries and test-set accuracy
        summary, acc = sess.run([merged, accuracy], feed_dict=feed_dict(False))
        test_writer.add_summary(summary, i)
        print('Accuracy at step %s: %s' % (i, acc))
    else:  # Record train set summaries, and train
        if i % 100 == 99:  # Record execution stats
            run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)
            run_metadata = tf.RunMetadata()
            summary, _ = sess.run([merged, train_step],
                                  feed_dict=feed_dict(True),
                                  options=run_options,
                                  run_metadata=run_metadata)
            train_writer.add_run_metadata(run_metadata, 'step%03d' % i)
            train_writer.add_summary(summary, i)
            saver.save(sess, log_dir + '/model.ckpt', i)
            print('Adding run metadata for', i)
        else:  # Record a summary
            summary, _ = sess.run([merged, train_step], feed_dict=feed_dict(True))
            train_writer.add_summary(summary, i)
train_writer.close()
test_writer.close()
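Since the loop above writes checkpoints via tf.train.Saver, here is a minimal sketch of restoring the trained weights later. It assumes the same graph (and the saver and log_dir above) has been recreated in the current session:

# Sketch: restore the most recent checkpoint written by the training loop.
ckpt = tf.train.latest_checkpoint(log_dir)
if ckpt:
    saver.restore(sess, ckpt)  # e.g. .../mnist_with_summaries/model.ckpt-999
    print('Restored weights from', ckpt)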


TensorFlow tutorials and projects

https://github.com/jtoy/awesome-tensorflow

Awesome TensorFlow 

 


A curated list of awesome TensorFlow experiments, libraries, and projects. Inspired by awesome-machine-learning.

What is TensorFlow?

TensorFlow is an open source software library for numerical computation using data flow graphs. In other words, the best way to build deep learning models.

More info here.
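As a minimal sketch of the data-flow-graph idea, in the TensorFlow 1.x style used elsewhere on this page: building the graph only describes the computation, and nothing runs until a session executes it.

import tensorflow as tf

# Nodes are operations; edges carry tensors. Nothing is computed yet.
a = tf.constant(2.0, name='a')
b = tf.constant(3.0, name='b')
c = tf.add(a, b, name='c')

# Computation happens only when the graph is run in a session.
with tf.Session() as sess:
    print(sess.run(c))  # 5.0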

Table of Contents

Tutorials

Models/Projects

Powered by TensorFlow

  • YOLO TensorFlow – Implementation of ‘YOLO: Real-Time Object Detection’
  • android-yolo – Real-time object detection on Android using the YOLO network, powered by TensorFlow.
  • Magenta – Research project to advance the state of the art in machine intelligence for music and art generation

Libraries

Videos

Papers

Official announcements

Blog posts

Community

Books

  • Machine Learning with TensorFlow by Nishant Shukla, computer vision researcher at UCLA and author of Haskell Data Analysis Cookbook. This book makes the math-heavy topic of ML approachable and practical to a newcomer.
  • First Contact with TensorFlow by Jordi Torres, professor at UPC Barcelona Tech and a research manager and senior advisor at Barcelona Supercomputing Center
  • Deep Learning with Python – Develop Deep Learning Models on Theano and TensorFlow Using Keras by Jason Brownlee
  • TensorFlow for Machine Intelligence – A complete guide to using TensorFlow, from the basics of graph computation to deep learning models and production deployment – Bleeding Edge Press
  • Getting Started with TensorFlow – Get up and running with the latest numerical computing library by Google and dive deeper into your data, by Giancarlo Zaccone
  • Hands-On Machine Learning with Scikit-Learn and TensorFlow – by Aurélien Geron, former lead of the YouTube video classification team. Covers ML fundamentals, training and deploying deep nets across multiple servers and GPUs using TensorFlow, the latest CNN, RNN and Autoencoder architectures, and Reinforcement Learning (Deep Q).
  • Building Machine Learning Projects with TensorFlow – by Rodolfo Bonnin. This book covers various projects in TensorFlow that expose what can be done with TensorFlow in different scenarios. The book provides projects on training models, machine learning, deep learning, and working with various neural networks. Each project is an engaging and insightful exercise that will teach you how to use TensorFlow and show you how layers of data can be explored by working with Tensors.

Contributions

Your contributions are always welcome!

If you want to contribute to this list (please do), send me a pull request or contact me @jtoy. Also, let me know if you notice that any of the repositories listed above should be deprecated (for example, because they are no longer maintained).

More info on the guidelines

Credits

Softmax and Cross-Entropy

Suppose we have two numbers, a and b, with a > b. If we take the max, we simply pick a; there is no other possibility.

Sometimes, though, that is not what we want, because the smaller value is then starved entirely. We would rather pick the larger value most of the time while still occasionally picking the smaller one, and softmax gives us exactly that. With the same a and b (a > b), if we choose according to their softmax probabilities, a has the larger softmax value and is picked more often, while b still gets picked occasionally, with probabilities determined by their relative magnitudes. In other words, it is not a hard max but a "soft" max. Let's look at exactly what those probabilities are.

Definition

Suppose we have an array $V$, with $V_i$ denoting its $i$-th element. The softmax value of that element is

$$S_i = \frac{e^{V_i}}{\sum_j e^{V_j}}$$

That is, it is the exponential of the element divided by the sum of the exponentials of all elements.

This definition is quite intuitive, and beyond being simple and easy to understand, it has further advantages.
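Before moving on, a small NumPy sketch of this definition (subtracting the max before exponentiating is a standard numerical-stability trick and does not change the result):

import numpy as np

def softmax(v):
    """Softmax: each element's exponential over the sum of all exponentials."""
    shifted = v - np.max(v)   # stability trick; softmax is shift-invariant
    exps = np.exp(shifted)
    return exps / np.sum(exps)

scores = np.array([3.0, 1.0, 0.2])
print(softmax(scores))        # ~[0.836 0.113 0.051], sums to 1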

1. Measuring the gap from the labeled sample

In a neural network we frequently need to measure the gap between the score $S_1$ computed by the network's forward pass and the score $S_2$ given by the correct label, and compute a loss from that gap before backpropagation can be applied. The loss is defined as the cross-entropy:

$$L = -\log\left(\frac{e^{S_y}}{\sum_j e^{S_j}}\right)$$

where $S_y$ is the score of the correct class; the cross-entropy describes how closely the prediction and the label agree.

The softmax value inside the parentheses lies in $(0, 1]$, so once the negative sign is applied, $L$ ranges over $[0, +\infty)$. The larger the softmax value, the smaller $L$: the numerator and denominator of the fraction are closer to each other, and the penalty shrinks.

The value inside the log is the softmax value of the correct class for this example: the larger its share of the probability mass, the smaller the loss for this sample, which is exactly the behavior we want.
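Continuing the sketch above, the loss for a single example is just the negative log of the softmax probability assigned to the correct class (the label is assumed to be a class index here):

def cross_entropy_loss(scores, correct_class):
    """-log of the softmax probability of the correct class."""
    probs = softmax(scores)
    return -np.log(probs[correct_class])

print(cross_entropy_loss(scores, 0))  # small loss: the correct class dominates
print(cross_entropy_loss(scores, 2))  # large loss: the correct class has low probability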

In short: SVM only picks its one favorite, while softmax lines up every candidate, scores them all, and then normalizes the scores.


On SVM and MLP