# How TensorFlow Works

$ pip install -r requirements.txt


# General Outline of a TensorFlow Algorithm

## Transforming and Normalizing Data

# Usage in older versions of TensorFlow
>>> data = tf.nn.batch_norm_with_global_normalization(...)
# Usage in TensorFlow 2.2
>>> data = tf.nn.batch_normalization(...)


Usage of tensorflow.nn.batch_normalization

Batch normalization.

Normalizes a tensor by mean and variance, and applies (optionally) a scale $$\gamma$$ to it, as well as an offset $$\beta$$:

$$\frac{\gamma(x-\mu)}{\sigma}+\beta$$

mean, variance, offset and scale are all expected to be of one of two shapes:

• In all generality, they can have the same number of dimensions as the input x, with identical sizes as x for the dimensions that are not normalized over (the ‘depth’ dimension(s)), and dimension 1 for the others which are being normalized over. mean and variance in this case would typically be the outputs of tf.nn.moments(…, keepdims=True) during training, or running averages thereof during inference.
• In the common case where the ‘depth’ dimension is the last dimension in the input tensor x, they may be one dimensional tensors of the same size as the ‘depth’ dimension. This is the case for example for the common [batch, depth] layout of fully-connected layers, and [batch, height, width, depth] for convolutions. mean and variance in this case would typically be the outputs of tf.nn.moments(…, keepdims=False) during training, or running averages thereof during inference.

See equation 11 in Algorithm 2 of source: [Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift; S. Ioffe, C. Szegedy](http://arxiv.org/abs/1502.03167).

param x: Input Tensor of arbitrary dimensionality.
param mean: A mean Tensor.
param variance: A variance Tensor.
param offset: An offset Tensor, often denoted $$\beta$$ in equations, or None. If present, will be added to the normalized tensor.
param scale: A scale Tensor, often denoted $$\gamma$$ in equations, or None. If present, the scale is applied to the normalized tensor.
param variance_epsilon: A small float number to avoid dividing by 0.
param name: A name for this operation (optional).
returns: the normalized, scaled, offset tensor.
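The formula above can be checked with a small NumPy sketch. NumPy stands in for TensorFlow here, and the per-column statistics mirror what tf.nn.moments(..., keepdims=False) would produce for a toy [batch, depth] input; the values of gamma, beta, and epsilon are made-up illustrations:

```python
import numpy as np

# A toy [batch, depth] input, as in the fully-connected case above.
x = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Per-depth statistics, matching tf.nn.moments(..., keepdims=False).
mean = x.mean(axis=0)          # shape (2,)
variance = x.var(axis=0)       # shape (2,)

gamma = np.array([1.0, 1.0])   # scale
beta = np.array([0.0, 0.0])    # offset
epsilon = 1e-3                 # small constant to avoid dividing by 0

# gamma * (x - mean) / sqrt(variance + epsilon) + beta
normalized = gamma * (x - mean) / np.sqrt(variance + epsilon) + beta
print(normalized)              # each column now has zero mean, ~unit variance
```

With gamma = 1 and beta = 0, this is plain standardization; the extra epsilon keeps the division stable when a column has near-zero variance.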


## Setting Algorithm Parameters

>>> learning_rate = 0.01
>>> iterations = 1000


## Initializing Variables and Placeholders

TensorFlow needs us to tell it which quantities can be changed and which cannot. During the optimization process that minimizes the loss function, TensorFlow changes certain variables. To accomplish this, we feed data in through placeholders. We need to initialize both the size and the type of variables and placeholders, so that TensorFlow knows what to optimize. For example:

>>> a_var = tf.constant(42)
>>> x_input = tf.placeholder(tf.float32, [None, input_size])
>>> y_input = tf.placeholder(tf.float32, [None, num_classes])


Usage of tensorflow.constant

Creates a constant tensor from a tensor-like object.

Note: All eager tf.Tensor values are immutable (in contrast to tf.Variable). There is nothing especially _constant_ about the value returned from tf.constant. This function is not fundamentally different from tf.convert_to_tensor. The name tf.constant comes from the symbolic APIs (like tf.data or keras functional models) where the value is embedded in a Const node in the tf.Graph. tf.constant is useful for asserting that the value can be embedded that way.

If the argument dtype is not specified, then the type is inferred from the type of value.

>>> # Constant 1-D Tensor from a python list.
>>> tf.constant([1, 2, 3, 4, 5, 6])
<tf.Tensor: shape=(6,), dtype=int32,
numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
>>> # Or a numpy array
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> tf.constant(a)
<tf.Tensor: shape=(2, 3), dtype=int64, numpy=
array([[1, 2, 3],
[4, 5, 6]])>


If dtype is specified the resulting tensor values are cast to the requested dtype.

>>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64)
<tf.Tensor: shape=(6,), dtype=float64,
numpy=array([1., 2., 3., 4., 5., 6.])>


If shape is set, the value is reshaped to match. Scalars are expanded to fill the shape:

>>> tf.constant(0, shape=(2, 3))
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[0, 0, 0],
[0, 0, 0]], dtype=int32)>
>>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
[4, 5, 6]], dtype=int32)>


tf.constant has no effect if an eager Tensor is passed as the value; it even transmits gradients:

>>> v = tf.Variable([0.0])
>>> with tf.GradientTape() as g:
...     loss = tf.constant(v + v)
>>> g.gradient(loss, v).numpy()
array([2.], dtype=float32)


But, since tf.constant embeds the value in the tf.Graph this fails for symbolic tensors:

>>> i = tf.keras.layers.Input(shape=[None, None])
>>> t = tf.constant(i)
Traceback (most recent call last):
...
NotImplementedError: ...


tf.constant will _always_ create CPU (host) tensors. In order to create tensors on other devices, use tf.identity. (If the value is an eager Tensor, however, the tensor will be returned unmodified as mentioned above.)

Related Ops:

• tf.convert_to_tensor is similar but:
  • It has no shape argument.
  • Symbolic tensors are allowed to pass through.

>>> i = tf.keras.layers.Input(shape=[None, None])
>>> t = tf.convert_to_tensor(i)

• tf.fill differs in a few ways:
  • tf.constant supports arbitrary constants, not just uniform scalar Tensors like tf.fill.
  • tf.fill creates an Op in the graph that is expanded at runtime, so it can efficiently represent large tensors.
  • Since tf.fill does not embed the value, it can produce dynamically sized outputs.
param value: A constant value (or list) of output type dtype.
param dtype: The type of the elements of the resulting tensor.
param shape: Optional dimensions of resulting tensor.
param name: Optional name for the tensor.
returns: A Constant Tensor.
raises: TypeError – if shape is incorrectly specified or unsupported. ValueError – if called on a symbolic tensor.

Usage of tensorflow.float32

Represents the type of the elements in a Tensor.

The following DType objects are defined:

• tf.float16: 16-bit half-precision floating-point.
• tf.float32: 32-bit single-precision floating-point.
• tf.float64: 64-bit double-precision floating-point.
• tf.bfloat16: 16-bit truncated floating-point.
• tf.complex64: 64-bit single-precision complex.
• tf.complex128: 128-bit double-precision complex.
• tf.int8: 8-bit signed integer.
• tf.uint8: 8-bit unsigned integer.
• tf.uint16: 16-bit unsigned integer.
• tf.uint32: 32-bit unsigned integer.
• tf.uint64: 64-bit unsigned integer.
• tf.int16: 16-bit signed integer.
• tf.int32: 32-bit signed integer.
• tf.int64: 64-bit signed integer.
• tf.bool: Boolean.
• tf.string: String.
• tf.qint8: Quantized 8-bit signed integer.
• tf.quint8: Quantized 8-bit unsigned integer.
• tf.qint16: Quantized 16-bit signed integer.
• tf.quint16: Quantized 16-bit unsigned integer.
• tf.qint32: Quantized 32-bit signed integer.
• tf.resource: Handle to a mutable resource.
• tf.variant: Values of arbitrary types.

The tf.as_dtype() function converts numpy types and string type names to a DType object.

## Defining the Model Structure

# Usage in older versions of TensorFlow
>>> y_pred = tf.add(tf.mul(x_input, weight_matrix), b_matrix)
# Usage in TensorFlow 2.2
>>> y_pred = tf.add(tf.multiply(x_input, weight_matrix), b_matrix)
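As a sanity check, the computation above can be sketched in NumPy; NumPy stands in for TensorFlow here, and the weights and inputs are made-up toy values. Note that tf.multiply is element-wise multiplication, so a fully-connected layer would use tf.matmul instead:

```python
import numpy as np

# Element-wise model y_pred = x_input * weight_matrix + b_matrix,
# mirroring tf.add(tf.multiply(...)) above.
x_input = np.array([[1.0, 2.0],
                    [3.0, 4.0]])
weight_matrix = np.array([0.5, -1.0])   # broadcast across the batch dimension
b_matrix = np.array([0.1, 0.2])

y_pred = x_input * weight_matrix + b_matrix
print(y_pred)
```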


Usage of tensorflow.add

Returns x + y element-wise.

param x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
param y: A Tensor. Must have the same type as x.
param name: A name for the operation (optional).
returns: A Tensor. Has the same type as x.

Usage of tensorflow.multiply

Returns an element-wise x * y.

For example:

>>> x = tf.constant(([1, 2, 3, 4]))
>>> tf.math.multiply(x, x)
<tf.Tensor: shape=(4,), dtype=..., numpy=array([ 1,  4,  9, 16], dtype=int32)>


Since tf.math.multiply will convert its arguments to Tensors, you can also pass in non-Tensor arguments:

>>> tf.math.multiply(7,6)
<tf.Tensor: shape=(), dtype=int32, numpy=42>


If x.shape is not the same as y.shape, they will be broadcast to a compatible shape. (More about broadcasting [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)

For example:

>>> x = tf.ones([1, 2]);
>>> y = tf.ones([2, 1]);
>>> x * y  # Taking advantage of operator overriding
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
[1., 1.]], dtype=float32)>

param x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
param y: A Tensor. Must have the same type as x.
param name: A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

raises: * InvalidArgumentError – When x and y have incompatible shapes or types.

## Declaring the Loss Function

>>> loss = tf.reduce_mean(tf.square(y_actual - y_pred))
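
This is the L2 (mean squared error) loss; the arithmetic can be sketched in NumPy, which stands in for TensorFlow here with made-up toy values:

```python
import numpy as np

# Mean squared error, mirroring tf.reduce_mean(tf.square(y_actual - y_pred)).
y_actual = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.5, 2.0, 2.0])

# Squared errors are [0.25, 0.0, 1.0]; their mean is 1.25 / 3.
loss = np.mean(np.square(y_actual - y_pred))
print(loss)
```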


Usage of tensorflow.reduce_mean

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis by computing the mean of elements across the dimensions in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

For example:

>>> x = tf.constant([[1., 1.], [2., 2.]])
>>> tf.reduce_mean(x)
<tf.Tensor: shape=(), dtype=float32, numpy=1.5>
>>> tf.reduce_mean(x, 0)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.5, 1.5], dtype=float32)>
>>> tf.reduce_mean(x, 1)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)>

param input_tensor: The tensor to reduce. Should have numeric type.
param axis: The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
param keepdims: If true, retains reduced dimensions with length 1.
param name: A name for the operation (optional).
returns: The reduced tensor.

@compatibility(numpy) Equivalent to np.mean

Please note that np.mean has a dtype parameter that could be used to specify the output type. By default this is dtype=float64. On the other hand, tf.reduce_mean has an aggressive type inference from input_tensor, for example:

>>> x = tf.constant([1, 0, 1, 0])
>>> tf.reduce_mean(x)
<tf.Tensor: shape=(), dtype=int32, numpy=0>
>>> y = tf.constant([1., 0., 1., 0.])
>>> tf.reduce_mean(y)
<tf.Tensor: shape=(), dtype=float32, numpy=0.5>


@end_compatibility
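The NumPy half of this comparison can be verified directly; unlike tf.reduce_mean, np.mean promotes integer input to float64 by default:

```python
import numpy as np

x = np.array([1, 0, 1, 0])

# np.mean promotes the int input to float64, so the result is 0.5,
# whereas tf.reduce_mean on the same int32 input yields 0 (see above).
m = np.mean(x)
print(m, m.dtype)
```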

Usage of tensorflow.square

Computes square of x element-wise.

I.e., $$y = x * x = x^2$$.

>>> tf.math.square([-2., 0., 3.])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4., 0., 9.], dtype=float32)>

param x: A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
param name: A name for the operation (optional).
returns: A Tensor. Has the same type as x. If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.square(x.values, …), x.dense_shape).

## Initializing and Training the Model

>>> with tf.Session(graph=graph) as session:
...     ...
...     session.run(...)


>>> session = tf.Session(graph=graph)
>>> session.run(...)


# Did You Know?

- Artificial intelligence: a branch of computer science that aims to give computers human-level intelligence. There are many ways to pursue this goal, including machine learning and deep learning.

- Machine learning: a family of related techniques used to train computers to perform specific tasks.

- Neural network: a machine learning architecture inspired by the network of neurons in the human brain. Neural networks are the fundamental concept behind deep learning.

- Deep learning: a branch of machine learning that uses multi-layer neural networks to achieve its goals. The terms "machine learning" and "deep learning" are often used interchangeably.