Documentation Status | MIT License | Python version | Huawei Cloud | TensorFlow · September 19, 2020


Copyright © Wei MEI, MLMS™. All rights reserved.

TensorFlow Machine Learning Cookbook (version: 0.1.0)

TensorFlow (TF):
TensorFlow became an open-source project in 2015, and it has since become the most-starred machine learning library on GitHub. TensorFlow's popularity is largely due to its support for computational graphs, automatic differentiation, and customizability. Thanks to these features, TensorFlow is a powerful and flexible tool for solving a wide range of machine learning problems.

This tutorial presents many machine learning algorithms, shows how to apply them to real situations, and explains how to interpret the results.

Important

  • Chapter 1: Getting Started, introduces the main TensorFlow objects and concepts. We cover tensors, variables, and placeholders, and show how to work with matrices and other mathematical operations in TensorFlow. The chapter ends with how to access data sources.
  • Chapter 2: The TensorFlow Way, shows how to connect all the algorithm components from Chapter 1 into a computational graph, in multiple ways, to create a simple classifier. Along the way we introduce computational graphs, loss functions, back propagation, and training on data.
  • Chapter 3: Linear Regression, focuses on using TensorFlow to explore various linear regression techniques, such as Deming regression, lasso, ridge, elastic net, and logistic regression. We show how to implement each of them in a computational graph.
  • Chapter 4: Support Vector Machines, introduces support vector machines (SVMs) and shows how to use TensorFlow to implement linear SVMs, non-linear SVMs, and multi-class SVMs.
  • Chapter 5: Nearest Neighbor Methods, shows how to use nearest neighbor techniques with numerical metrics, text metrics, and scaled distance functions. We use nearest neighbor techniques for record matching of addresses and for classifying handwritten digits from the MNIST database.
  • Chapter 6: Neural Networks, covers how to implement neural networks in TensorFlow, starting with the concepts of operational gates and activation functions. We then show a shallow neural network and demonstrate how to build various kinds of layers. The chapter ends with teaching TensorFlow to play tic-tac-toe via a neural network.
  • Chapter 7: Natural Language Processing, illustrates various text-processing techniques with TensorFlow. We show how to use the Bag of Words (BoW) and TF-IDF (Term Frequency-Inverse Document Frequency) models for text processing. We then introduce neural network text representations with the CBOW (Continuous Bag of Words) and Skip-Gram models, and apply these techniques in Word2Vec and Doc2Vec for making real-world predictions.
  • Chapter 8: Convolutional Neural Networks, expands our knowledge of neural networks by applying them to images with convolutional neural networks (CNNs). We explain how to build a simple CNN for MNIST digit recognition and extend it to color images on the CIFAR-10 task. We also show how to extend previously trained image-recognition models to custom tasks. The chapter ends with explanations of the stylenet/neural style and DeepDream algorithms in TensorFlow.
  • Chapter 9: Recurrent Neural Networks, shows how to implement recurrent neural networks (RNNs) in TensorFlow. We show how to do spam text prediction, and then extend an RNN model to generate text in the style of Shakespeare. We also train a sequence-to-sequence model for German-to-English translation. The chapter ends with a demonstration of Siamese RNNs for record matching of addresses.
  • Chapter 10: Taking TensorFlow to Production, covers how to bring TensorFlow into production environments, how to take advantage of multiple processing devices (such as GPUs), and how to distribute TensorFlow across multiple machines.
  • Chapter 11: More with TensorFlow, shows the versatility of TensorFlow by illustrating how to run k-means and genetic algorithms, and how to solve systems of ordinary differential equations. We also explore the many uses of TensorBoard and how to visualize computational graph metrics.

Contents

Getting Started with TensorFlow

License

For the MIT license, please see MIT LICENSE.

Introduction to the TensorFlow Module

Top-level module of TensorFlow. By convention, we refer to this module as tf instead of tensorflow, following the common practice of importing TensorFlow via the command import tensorflow as tf.

The primary function of this module is to import all of the public TensorFlow interfaces into a single place. The interfaces themselves are located in sub-modules, as described below.

Note that the file __init__.py in the TensorFlow source code tree is actually only a placeholder to enable test cases to run. The TensorFlow build replaces this file with a file generated from [api_template.__init__.py](https://www.github.com/tensorflow/tensorflow/blob/master/tensorflow/api_template.__init__.py).
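
A quick sanity check of this convention (a minimal sketch, assuming TensorFlow 2.x is installed):

```python
import tensorflow as tf

# The top-level module re-exports the public API from its sub-modules,
# so everything is reachable through the single tf namespace.
print(tf.__version__)
print(tf.constant([1.0, 2.0]) + 1.0)
```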

class tensorflow.AggregationMethod

Bases: object

A class listing aggregation methods used to combine gradients.

Computing partial derivatives can require aggregating gradient contributions. This class lists the various methods that can be used to combine gradients in the graph.

The following aggregation methods are part of the stable API for aggregating gradients:

  • ADD_N: All of the gradient terms are summed as part of one operation using the “AddN” op (see tf.add_n). This method has the property that all gradients must be ready and buffered separately in memory before any aggregation is performed.
  • DEFAULT: The system-chosen default aggregation method.

The following aggregation methods are experimental and may not be supported in future releases:

  • EXPERIMENTAL_TREE: Gradient terms are summed in pairs using the “AddN” op. This method of summing gradients may reduce performance, but it can improve memory utilization because the gradients can be released earlier.
ADD_N = 0
DEFAULT = 0
EXPERIMENTAL_ACCUMULATE_N = 2
EXPERIMENTAL_TREE = 1
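
For illustration, a sketch of how these constants are consumed: tf.gradients (graph mode only in TF 2.x) accepts an aggregation_method argument. The placeholder setup below is illustrative, not from this document.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

x = tf.placeholder(tf.float32, shape=[])
# x feeds several ops, so its gradient terms must be aggregated.
y = x * x + 2.0 * x

grads = tf.gradients(
    y, [x],
    aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)
```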
tensorflow.Assert(condition, data, summarize=None, name=None)

Asserts that the given condition is true.

If condition evaluates to false, print the list of tensors in data. summarize determines how many entries of the tensors to print.

NOTE: In graph mode, to ensure that Assert executes, one usually attaches a dependency:

```python
# Ensure maximum element of x is smaller or equal to 1
assert_op = tf.Assert(tf.less_equal(tf.reduce_max(x), 1.), [x])
with tf.control_dependencies([assert_op]):
    ...  # code using x
```

Parameters:
  • condition – The condition to evaluate.
  • data – The tensors to print out when condition is false.
  • summarize – Print this many entries of each tensor.
  • name – A name for this operation (optional).
Returns:

An Operation that, when executed, raises a tf.errors.InvalidArgumentError if condition is not true. In eager mode, this returns None.

Return type:

assert_op

Raises:
  • tf.errors.InvalidArgumentError – if condition is not true (in eager mode the error is raised immediately).

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
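
A minimal eager-mode sketch (the tensors here are illustrative):

```python
import tensorflow as tf

x = tf.constant([0.2, 0.8])
# Passes silently (returns None in eager mode) ...
tf.Assert(tf.less_equal(tf.reduce_max(x), 1.0), [x])
# ... while a false condition raises tf.errors.InvalidArgumentError:
# tf.Assert(tf.less(tf.reduce_max(x), 0.5), [x])
```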

class tensorflow.CriticalSection(name=None, shared_name=None, critical_section_def=None, import_scope=None)

Bases: object

Critical section.

A CriticalSection object is a resource in the graph which executes subgraphs in serial order. A common example of a subgraph one may wish to run exclusively is the one given by the following function:

```python
v = resource_variable_ops.ResourceVariable(0.0, name="v")

def count():
    value = v.read_value()
    with tf.control_dependencies([value]):
        with tf.control_dependencies([v.assign_add(1)]):
            return tf.identity(value)
```

Here, a snapshot of v is captured in value; and then v is updated. The snapshot value is returned.

If multiple workers or threads all execute count in parallel, there is no guarantee that access to the variable v is atomic at any point within any thread’s calculation of count. In fact, even implementing an atomic counter that guarantees that the user will see each value 0, 1, …, is currently impossible.

The solution is to ensure any access to the underlying resource v is only processed through a critical section:

```python
cs = CriticalSection()
f1 = cs.execute(count)
f2 = cs.execute(count)
output = f1 + f2
session.run(output)
```

The functions f1 and f2 will be executed serially, and updates to v will be atomic.

NOTES

All resource objects, including the critical section and any captured variables of functions executed on that critical section, will be colocated to the same device (host and cpu/gpu).

When using multiple critical sections on the same resources, there is no guarantee of exclusive access to those resources. This behavior is disallowed by default (but see the kwarg exclusive_resource_access).

For example, running the same function in two separate critical sections will not ensure serial execution:

```python
v = tf.compat.v1.get_variable("v", initializer=0.0, use_resource=True)

def accumulate(up):
    x = v.read_value()
    with tf.control_dependencies([x]):
        with tf.control_dependencies([v.assign_add(up)]):
            return tf.identity(x)

ex1 = CriticalSection().execute(
    accumulate, 1.0, exclusive_resource_access=False)
ex2 = CriticalSection().execute(
    accumulate, 1.0, exclusive_resource_access=False)

bad_sum = ex1 + ex2
sess.run(v.initializer)
sess.run(bad_sum)  # May return 0.0
```

Creates a critical section.

execute(fn, exclusive_resource_access=True, name=None)

Execute function fn() inside the critical section.

fn should not accept any arguments. To add extra arguments when calling fn in the critical section, create a lambda:

```python
critical_section.execute(lambda: fn(*my_args, **my_kwargs))
```

Parameters:
  • fn – The function to execute. Must return at least one tensor.
  • exclusive_resource_access – Whether the resources required by fn should be exclusive to this CriticalSection. Default: True. You may want to set this to False if you will be accessing a resource in read-only mode in two different CriticalSections.
  • name – The name to use when creating the execute operation.
Returns:

The tensors returned from fn().

Raises:
  • ValueError – If fn attempts to lock this CriticalSection in any nested or lazy way that may cause a deadlock.
  • ValueError – If exclusive_resource_access == True and another CriticalSection has an execution requesting the same resources as fn. Note, even if exclusive_resource_access is True, if another execution in another CriticalSection was created without exclusive_resource_access=True, a ValueError will be raised.
name
class tensorflow.DType(self: tensorflow.python._dtypes.DType, arg0: object) → None

Bases: tensorflow.python._dtypes.DType

Represents the type of the elements in a Tensor.

The following DType objects are defined:

  • tf.float16: 16-bit half-precision floating-point.
  • tf.float32: 32-bit single-precision floating-point.
  • tf.float64: 64-bit double-precision floating-point.
  • tf.bfloat16: 16-bit truncated floating-point.
  • tf.complex64: 64-bit single-precision complex.
  • tf.complex128: 128-bit double-precision complex.
  • tf.int8: 8-bit signed integer.
  • tf.uint8: 8-bit unsigned integer.
  • tf.uint16: 16-bit unsigned integer.
  • tf.uint32: 32-bit unsigned integer.
  • tf.uint64: 64-bit unsigned integer.
  • tf.int16: 16-bit signed integer.
  • tf.int32: 32-bit signed integer.
  • tf.int64: 64-bit signed integer.
  • tf.bool: Boolean.
  • tf.string: String.
  • tf.qint8: Quantized 8-bit signed integer.
  • tf.quint8: Quantized 8-bit unsigned integer.
  • tf.qint16: Quantized 16-bit signed integer.
  • tf.quint16: Quantized 16-bit unsigned integer.
  • tf.qint32: Quantized 32-bit signed integer.
  • tf.resource: Handle to a mutable resource.
  • tf.variant: Values of arbitrary types.

The tf.as_dtype() function converts numpy types and string type names to a DType object.
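
For example, a small sketch of these conversions:

```python
import numpy as np
import tensorflow as tf

assert tf.as_dtype("float32") == tf.float32  # from a string type name
assert tf.as_dtype(np.int64) == tf.int64     # from a numpy type
print(tf.float32.as_numpy_dtype)             # <class 'numpy.float32'>
```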

as_numpy_dtype

Returns a Python type object based on this DType.

base_dtype

Returns a non-reference DType based on this DType.

is_compatible_with(other)

Returns True if the other DType will be converted to this DType.

The conversion rules are as follows:

```python
DType(T).is_compatible_with(DType(T)) == True
```

Parameters: other – A DType (or object that may be converted to a DType).
Returns: True if a Tensor of the other DType will be implicitly converted to this DType.
limits

Return intensity limits, i.e. the (min, max) tuple of the dtype.

Parameters: clip_negative – bool, optional. If True, clip the negative range (i.e. return 0 for min intensity) even if the image dtype allows negative values.
Returns: (min, max) – tuple of lower and upper intensity limits.
max

Returns the maximum representable value in this data type.

Raises:TypeError – if this is a non-numeric, unordered, or quantized type.
min

Returns the minimum representable value in this data type.

Raises:TypeError – if this is a non-numeric, unordered, or quantized type.
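
For example (the bounds follow from the integer and IEEE half-precision definitions):

```python
import tensorflow as tf

print(tf.int8.min, tf.int8.max)  # -128 127
print(tf.float16.max)            # 65504.0
# tf.string.min                  # would raise TypeError: non-numeric type
```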
real_dtype

Returns the DType corresponding to this DType’s real part.

tensorflow.DeviceSpec

Alias of tensorflow.python.framework.device_spec.DeviceSpecV2.

class tensorflow.GradientTape(persistent=False, watch_accessed_variables=True)

Bases: object

Record operations for automatic differentiation.

Operations are recorded if they are executed within this context manager and at least one of their inputs is being “watched”.

Trainable variables (created by tf.Variable or tf.compat.v1.get_variable, where trainable=True is default in both cases) are automatically watched. Tensors can be manually watched by invoking the watch method on this context manager.

For example, consider the function y = x * x. The gradient at x = 3.0 can be computed as:

```python
x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    y = x * x
dy_dx = g.gradient(y, x)  # Will compute to 6.0
```

GradientTapes can be nested to compute higher-order derivatives. For example,

```python
x = tf.constant(3.0)
with tf.GradientTape() as g:
    g.watch(x)
    with tf.GradientTape() as gg:
        gg.watch(x)
        y = x * x
    dy_dx = gg.gradient(y, x)   # Will compute to 6.0
d2y_dx2 = g.gradient(dy_dx, x)  # Will compute to 2.0
```

By default, the resources held by a GradientTape are released as soon as GradientTape.gradient() method is called. To compute multiple gradients over the same computation, create a persistent gradient tape. This allows multiple calls to the gradient() method as resources are released when the tape object is garbage collected. For example:

```python
x = tf.constant(3.0)
with tf.GradientTape(persistent=True) as g:
    g.watch(x)
    y = x * x
    z = y * y
dz_dx = g.gradient(z, x)  # 108.0 (4*x^3 at x = 3)
dy_dx = g.gradient(y, x)  # 6.0
del g  # Drop the reference to the tape
```

By default GradientTape will automatically watch any trainable variables that are accessed inside the context. If you want fine grained control over which variables are watched you can disable automatic tracking by passing watch_accessed_variables=False to the tape constructor:

```python
with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(variable_a)
    y = variable_a ** 2  # Gradients will be available for variable_a.
    z = variable_b ** 3  # No gradients will be available since variable_b
                         # is not being watched.
```

Note that when using models you should ensure that your variables exist when using watch_accessed_variables=False. Otherwise it’s quite easy to make your first iteration not have any gradients:

```python
a = tf.keras.layers.Dense(32)
b = tf.keras.layers.Dense(32)

with tf.GradientTape(watch_accessed_variables=False) as tape:
    tape.watch(a.variables)  # Since a.build has not been called at this
                             # point, a.variables will return an empty list
                             # and the tape will not be watching anything.
    result = b(a(inputs))
    tape.gradient(result, a.variables)  # The result of this computation
                                        # will be a list of `None`s since
                                        # a's variables are not being
                                        # watched.
```

Note that only tensors with real or complex dtypes are differentiable.

Creates a new GradientTape.

Parameters:
  • persistent – Boolean controlling whether a persistent gradient tape is created. False by default, which means at most one call can be made to the gradient() method on this object.
  • watch_accessed_variables – Boolean controlling whether the tape will automatically watch any (trainable) variables accessed while the tape is active. Defaults to True meaning gradients can be requested from any result computed in the tape derived from reading a trainable Variable. If False users must explicitly watch any `Variable`s they want to request gradients from.
batch_jacobian(target, source, unconnected_gradients=<UnconnectedGradients.NONE: 'none'>, parallel_iterations=None, experimental_use_pfor=True)

Computes and stacks per-example jacobians.

See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the definition of a Jacobian. This function is essentially an efficient implementation of the following:

tf.stack([self.jacobian(y[i], x[i]) for i in range(x.shape[0])]).

Note that compared to GradientTape.jacobian, which computes the gradient of each output value w.r.t. each input value, this function is useful when target[i,…] is independent of source[j,…] for j != i. This assumption allows more efficient computation than GradientTape.jacobian. The output, as well as intermediate activations, are lower-dimensional and avoid the redundant zeros that the full Jacobian computation would produce under the independence assumption.

Example usage:

```python
with tf.GradientTape() as g:
    x = tf.constant([[1., 2.], [3., 4.]], dtype=tf.float32)
    g.watch(x)
    y = x * x
batch_jacobian = g.batch_jacobian(y, x)
# batch_jacobian is [[[2, 0], [0, 4]], [[6, 0], [0, 8]]]
```

Parameters:
  • target – A tensor with rank 2 or higher and with shape [b, y1, …, y_n]. target[i,…] should only depend on source[i,…].
  • source – A tensor with rank 2 or higher and with shape [b, x1, …, x_m].
  • unconnected_gradients – a value which can either hold ‘none’ or ‘zero’ and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in ‘UnconnectedGradients’ and it defaults to ‘none’.
  • parallel_iterations – A knob to control how many iterations are dispatched in parallel. This knob can be used to control the total memory usage.
  • experimental_use_pfor – If true, uses pfor for computing the Jacobian. Else uses a tf.while_loop.
Returns:

A tensor t with shape [b, y_1, …, y_n, x1, …, x_m] where t[i, …] is the jacobian of target[i, …] w.r.t. source[i, …], i.e. stacked per-example jacobians.

Raises:
  • RuntimeError – If called on a non-persistent tape with eager execution enabled and without enabling experimental_use_pfor.
  • ValueError – If vectorization of jacobian computation fails or if first dimension of target and source do not match.
gradient(target, sources, output_gradients=None, unconnected_gradients=<UnconnectedGradients.NONE: 'none'>)

Computes the gradient using operations recorded in context of this tape.

Parameters:
  • target – a list or nested structure of Tensors or Variables to be differentiated.
  • sources – a list or nested structure of Tensors or Variables. target will be differentiated against elements in sources.
  • output_gradients – a list of gradients, one for each element of target. Defaults to None.
  • unconnected_gradients – a value which can either hold ‘none’ or ‘zero’ and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in ‘UnconnectedGradients’ and it defaults to ‘none’.
Returns:

a list or nested structure of Tensors (or IndexedSlices, or None), one for each element in sources. Returned structure is the same as the structure of sources.

Raises:
  • RuntimeError – if called inside the context of the tape, or if called more than once on a non-persistent tape.
  • ValueError – if the target is a variable or if unconnected gradients is called with an unknown value.
jacobian(target, sources, unconnected_gradients=<UnconnectedGradients.NONE: 'none'>, parallel_iterations=None, experimental_use_pfor=True)

Computes the jacobian using operations recorded in context of this tape.

See [wikipedia article](http://en.wikipedia.org/wiki/jacobian_matrix_and_determinant) for the definition of a Jacobian.

Example usage:

```python
with tf.GradientTape() as g:
    x = tf.constant([1.0, 2.0])
    g.watch(x)
    y = x * x
jacobian = g.jacobian(y, x)
# jacobian value is [[2., 0.], [0., 4.]]
```

Parameters:
  • target – Tensor to be differentiated.
  • sources – a list or nested structure of Tensors or Variables. target will be differentiated against elements in sources.
  • unconnected_gradients – a value which can either hold ‘none’ or ‘zero’ and alters the value which will be returned if the target and sources are unconnected. The possible values and effects are detailed in ‘UnconnectedGradients’ and it defaults to ‘none’.
  • parallel_iterations – A knob to control how many iterations are dispatched in parallel. This knob can be used to control the total memory usage.
  • experimental_use_pfor – If true, vectorizes the jacobian computation. Else falls back to a sequential while_loop. Vectorization can sometimes fail or lead to excessive memory usage. This option can be used to disable vectorization in such cases.
Returns:

A list or nested structure of Tensors (or None), one for each element in sources. Returned structure is the same as the structure of sources. Note if any gradient is sparse (IndexedSlices), jacobian function currently makes it dense and returns a Tensor instead. This may change in the future.

Raises:
  • RuntimeError – If called on a non-persistent tape with eager execution enabled and without enabling experimental_use_pfor.
  • ValueError – If vectorization of jacobian computation fails.
reset()

Clears all information stored in this tape.

Equivalent to exiting and reentering the tape context manager with a new tape. For example, the two following code blocks are equivalent:

```python
with tf.GradientTape() as t:
    loss = loss_fn()
with tf.GradientTape() as t:
    loss += other_loss_fn()
t.gradient(loss, ...)  # Only differentiates other_loss_fn, not loss_fn

# The following is equivalent to the above
with tf.GradientTape() as t:
    loss = loss_fn()
    t.reset()
    loss += other_loss_fn()
t.gradient(loss, ...)  # Only differentiates other_loss_fn, not loss_fn
```

This is useful if you don’t want to exit the context manager for the tape, or can’t because the desired reset point is inside a control flow construct:

```python
with tf.GradientTape() as t:
    loss = ...
    if loss > k:
        t.reset()
```

stop_recording()

Temporarily stops recording operations on this tape.

Operations executed while this context manager is active will not be recorded on the tape. This is useful for reducing the memory used by tracing all computations.

For example:

```python
with tf.GradientTape(persistent=True) as t:
    loss = compute_loss(model)
    with t.stop_recording():
        # The gradient computation below is not traced, saving memory.
        grads = t.gradient(loss, model.variables)
```

Yields:None
Raises:RuntimeError – if the tape is not currently recording.
watch(tensor)

Ensures that tensor is being traced by this tape.

Parameters: tensor – a Tensor or list of Tensors.
Raises:ValueError – if it encounters something that is not a tensor.
watched_variables()

Returns variables watched by this tape in order of construction.

class tensorflow.Graph

Bases: object

A TensorFlow computation, represented as a dataflow graph.

Graphs are used by tf.functions to represent the function’s computations. Each graph contains a set of tf.Operation objects, which represent units of computation; and tf.Tensor objects, which represent the units of data that flow between operations.

### Using graphs directly (deprecated)

A tf.Graph can be constructed and used directly without a tf.function, as was required in TensorFlow 1, but this is deprecated and it is recommended to use a tf.function instead. If a graph is directly used, other deprecated TensorFlow 1 classes are also required to execute the graph, such as a tf.compat.v1.Session.

A default graph can be registered with the tf.Graph.as_default context manager. Then, operations will be added to the graph instead of being executed eagerly. For example:

```python
g = tf.Graph()
with g.as_default():
    # Define operations and tensors in g.
    c = tf.constant(30.0)
    assert c.graph is g
```

tf.compat.v1.get_default_graph() can be used to obtain the default graph.

Important note: This class is not thread-safe for graph construction. All operations should be created from a single thread, or external synchronization must be provided. Unless otherwise specified, all methods are not thread-safe.

A Graph instance supports an arbitrary number of “collections” that are identified by name. For convenience when building a large graph, collections can store groups of related objects: for example, the tf.Variable uses a collection (named tf.GraphKeys.GLOBAL_VARIABLES) for all variables that are created during the construction of a graph. The caller may define additional collections by specifying a new name.

Creates a new, empty Graph.

add_to_collection(name, value)

Stores value in the collection with the given name.

Note that collections are not sets, so it is possible to add a value to a collection several times.

Parameters:
  • name – The key for the collection. The GraphKeys class contains many standard names for collections.
  • value – The value to add to the collection.
add_to_collections(names, value)

Stores value in the collections given by names.

Note that collections are not sets, so it is possible to add a value to a collection several times. This function makes sure that duplicates in names are ignored, but it will not check for pre-existing membership of value in any of the collections in names.

names can be any iterable, but if names is a string, it is treated as a single collection name.

Parameters:
  • names – The keys for the collections to add to. The GraphKeys class contains many standard names for collections.
  • value – The value to add to the collections.
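
A short sketch of the collection API described above (graph collections are a TF1-era mechanism; the collection names here are illustrative):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    v = tf.constant(1.0)
    g.add_to_collection("my_collection", v)
    g.add_to_collections(["my_collection", "other"], v)

# Collections are not sets: v now appears twice in "my_collection".
assert len(g.get_collection("my_collection")) == 2
assert len(g.get_collection("other")) == 1
```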
as_default()

Returns a context manager that makes this Graph the default graph.

This method should be used if you want to create multiple graphs in the same process. For convenience, a global default graph is provided, and all ops will be added to this graph if you do not create a new graph explicitly.

Use this method with the with keyword to specify that ops created within the scope of a block should be added to this graph. In this case, once the scope of the with is exited, the previous default graph is set again as default. There is a stack, so it’s ok to have multiple nested levels of as_default calls.

The default graph is a property of the current thread. If you create a new thread, and wish to use the default graph in that thread, you must explicitly add a with g.as_default(): in that thread’s function.

The following code examples are equivalent:

```python
# 1. Using Graph.as_default():
g = tf.Graph()
with g.as_default():
    c = tf.constant(5.0)
    assert c.graph is g

# 2. Constructing and making default:
with tf.Graph().as_default() as g:
    c = tf.constant(5.0)
    assert c.graph is g
```

If eager execution is enabled ops created under this context manager will be added to the graph instead of executed eagerly.

Returns: A context manager for using this graph as the default graph.
as_graph_def(from_version=None, add_shapes=False)

Returns a serialized GraphDef representation of this graph.

The serialized GraphDef can be imported into another Graph (using tf.import_graph_def) or used with the [C++ Session API](../../api_docs/cc/index.md).

This method is thread-safe.

Parameters:
  • from_version – Optional. If this is set, returns a GraphDef containing only the nodes that were added to this graph since its version property had the given value.
  • add_shapes – If true, adds an “_output_shapes” list attr to each node with the inferred shapes of each of its outputs.
Returns:

A [GraphDef](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) protocol buffer.

Raises:

ValueError – If the graph_def would be too large.

as_graph_element(obj, allow_tensor=True, allow_operation=True)

Returns the object referred to by obj, as an Operation or Tensor.

This function validates that obj represents an element of this graph, and gives an informative error message if it is not.

This function is the canonical way to get/validate an object of one of the allowed types from an external argument reference in the Session API.

This method may be called concurrently from multiple threads.

Parameters:
  • obj – A Tensor, an Operation, or the name of a tensor or operation. Can also be any object with an _as_graph_element() method that returns a value of one of these types. Note: _as_graph_element will be called inside the graph’s lock and so may not modify the graph.
  • allow_tensor – If true, obj may refer to a Tensor.
  • allow_operation – If true, obj may refer to an Operation.
Returns:

The Tensor or Operation in the Graph corresponding to obj.

Raises:
  • TypeError – If obj is not a type we support attempting to convert to types.
  • ValueError – If obj is of an appropriate type but invalid. For example, an invalid string.
  • KeyError – If obj is not an object in the graph.
building_function

Returns True iff this graph represents a function.

clear_collection(name)

Clears all values in a collection.

Parameters: name – The key for the collection. The GraphKeys class contains many standard names for collections.
collections

Returns the names of the collections known to this graph.

colocate_with(op, ignore_existing=False)

Returns a context manager that specifies an op to colocate with.

Note: this function is not for public use, only for internal libraries.

For example:

```python
a = tf.Variable([1.0])
with g.colocate_with(a):
    b = tf.constant(1.0)
    c = tf.add(a, b)
```

b and c will always be colocated with a, no matter where a is eventually placed.

NOTE Using a colocation scope resets any existing device constraints.

If op is None then ignore_existing must be True and the new scope resets all colocation and device constraints.

Parameters:
  • op – The op to colocate all created ops with, or None.
  • ignore_existing – If true, only applies colocation of this op within the context, rather than applying all colocation properties on the stack. If op is None, this value must be True.
Raises:

ValueError – if op is None but ignore_existing is False.

Yields:

A context manager that specifies the op with which to colocate newly created ops.

container(container_name)

Returns a context manager that specifies the resource container to use.

Stateful operations, such as variables and queues, can maintain their states on devices so that they can be shared by multiple processes. A resource container is a string name under which these stateful operations are tracked. These resources can be released or cleared with tf.Session.reset().

For example:

```python
with g.container('experiment0'):
    # All stateful Operations constructed in this context will be placed
    # in resource container "experiment0".
    v1 = tf.Variable([1.0])
    v2 = tf.Variable([2.0])
    with g.container("experiment1"):
        # All stateful Operations constructed in this context will be
        # placed in resource container "experiment1".
        v3 = tf.Variable([3.0])
        q1 = tf.queue.FIFOQueue(10, tf.float32)
    # All stateful Operations constructed in this context will be
    # created in the "experiment0" container.
    v4 = tf.Variable([4.0])
    q1 = tf.queue.FIFOQueue(20, tf.float32)
    with g.container(""):
        # All stateful Operations constructed in this context will be
        # placed in the default resource container.
        v5 = tf.Variable([5.0])
        q3 = tf.queue.FIFOQueue(30, tf.float32)

# Resets container "experiment0", after which the state of v1, v2, v4, q1
# will become undefined (such as uninitialized).
tf.Session.reset(target, ["experiment0"])
```

Parameters: container_name – container name string.
Returns: A context manager for defining resource containers for stateful ops; it yields the container name.
control_dependencies(control_inputs)

Returns a context manager that specifies control dependencies.

Use with the with keyword to specify that all operations constructed within the context should have control dependencies on control_inputs. For example:

```python
with g.control_dependencies([a, b, c]):
    # d and e will only run after a, b, and c have executed.
    d = ...
    e = ...
```

Multiple calls to control_dependencies() can be nested, and in that case a new Operation will have control dependencies on the union of control_inputs from all active contexts.

```python
with g.control_dependencies([a, b]):
    # Ops constructed here run after a and b.
    with g.control_dependencies([c, d]):
        # Ops constructed here run after a, b, c, and d.
        ...
```

You can pass None to clear the control dependencies:

```python
with g.control_dependencies([a, b]):
    # Ops constructed here run after a and b.
    with g.control_dependencies(None):
        # Ops constructed here run normally, not waiting for either a or b.
        with g.control_dependencies([c, d]):
            # Ops constructed here run after c and d, also not waiting
            # for either a or b.
            ...
```

N.B. The control dependencies context applies only to ops that are constructed within the context. Merely using an op or tensor in the context does not add a control dependency. The following example illustrates this point:

```python
# WRONG
def my_func(pred, tensor):
    t = tf.matmul(tensor, tensor)
    with tf.control_dependencies([pred]):
        # The matmul op is created outside the context, so no control
        # dependency will be added.
        return t

# RIGHT
def my_func(pred, tensor):
    with tf.control_dependencies([pred]):
        # The matmul op is created in the context, so a control dependency
        # will be added.
        return tf.matmul(tensor, tensor)
```

Also note that though execution of ops created under this scope will trigger execution of the dependencies, the ops created under this scope might still be pruned from a normal tensorflow graph. For example, in the following snippet of code the dependencies are never executed:

```python
loss = model.loss()
with tf.control_dependencies(dependencies):
    loss = loss + tf.constant(1)  # note: dependencies ignored in the
                                  # backward pass
return tf.gradients(loss, model.variables)
```

This is because evaluating the gradient graph does not require evaluating the constant(1) op created in the forward pass.

Parameters: control_inputs – A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies.
Returns: A context manager that specifies control dependencies for all operations constructed within the context.
Raises: TypeError – If control_inputs is not a list of Operation or Tensor objects.
create_op(op_type, inputs, dtypes=None, input_types=None, name=None, attrs=None, op_def=None, compute_shapes=True, compute_device=True)

Creates an Operation in this graph. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: (compute_shapes). They will be removed in a future version. Instructions for updating: Shapes are always computed; don’t use the compute_shapes as it has no effect.

This is a low-level interface for creating an Operation. Most programs will not call this method directly, and instead use the Python op constructors, such as tf.constant(), which add ops to the default graph.

Parameters:
  • op_type – The Operation type to create. This corresponds to the OpDef.name field for the proto that defines the operation.
  • inputs – A list of Tensor objects that will be inputs to the Operation.
  • dtypes – (Optional) A list of DType objects that will be the types of the tensors that the operation produces.
  • input_types – (Optional.) A list of DTypes that will be the types of the tensors that the operation consumes. By default, uses the base DType of each input in inputs. Operations that expect reference-typed inputs must specify input_types explicitly.
  • name – (Optional.) A string name for the operation. If not specified, a name is generated based on op_type.
  • attrs – (Optional.) A dictionary where the key is the attribute name (a string) and the value is the respective attr attribute of the NodeDef proto that will represent the operation (an AttrValue proto).
  • op_def – (Optional.) The OpDef proto that describes the op_type that the operation will have.
  • compute_shapes – (Optional.) Deprecated. Has no effect (shapes are always computed).
  • compute_device – (Optional.) If True, device functions will be executed to compute the device property of the Operation.
Raises:
  • TypeError – if any of the inputs is not a Tensor.
  • ValueError – if colocation conflicts with existing device assignment.
Returns:

An Operation object.

device(device_name_or_function)

Returns a context manager that specifies the default device to use.

The device_name_or_function argument may either be a device name string, a device function, or None:

  • If it is a device name string, all operations constructed in this context will be assigned to the device with that name, unless overridden by a nested device() context.
  • If it is a function, it will be treated as a function from Operation objects to device name strings, and invoked each time a new Operation is created. The Operation will be assigned to the device with the returned name.
  • If it is None, all device() invocations from the enclosing context will be ignored.

For information about the valid syntax of device name strings, see the documentation in [DeviceNameUtils](https://www.tensorflow.org/code/tensorflow/core/util/device_name_utils.h).

For example:

```python
with g.device('/device:GPU:0'):
    # All operations constructed in this context will be placed
    # on GPU 0.
    with g.device(None):
        # All operations constructed in this context will have no
        # assigned device.
        ...

# Defines a function from Operation to device string.
def matmul_on_gpu(n):
    if n.type == "MatMul":
        return "/device:GPU:0"
    else:
        return "/cpu:0"

with g.device(matmul_on_gpu):
    # All operations of type "MatMul" constructed in this context
    # will be placed on GPU 0; all other operations will be placed
    # on CPU 0.
    ...
```

N.B. The device scope may be overridden by op wrappers or other library code. For example, a variable assignment op v.assign() must be colocated with the tf.Variable v, and incompatible device scopes will be ignored.

Parameters: device_name_or_function – The device name or function to use in the context.
Yields:A context manager that specifies the default device to use for newly created ops.
Raises:RuntimeError – If device scopes are not properly nested.
finalize()

Finalizes this graph, making it read-only.

After calling g.finalize(), no new operations can be added to g. This method is used to ensure that no operations are added to a graph when it is shared between multiple threads, for example when using a tf.compat.v1.train.QueueRunner.
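
A minimal sketch of finalizing a graph:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(1.0)
g.finalize()
assert g.finalized

# Adding new ops now fails:
# with g.as_default():
#     tf.constant(2.0)  # raises RuntimeError: graph is finalized
```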

finalized

True if this graph has been finalized.

get_all_collection_keys()

Returns a list of collections used in this graph.

get_collection(name, scope=None)

Returns a list of values in the collection with the given name.

This is different from get_collection_ref() which always returns the actual collection list if it exists in that it returns a new list each time it is called.

Parameters:
  • name – The key for the collection. For example, the GraphKeys class contains many standard names for collections.
  • scope – (Optional.) A string. If supplied, the resulting list is filtered to include only items whose name attribute matches scope using re.match. Items without a name attribute are never returned if a scope is supplied. The choice of re.match means that a scope without special tokens filters by prefix.
Returns:

The list of values in the collection with the given name, or an empty list if no value has been added to that collection. The list contains the values in the order under which they were collected.

get_collection_ref(name)

Returns a list of values in the collection with the given name.

If the collection exists, this returns the list itself, which can be modified in place to change the collection. If the collection does not exist, it is created as an empty list and the list is returned.

This is different from get_collection() which always returns a copy of the collection list if it exists and never creates an empty collection.

Parameters: name – The key for the collection. For example, the GraphKeys class contains many standard names for collections.
Returns: The list of values in the collection with the given name, or an empty list if no value has been added to that collection.
get_name_scope()

Returns the current name scope.

For example:

```python
with tf.name_scope('scope1'):
    with tf.name_scope('scope2'):
        print(tf.compat.v1.get_default_graph().get_name_scope())
```

would print the string scope1/scope2.

Returns: A string representing the current name scope.
get_operation_by_name(name)

Returns the Operation with the given name.

This method may be called concurrently from multiple threads.

Parameters:

name – The name of the Operation to return.

Returns:

The Operation with the given name.

Raises:
  • TypeError – If name is not a string.
  • KeyError – If name does not correspond to an operation in this graph.
get_operations()

Return the list of operations in the graph.

You can modify the operations in place, but modifications to the list such as inserts/delete have no effect on the list of operations known to the graph.

This method may be called concurrently from multiple threads.

Returns: A list of Operations.
get_tensor_by_name(name)

Returns the Tensor with the given name.

This method may be called concurrently from multiple threads.

Parameters:

name – The name of the Tensor to return.

Returns:

The Tensor with the given name.

Raises:
  • TypeError – If name is not a string.
  • KeyError – If name does not correspond to a tensor in this graph.
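
A combined sketch of both lookup methods; by convention a tensor's name is the producing operation's name plus an output index (":0" for the first output):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    c = tf.constant(5.0, name="c")

op = g.get_operation_by_name("c")  # the Operation itself
t = g.get_tensor_by_name("c:0")    # its first output tensor
assert t is op.outputs[0]
```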
gradient_override_map(op_type_map)

EXPERIMENTAL: A context manager for overriding gradient functions.

This context manager can be used to override the gradient function that will be used for ops within the scope of the context.

For example:

```python
@tf.RegisterGradient("CustomSquare")
def _custom_square_grad(op, grad):
    ...

with tf.Graph().as_default() as g:
    c = tf.constant(5.0)
    s_1 = tf.square(c)  # Uses the default gradient for tf.square.
    with g.gradient_override_map({"Square": "CustomSquare"}):
        s_2 = tf.square(c)  # Uses _custom_square_grad to compute the
                            # gradient of s_2.
```

Parameters: op_type_map – A dictionary mapping op type strings to alternative op type strings.
Returns: A context manager that sets the alternative op type to be used for one or more ops created in that context.
Raises: TypeError – If op_type_map is not a dictionary mapping strings to strings.
graph_def_versions

The GraphDef version information of this graph.

For details on the meaning of each version, see [GraphDef](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto).

Returns: A VersionDef.
is_feedable(tensor)

Returns True if and only if tensor is feedable.

is_fetchable(tensor_or_op)

Returns True if and only if tensor_or_op is fetchable.

name_scope(name)

Returns a context manager that creates hierarchical names for operations.

A graph maintains a stack of name scopes. A with name_scope(…): statement pushes a new name onto the stack for the lifetime of the context.

The name argument will be interpreted as follows:

  • A string (not ending with ‘/’) will create a new name scope, in which name is appended to the prefix of all operations created in the context. If name has been used before, it will be made unique by calling self.unique_name(name).
  • A scope previously captured from a with g.name_scope(…) as scope: statement will be treated as an “absolute” name scope, which makes it possible to re-enter existing scopes.
  • A value of None or the empty string will reset the current name scope to the top-level (empty) name scope.

For example:

```python
with tf.Graph().as_default() as g:
    c = tf.constant(5.0, name="c")
    assert c.op.name == "c"
    c_1 = tf.constant(6.0, name="c")
    assert c_1.op.name == "c_1"

    # Creates a scope called "nested"
    with g.name_scope("nested") as scope:
        nested_c = tf.constant(10.0, name="c")
        assert nested_c.op.name == "nested/c"

        # Creates a nested scope called "inner".
        with g.name_scope("inner"):
            nested_inner_c = tf.constant(20.0, name="c")
            assert nested_inner_c.op.name == "nested/inner/c"

        # Create a nested scope called "inner_1".
        with g.name_scope("inner"):
            nested_inner_1_c = tf.constant(30.0, name="c")
            assert nested_inner_1_c.op.name == "nested/inner_1/c"

    # Treats `scope` as an absolute name scope, and
    # switches to the "nested/" scope.
    with g.name_scope(scope):
        nested_d = tf.constant(40.0, name="d")
        assert nested_d.op.name == "nested/d"

        with g.name_scope(""):
            e = tf.constant(50.0, name="e")
            assert e.op.name == "e"
```

The name of the scope itself can be captured by with g.name_scope(…) as scope:, which stores the name of the scope in the variable scope. This value can be used to name an operation that represents the overall result of executing the ops in a scope. For example:

```python
inputs = tf.constant(...)
with g.name_scope('my_layer') as scope:
    weights = tf.Variable(..., name="weights")
    biases = tf.Variable(..., name="biases")
    affine = tf.matmul(inputs, weights) + biases
    output = tf.nn.relu(affine, name=scope)
```

NOTE: This constructor validates the given name. Valid scope names match one of the following regular expressions:

[A-Za-z0-9.][A-Za-z0-9_.-/]* (for scopes at the root)
[A-Za-z0-9_.-/]* (for other scopes)
Parameters: name – A name for the scope.
Returns: A context manager that installs name as a new name scope.
Raises: ValueError – If name is not a valid scope name, according to the rules above.
prevent_feeding(tensor)

Marks the given tensor as unfeedable in this graph.

prevent_fetching(op)

Marks the given op as unfetchable in this graph.

seed

The graph-level random seed of this graph.

switch_to_thread_local()

Make device, colocation and dependencies stacks thread-local.

Device, colocation and dependencies stacks are not thread-local by default. If multiple threads access them, then the state is shared. This means that one thread may affect the behavior of another thread.

After this method is called, the stacks become thread-local. If multiple threads access them, then the state is not shared. Each thread uses its own value; a thread doesn’t affect other threads by mutating such a stack.

The initial value for every thread’s stack is set to the current value of the stack when switch_to_thread_local() was first called.

unique_name(name, mark_as_used=True)

Return a unique operation name for name.

Note: You rarely need to call unique_name() directly. Most of the time you just need to create with g.name_scope() blocks to generate structured names.

unique_name is used to generate structured names, separated by “/”, to help identify operations when debugging a graph. Operation names are displayed in error messages reported by the TensorFlow runtime, and in various visualization tools such as TensorBoard.

If mark_as_used is set to True, which is the default, a new unique name is created and marked as in use. If it’s set to False, the unique name is returned without actually being marked as used. This is useful when the caller simply wants to know what the name to be created will be.

Parameters:
  • name – The name for an operation.
  • mark_as_used – Whether to mark this name as being used.
Returns:

A string to be passed to create_op() that will be used to name the operation being created.

version

Returns a version number that increases as ops are added to the graph.

Note that this is unrelated to the tf.Graph.graph_def_versions.

Returns: An integer version that increases as ops are added to the graph.
class tensorflow.IndexedSlices(values, indices, dense_shape=None)

Bases: tensorflow.python.framework.tensor_like._TensorLike, tensorflow.python.framework.composite_tensor.CompositeTensor

A sparse representation of a set of tensor slices at given indices.

This class is a simple wrapper for a pair of Tensor objects:

  • values: A Tensor of any dtype with shape [D0, D1, …, Dn].
  • indices: A 1-D integer Tensor with shape [D0].

An IndexedSlices is typically used to represent a subset of a larger tensor dense of shape [LARGE0, D1, .. , DN] where LARGE0 >> D0. The values in indices are the indices in the first dimension of the slices that have been extracted from the larger tensor.

The dense tensor dense represented by an IndexedSlices slices has

```python
dense[slices.indices[i], :, :, :, ...] = slices.values[i, :, :, :, ...]
```

The IndexedSlices class is used principally in the definition of gradients for operations that have sparse gradients (e.g. tf.gather).

Contrast this representation with tf.SparseTensor, which uses multi-dimensional indices and scalar values.
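
In practice, an IndexedSlices most often appears as the gradient of tf.gather; a minimal sketch:

```python
import tensorflow as tf

params = tf.Variable(tf.ones([10, 4]))
with tf.GradientTape() as tape:
    y = tf.gather(params, [0, 3, 3])

grad = tape.gradient(y, params)
# grad is a tf.IndexedSlices, not a dense Tensor:
print(grad.indices)      # [0 3 3]
print(grad.values.shape) # (3, 4)
print(grad.dense_shape)  # [10  4]
```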

Creates an IndexedSlices.

consumers()
dense_shape

A 1-D Tensor containing the shape of the corresponding dense tensor.

device

The name of the device on which values will be produced, or None.

dtype

The DType of elements in this tensor.

graph

The Graph that contains the values, indices, and shape tensors.

indices

A 1-D Tensor containing the indices of the slices.

name

The name of this IndexedSlices.

op

The Operation that produces values as an output.

shape

Gets the tf.TensorShape representing the shape of the dense tensor.

Returns: A tf.TensorShape object.
values

A Tensor containing the values of the slices.

class tensorflow.IndexedSlicesSpec(shape=None, dtype=tf.float32, indices_dtype=tf.int64, dense_shape_dtype=None, indices_shape=None)

Bases: tensorflow.python.framework.type_spec.TypeSpec

Type specification for a tf.IndexedSlices.

Constructs a type specification for a tf.IndexedSlices.

Parameters:
  • shape – The dense shape of the IndexedSlices, or None to allow any dense shape.
  • dtype – tf.DType of values in the IndexedSlices.
  • indices_dtype – tf.DType of the indices in the IndexedSlices. One of tf.int32 or tf.int64.
  • dense_shape_dtype – tf.DType of the dense_shape in the IndexedSlices. One of tf.int32, tf.int64, or None (if the IndexedSlices has no dense_shape tensor).
  • indices_shape – The shape of the indices component, which indicates how many slices are in the IndexedSlices.
value_type
class tensorflow.Module(name=None)

Bases: tensorflow.python.training.tracking.tracking.AutoTrackable

Base neural network module class.

A module is a named container for tf.Variables, other tf.Modules and functions which apply to user input. For example, a dense layer in a neural network might be implemented as a tf.Module:

>>> class Dense(tf.Module):
...   def __init__(self, in_features, out_features, name=None):
...     super(Dense, self).__init__(name=name)
...     self.w = tf.Variable(
...       tf.random.normal([in_features, out_features]), name='w')
...     self.b = tf.Variable(tf.zeros([out_features]), name='b')
...   def __call__(self, x):
...     y = tf.matmul(x, self.w) + self.b
...     return tf.nn.relu(y)

You can use the Dense layer as you would expect:

>>> d = Dense(in_features=3, out_features=2)
>>> d(tf.ones([1, 3]))
<tf.Tensor: shape=(1, 2), dtype=float32, numpy=..., dtype=float32)>

By subclassing tf.Module instead of object any tf.Variable or tf.Module instances assigned to object properties can be collected using the variables, trainable_variables or submodules property:

>>> d.variables
    (<tf.Variable 'b:0' shape=(2,) dtype=float32, numpy=...,
    dtype=float32)>,
    <tf.Variable 'w:0' shape=(3, 2) dtype=float32, numpy=..., dtype=float32)>)

Subclasses of tf.Module can also take advantage of the _flatten method which can be used to implement tracking of any other types.

All tf.Module classes have an associated tf.name_scope which can be used to group operations in TensorBoard and create hierarchies for variable names which can help with debugging. We suggest using the name scope when creating nested submodules/parameters or for forward methods whose graph you might want to inspect in TensorBoard. You can enter the name scope explicitly using with self.name_scope: or you can annotate methods (apart from __init__) with @tf.Module.with_name_scope.

```python
class MLP(tf.Module):
    def __init__(self, input_size, sizes, name=None):
        super(MLP, self).__init__(name=name)
        self.layers = []
        with self.name_scope:
            for size in sizes:
                self.layers.append(Dense(input_size=input_size, output_size=size))
                input_size = size

    @tf.Module.with_name_scope
    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x
```

name

Returns the name of this module as passed or determined in the ctor.

NOTE: This is not the same as the self.name_scope.name which includes parent module names.

name_scope

Returns a tf.name_scope instance for this class.

submodules

Sequence of all sub-modules.

Submodules are modules which are properties of this module, or found as properties of modules which are properties of this module (and so on).

>>> a = tf.Module()
>>> b = tf.Module()
>>> c = tf.Module()
>>> a.b = b
>>> b.c = c
>>> list(a.submodules) == [b, c]
True
>>> list(b.submodules) == [c]
True
>>> list(c.submodules) == []
True
Returns: A sequence of all submodules.
trainable_variables

Sequence of trainable variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns: A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
variables

Sequence of variables owned by this module and its submodules.

Note: this method uses reflection to find variables on the current instance and submodules. For performance reasons you may wish to cache the result of calling this method if you don’t expect the return value to change.

Returns: A sequence of variables for the current module (sorted by attribute name) followed by variables from all submodules recursively (breadth first).
classmethod with_name_scope(method)

Decorator to automatically enter the module name scope.

>>> class MyModule(tf.Module):
...   @tf.Module.with_name_scope
...   def __call__(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(tf.random.normal([x.shape[1], 3]))
...     return tf.matmul(x, self.w)

Using the above module would produce `tf.Variable`s and `tf.Tensor`s whose names included the module name:

>>> mod = MyModule()
>>> mod(tf.ones([1, 2]))
<tf.Tensor: shape=(1, 3), dtype=float32, numpy=..., dtype=float32)>
>>> mod.w
<tf.Variable 'my_module/Variable:0' shape=(2, 3) dtype=float32,
numpy=..., dtype=float32)>
Parameters: method – The method to wrap.
Returns: The original method wrapped such that it enters the module’s name scope.
class tensorflow.Operation(node_def, g, inputs=None, output_types=None, control_inputs=None, input_types=None, original_op=None, op_def=None)

Bases: object

Represents a graph node that performs computation on tensors.

An Operation is a node in a tf.Graph that takes zero or more Tensor objects as input, and produces zero or more Tensor objects as output. Objects of type Operation are created by calling a Python op constructor (such as tf.matmul) within a tf.function or under a tf.Graph.as_default context manager.

For example, within a tf.function, c = tf.matmul(a, b) creates an Operation of type “MatMul” that takes tensors a and b as input, and produces c as output.

If a tf.compat.v1.Session is used, an Operation of a tf.Graph can be executed by passing it to tf.Session.run. op.run() is a shortcut for calling tf.compat.v1.get_default_session().run(op).

Creates an Operation.

NOTE: This constructor validates the name of the Operation (passed as node_def.name). Valid Operation names match the following regular expression:

[A-Za-z0-9.][A-Za-z0-9_.-/]*
Parameters:
  • node_def – node_def_pb2.NodeDef. NodeDef for the Operation. Used for attributes of node_def_pb2.NodeDef, typically name, op, and device. The input attribute is irrelevant here as it will be computed when generating the model.
  • g – Graph. The parent graph.
  • inputs – list of Tensor objects. The inputs to this Operation.
  • output_types – list of DType objects. List of the types of the Tensors computed by this operation. The length of this list indicates the number of output endpoints of the Operation.
  • control_inputs – list of operations or tensors from which to have a control dependency.
  • input_types – List of DType objects representing the types of the tensors accepted by the Operation. By default uses [x.dtype.base_dtype for x in inputs]. Operations that expect reference-typed inputs must specify these explicitly.
  • original_op – Optional. Used to associate the new Operation with an existing Operation (for example, a replica with the op that was replicated).
  • op_def – Optional. The op_def_pb2.OpDef proto that describes the op type that this Operation represents.
Raises:
  • TypeError – if control inputs are not Operations or Tensors, or if node_def is not a NodeDef, or if g is not a Graph, or if inputs are not tensors, or if inputs and input_types are incompatible.
  • ValueError – if the node_def name is not valid.
colocation_groups()

Returns the list of colocation groups of the op.

control_inputs

The Operation objects on which this op has a control dependency.

Before this op is executed, TensorFlow will ensure that the operations in self.control_inputs have finished executing. This mechanism can be used to run ops sequentially for performance reasons, or to ensure that the side effects of an op are observed in the correct order.

Returns: A list of Operation objects.
device

The name of the device to which this op has been assigned, if any.

Returns: The string name of the device to which this op has been assigned, or an empty string if it has not been assigned to a device.
get_attr(name)

Returns the value of the attr of this op with the given name.

Parameters: name – The name of the attr to fetch.
Returns: The value of the attr, as a Python object.
Raises: ValueError – If this op does not have an attr with the given name.
graph

The Graph that contains this operation.

inputs

The sequence of Tensor objects representing the data inputs of this op.

name

The full name of this operation.

node_def

Returns the NodeDef representation of this operation.

Returns: A [NodeDef](https://www.tensorflow.org/code/tensorflow/core/framework/node_def.proto) protocol buffer.
op_def

Returns the OpDef proto that represents the type of this op.

Returns: An [OpDef](https://www.tensorflow.org/code/tensorflow/core/framework/op_def.proto) protocol buffer.
outputs

The list of Tensor objects representing the outputs of this op.

run(feed_dict=None, session=None)

Runs this operation in a Session.

Calling this method will execute all preceding operations that produce the inputs needed for this operation.

N.B. Before invoking Operation.run(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.

Parameters:
  • feed_dict – A dictionary that maps Tensor objects to feed values. See tf.Session.run for a description of the valid feed values.
  • session – (Optional.) The Session to be used to run this operation. If none, the default session will be used.
traceback

Returns the call stack from when this operation was constructed.

type

The type of the op (e.g. “MatMul”).

values()

DEPRECATED: Use outputs.

class tensorflow.OptionalSpec(value_structure)

Bases: tensorflow.python.framework.type_spec.TypeSpec

Represents an optional potentially containing a structured value.

static from_value(value)
value_type

The Python type for values that are compatible with this TypeSpec.

class tensorflow.RaggedTensor(values, row_splits, cached_row_lengths=None, cached_value_rowids=None, cached_nrows=None, internal=False, uniform_row_length=None)

Bases: tensorflow.python.framework.composite_tensor.CompositeTensor

Represents a ragged tensor.

A RaggedTensor is a tensor with one or more ragged dimensions, which are dimensions whose slices may have different lengths. For example, the inner (column) dimension of rt=[[3, 1, 4, 1], [], [5, 9, 2], [6], []] is ragged, since the column slices (rt[0, :], …, rt[4, :]) have different lengths. Dimensions whose slices all have the same length are called uniform dimensions. The outermost dimension of a RaggedTensor is always uniform, since it consists of a single slice (and so there is no possibility for differing slice lengths).

The total number of dimensions in a RaggedTensor is called its rank, and the number of ragged dimensions in a RaggedTensor is called its ragged-rank. A RaggedTensor’s ragged-rank is fixed at graph creation time: it can’t depend on the runtime values of `Tensor`s, and can’t vary dynamically for different session runs.

### Potentially Ragged Tensors

Many ops support both Tensors and RaggedTensors. The term “potentially ragged tensor” may be used to refer to a tensor that might be either a Tensor or a RaggedTensor. The ragged-rank of a Tensor is zero.

### Documenting RaggedTensor Shapes

When documenting the shape of a RaggedTensor, ragged dimensions can be indicated by enclosing them in parentheses. For example, the shape of a 3-D RaggedTensor that stores the fixed-size word embedding for each word in a sentence, for each sentence in a batch, could be written as [num_sentences, (num_words), embedding_size]. The parentheses around (num_words) indicate that dimension is ragged, and that the length of each element list in that dimension may vary for each item.

### Component Tensors

Internally, a RaggedTensor consists of a concatenated list of values that are partitioned into variable-length rows. In particular, each RaggedTensor consists of:

  • A values tensor, which concatenates the variable-length rows into a flattened list. For example, the values tensor for [[3, 1, 4, 1], [], [5, 9, 2], [6], []] is [3, 1, 4, 1, 5, 9, 2, 6].
  • A row_splits vector, which indicates how those flattened values are divided into rows. In particular, the values for row rt[i] are stored in the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].

Example:

>>> print(tf.RaggedTensor.from_row_splits(
...       values=[3, 1, 4, 1, 5, 9, 2, 6],
...       row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>

### Alternative Row-Partitioning Schemes

In addition to row_splits, ragged tensors provide support for four other row-partitioning schemes:

  • row_lengths: a vector with shape [nrows], which specifies the length of each row.
  • value_rowids and nrows: value_rowids is a vector with shape [nvals], corresponding one-to-one with values, which specifies each value’s row index. In particular, the row rt[row] consists of the values rt.values[j] where value_rowids[j]==row. nrows is an integer scalar that specifies the number of rows in the RaggedTensor. (nrows is used to indicate trailing empty rows.)
  • row_starts: a vector with shape [nrows], which specifies the start offset of each row. Equivalent to row_splits[:-1].
  • row_limits: a vector with shape [nrows], which specifies the stop offset of each row. Equivalent to row_splits[1:].
  • uniform_row_length: A scalar tensor, specifying the length of every row. This row-partitioning scheme may only be used if all rows have the same length.

Example: The following ragged tensors are equivalent, and all represent the nested list [[3, 1, 4, 1], [], [5, 9, 2], [6], []].

>>> values = [3, 1, 4, 1, 5, 9, 2, 6]
>>> rt1 = RaggedTensor.from_row_splits(values, row_splits=[0, 4, 4, 7, 8, 8])
>>> rt2 = RaggedTensor.from_row_lengths(values, row_lengths=[4, 0, 3, 1, 0])
>>> rt3 = RaggedTensor.from_value_rowids(
...     values, value_rowids=[0, 0, 0, 0, 2, 2, 2, 3], nrows=5)
>>> rt4 = RaggedTensor.from_row_starts(values, row_starts=[0, 4, 4, 7, 8])
>>> rt5 = RaggedTensor.from_row_limits(values, row_limits=[4, 4, 7, 8, 8])

### Multiple Ragged Dimensions

RaggedTensors with multiple ragged dimensions can be defined by using a nested RaggedTensor for the values tensor. Each nested RaggedTensor adds a single ragged dimension.

>>> inner_rt = RaggedTensor.from_row_splits(  # =rt1 from above
...     values=[3, 1, 4, 1, 5, 9, 2, 6], row_splits=[0, 4, 4, 7, 8, 8])
>>> outer_rt = RaggedTensor.from_row_splits(
...     values=inner_rt, row_splits=[0, 3, 3, 5])
>>> print(outer_rt.to_list())
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]
>>> print(outer_rt.ragged_rank)
2

The factory function RaggedTensor.from_nested_row_splits may be used to construct a RaggedTensor with multiple ragged dimensions directly, by providing a list of row_splits tensors:

>>> RaggedTensor.from_nested_row_splits(
...     flat_values=[3, 1, 4, 1, 5, 9, 2, 6],
...     nested_row_splits=([0, 3, 3, 5], [0, 4, 4, 7, 8, 8])).to_list()
[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]

### Uniform Inner Dimensions

RaggedTensors with uniform inner dimensions can be defined by using a multidimensional Tensor for values.

>>> rt = RaggedTensor.from_row_splits(values=tf.ones([5, 3], tf.int32),
...                                   row_splits=[0, 2, 5])
>>> print(rt.to_list())
[[[1, 1, 1], [1, 1, 1]],
 [[1, 1, 1], [1, 1, 1], [1, 1, 1]]]
>>> print(rt.shape)
(2, None, 3)

### Uniform Outer Dimensions

RaggedTensors with uniform outer dimensions can be defined by using one or more RaggedTensors with a uniform_row_length row-partitioning tensor. For example, a RaggedTensor with shape [2, 2, None] can be constructed with this method from a RaggedTensor values with shape [4, None]:

>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
>>> print(values.shape)
(4, None)
>>> rt6 = tf.RaggedTensor.from_uniform_row_length(values, 2)
>>> print(rt6)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
>>> print(rt6.shape)
(2, 2, None)

Note that rt6 only contains one ragged dimension (the innermost dimension). In contrast, if from_row_splits is used to construct a similar RaggedTensor, then that RaggedTensor will have two ragged dimensions:

>>> rt7 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
>>> print(rt7.shape)
(2, None, None)

Uniform and ragged outer dimensions may be interleaved, meaning that a tensor with any combination of ragged and uniform dimensions may be created. For example, a RaggedTensor t4 with shape [3, None, 4, 8, None, 2] could be constructed as follows:

```python
t0 = tf.zeros([1000, 2])                          # Shape:         [1000, 2]
t1 = RaggedTensor.from_row_lengths(t0, [...])     #           [160, None, 2]
t2 = RaggedTensor.from_uniform_row_length(t1, 8)  #         [20, 8, None, 2]
t3 = RaggedTensor.from_uniform_row_length(t2, 4)  #       [5, 4, 8, None, 2]
t4 = RaggedTensor.from_row_lengths(t3, [...])     # [3, None, 4, 8, None, 2]
```

Creates a RaggedTensor with a specified partitioning for values.

This constructor is private – please use one of the following ops to build `RaggedTensor`s:

  • tf.RaggedTensor.from_row_lengths
  • tf.RaggedTensor.from_value_rowids
  • tf.RaggedTensor.from_row_splits
  • tf.RaggedTensor.from_row_starts
  • tf.RaggedTensor.from_row_limits
  • tf.RaggedTensor.from_nested_row_splits
  • tf.RaggedTensor.from_nested_row_lengths
  • tf.RaggedTensor.from_nested_value_rowids
Parameters:
  • values – A potentially ragged tensor of any dtype and shape [nvals, …].
  • row_splits – A 1-D integer tensor with shape [nrows+1].
  • cached_row_lengths – A 1-D integer tensor with shape [nrows].
  • cached_value_rowids – A 1-D integer tensor with shape [nvals].
  • cached_nrows – An integer scalar tensor.
  • internal – True if the constructor is being called by one of the factory methods. If false, an exception will be raised.
  • uniform_row_length – A scalar tensor.
Raises:
  • TypeError – If a row partitioning tensor has an inappropriate dtype.
  • TypeError – If exactly one row partitioning argument was not specified.
  • ValueError – If a row partitioning tensor has an inappropriate shape.
  • ValueError – If multiple partitioning arguments are specified.
  • ValueError – If nrows is specified but value_rowids is not None.
bounding_shape(axis=None, name=None, out_type=None)

Returns the tight bounding box shape for this RaggedTensor.

Parameters:
  • axis – An integer scalar or vector indicating which axes to return the bounding box for. If not specified, then the full bounding box is returned.
  • name – A name prefix for the returned tensor (optional).
  • out_type – dtype for the returned tensor. Defaults to self.row_splits.dtype.
Returns:

An integer Tensor (dtype=self.row_splits.dtype). If axis is not specified, then output is a vector with output.shape=[self.shape.ndims]. If axis is a scalar, then the output is a scalar. If axis is a vector, then output is a vector, where output[i] is the bounding size for dimension axis[i].

#### Example:

>>> rt = tf.ragged.constant([[1, 2, 3, 4], [5], [], [6, 7, 8, 9], [10]])
>>> rt.bounding_shape().numpy()
array([5, 4])
consumers()
dtype

The DType of values in this tensor.

flat_values

The innermost values tensor for this ragged tensor.

Concretely, if rt.values is a Tensor, then rt.flat_values is rt.values; otherwise, rt.flat_values is rt.values.flat_values.

Conceptually, flat_values is the tensor formed by flattening the outermost dimension and all of the ragged dimensions into a single dimension.

rt.flat_values.shape = [nvals] + rt.shape[rt.ragged_rank + 1:] (where nvals is the number of items in the flattened dimensions).

Returns: A Tensor.

#### Example:

>>> rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]])
>>> print(rt.flat_values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
classmethod from_nested_row_lengths(flat_values, nested_row_lengths, name=None, validate=True)

Creates a RaggedTensor from a nested list of row_lengths tensors.

Equivalent to:

```python
result = flat_values
for row_lengths in reversed(nested_row_lengths):
  result = from_row_lengths(result, row_lengths)
```

Parameters:
  • flat_values – A potentially ragged tensor.
  • nested_row_lengths – A list of 1-D integer tensors. The i-th tensor is used as the row_lengths for the i-th ragged dimension.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor (or flat_values if nested_row_lengths is empty).

classmethod from_nested_row_splits(flat_values, nested_row_splits, name=None, validate=True)

Creates a RaggedTensor from a nested list of row_splits tensors.

Equivalent to:

```python
result = flat_values
for row_splits in reversed(nested_row_splits):
  result = from_row_splits(result, row_splits)
```

Parameters:
  • flat_values – A potentially ragged tensor.
  • nested_row_splits – A list of 1-D integer tensors. The i-th tensor is used as the row_splits for the i-th ragged dimension.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor (or flat_values if nested_row_splits is empty).

classmethod from_nested_value_rowids(flat_values, nested_value_rowids, nested_nrows=None, name=None, validate=True)

Creates a RaggedTensor from a nested list of value_rowids tensors.

Equivalent to:

```python
result = flat_values
for rowids, nrows in reversed(list(zip(nested_value_rowids, nested_nrows))):
  result = from_value_rowids(result, rowids, nrows)
```

Parameters:
  • flat_values – A potentially ragged tensor.
  • nested_value_rowids – A list of 1-D integer tensors. The i-th tensor is used as the value_rowids for the i-th ragged dimension.
  • nested_nrows – A list of integer scalars. The i-th scalar is used as the nrows for the i-th ragged dimension.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor (or flat_values if nested_value_rowids is empty).

Raises:

ValueError – If len(nested_value_rowids) != len(nested_nrows).

classmethod from_row_lengths(values, row_lengths, name=None, validate=True)

Creates a RaggedTensor with rows partitioned by row_lengths.

The returned RaggedTensor corresponds with the python list defined by:

```python
result = [[values.pop(0) for i in range(length)]
          for length in row_lengths]
```

Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • row_lengths – A 1-D integer tensor with shape [nrows]. Must be nonnegative. sum(row_lengths) must be nvals.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor. result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

#### Example:

>>> print(tf.RaggedTensor.from_row_lengths(
...     values=[3, 1, 4, 1, 5, 9, 2, 6],
...     row_lengths=[4, 0, 3, 1, 0]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
classmethod from_row_limits(values, row_limits, name=None, validate=True)

Creates a RaggedTensor with rows partitioned by row_limits.

Equivalent to: from_row_splits(values, concat([0, row_limits])).

Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • row_limits – A 1-D integer tensor with shape [nrows]. Must be sorted in ascending order. If nrows>0, then row_limits[-1] must be nvals.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor. result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

#### Example:

>>> print(tf.RaggedTensor.from_row_limits(
...     values=[3, 1, 4, 1, 5, 9, 2, 6],
...     row_limits=[4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
classmethod from_row_splits(values, row_splits, name=None, validate=True)

Creates a RaggedTensor with rows partitioned by row_splits.

The returned RaggedTensor corresponds with the python list defined by:

```python
result = [values[row_splits[i]:row_splits[i + 1]]
          for i in range(len(row_splits) - 1)]
```

Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • row_splits – A 1-D integer tensor with shape [nrows+1]. Must not be empty, and must be sorted in ascending order. row_splits[0] must be zero and row_splits[-1] must be nvals.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor. result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

Raises:

ValueError – If row_splits is an empty list.

#### Example:

>>> print(tf.RaggedTensor.from_row_splits(
...     values=[3, 1, 4, 1, 5, 9, 2, 6],
...     row_splits=[0, 4, 4, 7, 8, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
classmethod from_row_starts(values, row_starts, name=None, validate=True)

Creates a RaggedTensor with rows partitioned by row_starts.

Equivalent to: from_row_splits(values, concat([row_starts, nvals])).

Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • row_starts – A 1-D integer tensor with shape [nrows]. Must be nonnegative and sorted in ascending order. If nrows>0, then row_starts[0] must be zero.
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor. result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

#### Example:

>>> print(tf.RaggedTensor.from_row_starts(
...     values=[3, 1, 4, 1, 5, 9, 2, 6],
...     row_starts=[0, 4, 4, 7, 8]))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
classmethod from_sparse(st_input, name=None, row_splits_dtype=tf.int64)

Converts a 2D tf.SparseTensor to a RaggedTensor.

Each row of the output RaggedTensor will contain the explicit values from the same row in st_input. st_input must be ragged-right; if it is not, an error will be raised.

Example:

>>> st = tf.SparseTensor(indices=[[0, 0], [0, 1], [0, 2], [1, 0], [3, 0]],
...                      values=[1, 2, 3, 4, 5],
...                      dense_shape=[4, 3])
>>> tf.RaggedTensor.from_sparse(st).to_list()
[[1, 2, 3], [4], [], [5]]

Currently, only two-dimensional SparseTensors are supported.

Parameters:
  • st_input – The sparse tensor to convert. Must have rank 2.
  • name – A name prefix for the returned tensors (optional).
  • row_splits_dtype – dtype for the returned RaggedTensor’s row_splits tensor. One of tf.int32 or tf.int64.
Returns:

A RaggedTensor with the same values as st_input. output.ragged_rank = rank(st_input) - 1. output.shape = [st_input.dense_shape[0], None].

Raises:

ValueError – If the number of dimensions in st_input is not known statically, or is not two.

classmethod from_tensor(tensor, lengths=None, padding=None, ragged_rank=1, name=None, row_splits_dtype=tf.int64)

Converts a tf.Tensor into a RaggedTensor.

The set of absent/default values may be specified using a vector of lengths or a padding value (but not both). If lengths is specified, then the output tensor will satisfy output[row] = tensor[row][:lengths[row]]. If lengths is a list of lists or tuple of lists, those lists will be used as nested row lengths. If padding is specified, then any row suffix consisting entirely of padding will be excluded from the returned RaggedTensor. If neither lengths nor padding is specified, then the returned RaggedTensor will have no absent/default values.

Examples:

>>> dt = tf.constant([[5, 7, 0], [0, 3, 0], [6, 0, 0]])
>>> tf.RaggedTensor.from_tensor(dt)
<tf.RaggedTensor [[5, 7, 0], [0, 3, 0], [6, 0, 0]]>
>>> tf.RaggedTensor.from_tensor(dt, lengths=[1, 0, 3])
<tf.RaggedTensor [[5], [], [6, 0, 0]]>
>>> tf.RaggedTensor.from_tensor(dt, padding=0)
<tf.RaggedTensor [[5, 7], [0, 3], [6]]>
>>> dt = tf.constant([[[5, 0], [7, 0], [0, 0]],
...                   [[0, 0], [3, 0], [0, 0]],
...                   [[6, 0], [0, 0], [0, 0]]])
>>> tf.RaggedTensor.from_tensor(dt, lengths=([2, 0, 3], [1, 1, 2, 0, 1]))
<tf.RaggedTensor [[[5], [7]], [], [[6, 0], [], [0]]]>
Parameters:
  • tensor – The Tensor to convert. Must have rank ragged_rank + 1 or higher.
  • lengths – An optional set of row lengths, specified using a 1-D integer Tensor whose length is equal to tensor.shape[0] (the number of rows in tensor). If specified, then output[row] will contain tensor[row][:lengths[row]]. Negative lengths are treated as zero. You may optionally pass a list or tuple of lengths to this argument, which will be used as nested row lengths to construct a ragged tensor with multiple ragged dimensions.
  • padding – An optional padding value. If specified, then any row suffix consisting entirely of padding will be excluded from the returned RaggedTensor. padding is a Tensor with the same dtype as tensor and with shape=tensor.shape[ragged_rank + 1:].
  • ragged_rank – Integer specifying the ragged rank for the returned RaggedTensor. Must be greater than zero.
  • name – A name prefix for the returned tensors (optional).
  • row_splits_dtype – dtype for the returned RaggedTensor’s row_splits tensor. One of tf.int32 or tf.int64.
Returns:

A RaggedTensor with the specified ragged_rank. The shape of the returned ragged tensor is compatible with the shape of tensor.

Raises:

ValueError – If both lengths and padding are specified.

classmethod from_uniform_row_length(values, uniform_row_length, nrows=None, validate=True, name=None)

Creates a RaggedTensor with rows partitioned by uniform_row_length.

This method can be used to create RaggedTensors with multiple uniform outer dimensions. For example, a RaggedTensor with shape [2, 2, None] can be constructed with this method from a RaggedTensor values with shape [4, None]:

>>> values = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
>>> print(values.shape)
(4, None)
>>> rt1 = tf.RaggedTensor.from_uniform_row_length(values, 2)
>>> print(rt1)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
>>> print(rt1.shape)
(2, 2, None)

Note that rt1 only contains one ragged dimension (the innermost dimension). In contrast, if from_row_splits is used to construct a similar RaggedTensor, then that RaggedTensor will have two ragged dimensions:

>>> rt2 = tf.RaggedTensor.from_row_splits(values, [0, 2, 4])
>>> print(rt2.shape)
(2, None, None)
Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • uniform_row_length – A scalar integer tensor. Must be nonnegative. The size of the outer axis of values must be evenly divisible by uniform_row_length.
  • nrows – The number of rows in the constructed RaggedTensor. If not specified, then it defaults to nvals/uniform_row_length (or 0 if uniform_row_length==0). nrows only needs to be specified if uniform_row_length might be zero. uniform_row_length*nrows must be nvals.
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
  • name – A name prefix for the RaggedTensor (optional).
Returns:

A RaggedTensor that corresponds with the python list defined by:

```python
result = [[values.pop(0) for i in range(uniform_row_length)]
          for _ in range(nrows)]
```

result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

classmethod from_value_rowids(values, value_rowids, nrows=None, name=None, validate=True)

Creates a RaggedTensor with rows partitioned by value_rowids.

The returned RaggedTensor corresponds with the python list defined by:

```python result = [[values[i] for i in range(len(values)) if value_rowids[i] == row]

for row in range(nrows)]

```

Parameters:
  • values – A potentially ragged tensor with shape [nvals, …].
  • value_rowids – A 1-D integer tensor with shape [nvals], which corresponds one-to-one with values, and specifies each value’s row index. Must be nonnegative, and must be sorted in ascending order.
  • nrows – An integer scalar specifying the number of rows. This should be specified if the RaggedTensor may contain empty trailing rows. Must be greater than value_rowids[-1] (or zero if value_rowids is empty). Defaults to value_rowids[-1] + 1 (or zero if value_rowids is empty).
  • name – A name prefix for the RaggedTensor (optional).
  • validate – If true, then use assertions to check that the arguments form a valid RaggedTensor. Note: these assertions incur a runtime cost, since they must be checked for each tensor value.
Returns:

A RaggedTensor. result.rank = values.rank + 1. result.ragged_rank = values.ragged_rank + 1.

Raises:

ValueError – If nrows is incompatible with value_rowids.

#### Example:

>>> print(tf.RaggedTensor.from_value_rowids(
...     values=[3, 1, 4, 1, 5, 9, 2, 6],
...     value_rowids=[0, 0, 0, 0, 2, 2, 2, 3],
...     nrows=5))
<tf.RaggedTensor [[3, 1, 4, 1], [], [5, 9, 2], [6], []]>
merge_dims(outer_axis, inner_axis)

Merges outer_axis…inner_axis into a single dimension.

Returns a copy of this RaggedTensor with the specified range of dimensions flattened into a single dimension, with elements in row-major order.

#### Examples:

>>> rt = tf.ragged.constant([[[1, 2], [3]], [[4, 5, 6]]])
>>> print(rt.merge_dims(0, 1))
<tf.RaggedTensor [[1, 2], [3], [4, 5, 6]]>
>>> print(rt.merge_dims(1, 2))
<tf.RaggedTensor [[1, 2, 3], [4, 5, 6]]>
>>> print(rt.merge_dims(0, 2))
tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)

To mimic the behavior of np.flatten (which flattens all dimensions), use rt.merge_dims(0, -1). To mimic the behavior of tf.layers.Flatten (which flattens all dimensions except the outermost batch dimension), use rt.merge_dims(1, -1).
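For instance, continuing the example above (the outputs match the merge_dims(0, 2) and merge_dims(1, 2) results shown):

>>> print(rt.merge_dims(0, -1))   # flatten everything, like np.flatten
tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32)
>>> print(rt.merge_dims(1, -1))   # keep only the outermost batch dimension
<tf.RaggedTensor [[1, 2, 3], [4, 5, 6]]>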

Parameters:
  • outer_axis – int: The first dimension in the range of dimensions to merge. May be negative if self.shape.rank is statically known.
  • inner_axis – int: The last dimension in the range of dimensions to merge. May be negative if self.shape.rank is statically known.
Returns:

A copy of this tensor, with the specified dimensions merged into a single dimension. The shape of the returned tensor will be self.shape[:outer_axis] + [N] + self.shape[inner_axis + 1:], where N is the total number of slices in the merged dimensions.

nested_row_lengths(name=None)

Returns a tuple containing the row_lengths for all ragged dimensions.

rt.nested_row_lengths() is a tuple containing the row_lengths tensors for all ragged dimensions in rt, ordered from outermost to innermost.

Parameters: name – A name prefix for the returned tensors (optional).
Returns: A tuple of 1-D integer Tensors. The length of the tuple is equal to self.ragged_rank.
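#### Example:

A hedged sketch, reusing the nested list from the flat_values example above (the printed lengths follow from its row partitions):

>>> rt = tf.ragged.constant([[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]])
>>> for i, lengths in enumerate(rt.nested_row_lengths()):
...   print('Lengths for dimension %d: %s' % (i+1, lengths.numpy()))
Lengths for dimension 1: [3 0 2]
Lengths for dimension 2: [4 0 3 1 0]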
nested_row_splits

A tuple containing the row_splits for all ragged dimensions.

rt.nested_row_splits is a tuple containing the row_splits tensors for all ragged dimensions in rt, ordered from outermost to innermost. In particular, rt.nested_row_splits = (rt.row_splits,) + value_splits where:

  • value_splits = () if rt.values is a Tensor.
  • value_splits = rt.values.nested_row_splits otherwise.
Returns: A tuple of 1-D integer `Tensor`s.

#### Example:

>>> rt = tf.ragged.constant(
...     [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]])
>>> for i, splits in enumerate(rt.nested_row_splits):
...   print('Splits for dimension %d: %s' % (i+1, splits.numpy()))
Splits for dimension 1: [0 3]
Splits for dimension 2: [0 3 3 5]
Splits for dimension 3: [0 4 4 7 8 8]
nested_value_rowids(name=None)

Returns a tuple containing the value_rowids for all ragged dimensions.

rt.nested_value_rowids is a tuple containing the value_rowids tensors for all ragged dimensions in rt, ordered from outermost to innermost. In particular, rt.nested_value_rowids = (rt.value_rowids(),) + value_ids where:

  • value_ids = () if rt.values is a Tensor.
  • value_ids = rt.values.nested_value_rowids otherwise.
Parameters: name – A name prefix for the returned tensors (optional).
Returns: A tuple of 1-D integer `Tensor`s.

#### Example:

>>> rt = tf.ragged.constant(
...     [[[[3, 1, 4, 1], [], [5, 9, 2]], [], [[6], []]]])
>>> for i, ids in enumerate(rt.nested_value_rowids()):
...   print('row ids for dimension %d: %s' % (i+1, ids.numpy()))
row ids for dimension 1: [0 0 0]
row ids for dimension 2: [0 0 0 2 2]
row ids for dimension 3: [0 0 0 0 2 2 2 3]
nrows(out_type=None, name=None)

Returns the number of rows in this ragged tensor.

I.e., the size of the outermost dimension of the tensor.

Parameters:
  • out_type – dtype for the returned tensor. Defaults to self.row_splits.dtype.
  • name – A name prefix for the returned tensor (optional).
Returns:

A scalar Tensor with dtype out_type.

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.nrows())  # rt has 5 rows.
tf.Tensor(5, shape=(), dtype=int64)
numpy()

Returns a numpy array with the values for this RaggedTensor.

Requires that this RaggedTensor was constructed in eager execution mode.

Ragged dimensions are encoded using numpy arrays with dtype=object and rank=1, where each element is a single row.

#### Examples

In the following example, the value returned by RaggedTensor.numpy() contains three numpy array objects: one for each row (with rank=1 and dtype=int64), and one to combine them (with rank=1 and dtype=object):

>>> tf.ragged.constant([[1, 2, 3], [4, 5]], dtype=tf.int64).numpy()
array([array([1, 2, 3]), array([4, 5])], dtype=object)

Uniform dimensions are encoded using multidimensional numpy arrays. In the following example, the value returned by RaggedTensor.numpy() contains a single numpy array object, with rank=2 and dtype=int64:

>>> tf.ragged.constant([[1, 2, 3], [4, 5, 6]], dtype=tf.int64).numpy()
array([[1, 2, 3], [4, 5, 6]])
Returns: A numpy array.
ragged_rank

The number of ragged dimensions in this ragged tensor.

Returns: A Python int indicating the number of ragged dimensions in this ragged tensor. The outermost dimension is not considered ragged.
row_lengths(axis=1, name=None)

Returns the lengths of the rows in this ragged tensor.

rt.row_lengths()[i] indicates the number of values in the i-th row of rt.

Parameters:
  • axis – An integer constant indicating the axis whose row lengths should be returned.
  • name – A name prefix for the returned tensor (optional).
Returns:

A potentially ragged integer Tensor with shape self.shape[:axis].

Raises:

ValueError – If axis is out of bounds.

#### Example:

>>> rt = tf.ragged.constant(
...     [[[3, 1, 4], [1]], [], [[5, 9], [2]], [[6]], []])
>>> print(rt.row_lengths())  # lengths of rows in rt
tf.Tensor([2 0 2 1 0], shape=(5,), dtype=int64)
>>> print(rt.row_lengths(axis=2))  # lengths of axis=2 rows.
<tf.RaggedTensor [[3, 1], [], [2, 1], [1], []]>
row_limits(name=None)

Returns the limit indices for rows in this ragged tensor.

These indices specify where the values for each row end in self.values. rt.row_limits() is equal to rt.row_splits[1:].

Parameters: name – A name prefix for the returned tensor (optional).
Returns: A 1-D integer Tensor with shape [nrows]. The returned tensor is nonnegative, and is sorted in ascending order.

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.row_limits())  # indices of row limits in rt.values
tf.Tensor([4 4 7 8 8], shape=(5,), dtype=int64)
row_splits

The row-split indices for this ragged tensor’s values.

rt.row_splits specifies where the values for each row begin and end in rt.values. In particular, the values for row rt[i] are stored in the slice rt.values[rt.row_splits[i]:rt.row_splits[i+1]].

Returns: A 1-D integer Tensor with shape [self.nrows+1]. The returned tensor is non-empty, and is sorted in ascending order. self.row_splits[0] is zero, and self.row_splits[-1] is equal to self.values.shape[0].

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.row_splits)  # indices of row splits in rt.values
tf.Tensor([0 4 4 7 8 8], shape=(6,), dtype=int64)
row_starts(name=None)

Returns the start indices for rows in this ragged tensor.

These indices specify where the values for each row begin in self.values. rt.row_starts() is equal to rt.row_splits[:-1].

Parameters: name – A name prefix for the returned tensor (optional).
Returns: A 1-D integer Tensor with shape [nrows]. The returned tensor is nonnegative, and is sorted in ascending order.

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.row_starts())  # indices of row starts in rt.values
tf.Tensor([0 4 4 7 8], shape=(5,), dtype=int64)
shape

The statically known shape of this ragged tensor.

Returns: A TensorShape containing the statically known shape of this ragged tensor. Ragged dimensions have a size of None.

Examples:

>>> tf.ragged.constant([[0], [1, 2]]).shape
TensorShape([2, None])
>>> tf.ragged.constant([[[0, 1]], [[1, 2], [3, 4]]], ragged_rank=1).shape
TensorShape([2, None, 2])
to_list()

Returns a nested Python list with the values for this RaggedTensor.

Requires that rt was constructed in eager execution mode.

Returns: A nested Python list.
to_sparse(name=None)

Converts this RaggedTensor into a tf.SparseTensor.

Example:

>>> rt = tf.ragged.constant([[1, 2, 3], [4], [], [5, 6]])
>>> print(rt.to_sparse())
SparseTensor(indices=tf.Tensor(
                 [[0 0] [0 1] [0 2] [1 0] [3 0] [3 1]],
                 shape=(6, 2), dtype=int64),
             values=tf.Tensor([1 2 3 4 5 6], shape=(6,), dtype=int32),
             dense_shape=tf.Tensor([4 3], shape=(2,), dtype=int64))
Parameters: name – A name prefix for the returned tensors (optional).
Returns: A SparseTensor with the same values as self.
to_tensor(default_value=None, name=None, shape=None)

Converts this RaggedTensor into a tf.Tensor.

If shape is specified, then the result is padded and/or truncated to the specified shape.

Examples:

>>> rt = tf.ragged.constant([[9, 8, 7], [], [6, 5], [4]])
>>> print(rt.to_tensor())
tf.Tensor(
    [[9 8 7] [0 0 0] [6 5 0] [4 0 0]], shape=(4, 3), dtype=int32)
>>> print(rt.to_tensor(shape=[5, 2]))
tf.Tensor(
    [[9 8] [0 0] [6 5] [4 0] [0 0]], shape=(5, 2), dtype=int32)
Parameters:
  • default_value – Value to set for indices not specified in self. Defaults to zero. default_value must be broadcastable to self.shape[self.ragged_rank + 1:].
  • name – A name prefix for the returned tensors (optional).
  • shape – The shape of the resulting dense tensor. In particular, result.shape[i] is shape[i] (if shape[i] is not None), or self.bounding_shape(i) (otherwise). shape.rank must be None or equal to self.rank.
Returns:

A Tensor with shape ragged.bounding_shape(self) and the values specified by the non-empty values in self. Empty values are assigned default_value.

uniform_row_length

The length of each row in this ragged tensor, or None if rows are ragged.

>>> rt1 = tf.ragged.constant([[1, 2, 3], [4], [5, 6], [7, 8, 9, 10]])
>>> print(rt1.uniform_row_length)  # rows are ragged.
None
>>> rt2 = tf.RaggedTensor.from_uniform_row_length(
...     values=rt1, uniform_row_length=2)
>>> print(rt2)
<tf.RaggedTensor [[[1, 2, 3], [4]], [[5, 6], [7, 8, 9, 10]]]>
>>> print(rt2.uniform_row_length)  # rows are not ragged (all have size 2).
tf.Tensor(2, shape=(), dtype=int64)

A RaggedTensor’s rows are only considered to be uniform (i.e. non-ragged) if it can be determined statically (at graph construction time) that the rows all have the same length.

Returns: A scalar integer Tensor, specifying the length of every row in this ragged tensor (for ragged tensors whose rows are uniform); or None (for ragged tensors whose rows are ragged).
value_rowids(name=None)

Returns the row indices for the values in this ragged tensor.

rt.value_rowids() corresponds one-to-one with the outermost dimension of rt.values, and specifies the row containing each value. In particular, the row rt[row] consists of the values rt.values[j] where rt.value_rowids()[j] == row.

Parameters: name – A name prefix for the returned tensor (optional).
Returns: A 1-D integer Tensor with shape self.values.shape[:1]. The returned tensor is nonnegative, and is sorted in ascending order.

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
>>> print(rt.value_rowids())  # corresponds 1:1 with rt.values
tf.Tensor([0 0 0 0 2 2 2 3], shape=(8,), dtype=int64)
values

The concatenated rows for this ragged tensor.

rt.values is a potentially ragged tensor formed by flattening the two outermost dimensions of rt into a single dimension.

rt.values.shape = [nvals] + rt.shape[2:] (where nvals is the number of items in the outer two dimensions of rt).

rt.values.ragged_rank = rt.ragged_rank - 1

Returns: A potentially ragged tensor.

#### Example:

>>> rt = tf.ragged.constant([[3, 1, 4, 1], [], [5, 9, 2], [6], []])
>>> print(rt.values)
tf.Tensor([3 1 4 1 5 9 2 6], shape=(8,), dtype=int32)
with_flat_values(new_values)

Returns a copy of self with flat_values replaced by new_values.

Preserves cached row-partitioning tensors such as self.cached_nrows and self.cached_value_rowids if they have values.

Parameters: new_values – Potentially ragged tensor that should replace self.flat_values. Must have rank > 0, and must have the same number of rows as self.flat_values.
Returns:

A RaggedTensor. result.rank = self.ragged_rank + new_values.rank. result.ragged_rank = self.ragged_rank + new_values.ragged_rank.

with_row_splits_dtype(dtype)

Returns a copy of this RaggedTensor with the given row_splits dtype.

For RaggedTensors with multiple ragged dimensions, the row_splits for all nested RaggedTensor objects are cast to the given dtype.

Parameters: dtype – The dtype for row_splits. One of tf.int32 or tf.int64.
Returns: A copy of this RaggedTensor, with the row_splits cast to the given type.
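A brief sketch (tf.ragged.constant defaults row_splits to tf.int64):

>>> rt = tf.ragged.constant([[1, 2], [3]])
>>> rt.row_splits.dtype
tf.int64
>>> rt.with_row_splits_dtype(tf.int32).row_splits.dtype
tf.int32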
with_values(new_values)

Returns a copy of self with values replaced by new_values.

Preserves cached row-partitioning tensors such as self.cached_nrows and self.cached_value_rowids if they have values.

Parameters: new_values – Potentially ragged tensor to use as the values for the returned RaggedTensor. Must have rank > 0, and must have the same number of rows as self.values.
Returns: A RaggedTensor. result.rank = 1 + new_values.rank. result.ragged_rank = 1 + new_values.ragged_rank.
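For example (a hedged sketch; the replacement values are arbitrary):

>>> rt = tf.ragged.constant([[1, 2], [3]])
>>> print(rt.with_values(tf.constant([10, 20, 30])))
<tf.RaggedTensor [[10, 20], [30]]>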
class tensorflow.RaggedTensorSpec(shape=None, dtype=tf.float32, ragged_rank=None, row_splits_dtype=tf.int64)

Bases: tensorflow.python.framework.type_spec.BatchableTypeSpec

Type specification for a tf.RaggedTensor.

Constructs a type specification for a tf.RaggedTensor.

Parameters:
  • shape – The shape of the RaggedTensor, or None to allow any shape. If a shape is specified, then all ragged dimensions must have size None.
  • dtype – tf.DType of values in the RaggedTensor.
  • ragged_rank – Python integer, the ragged rank of the RaggedTensor to be described. Defaults to shape.ndims - 1.
  • row_splits_dtype – dtype for the RaggedTensor’s row_splits tensor. One of tf.int32 or tf.int64.
classmethod from_value(value)
value_type

The Python type for values that are compatible with this TypeSpec.

class tensorflow.RegisterGradient(op_type)

Bases: object

A decorator for registering the gradient function for an op type.

This decorator is only used when defining a new op type. For an op with m inputs and n outputs, the gradient function is a function that takes the original Operation and n Tensor objects (representing the gradients with respect to each output of the op), and returns m Tensor objects (representing the partial gradients with respect to each input of the op).

For example, assuming that operations of type “Sub” take two inputs x and y, and return a single output x - y, the following gradient function would be registered:

```python
@tf.RegisterGradient("Sub")
def _sub_grad(unused_op, grad):
  return grad, tf.negative(grad)
```

The decorator argument op_type is the string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.

Creates a new decorator with op_type as the Operation type.

Parameters: op_type – The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.
Raises: TypeError – If op_type is not a string.
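A common companion in TF1-style graph code is tf.Graph.gradient_override_map, which substitutes a registered gradient for an existing op type. A hedged sketch (the registered name "CustomClipGrad" and the clip bounds are illustrative, not part of the API):

```python
import tensorflow as tf

@tf.RegisterGradient("CustomClipGrad")
def _clip_grad(unused_op, grad):
  # "Identity" has one input, so return a single (clipped) gradient tensor.
  return tf.clip_by_value(grad, -0.1, 0.1)

g = tf.Graph()
with g.as_default():
  x = tf.constant(3.0)
  with g.gradient_override_map({"Identity": "CustomClipGrad"}):
    y = tf.identity(x)            # gradients through y use _clip_grad
  dy_dx = tf.gradients(y, x)[0]   # a tensor clipped to [-0.1, 0.1]
```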
class tensorflow.SparseTensor(indices, values, dense_shape)

Bases: tensorflow.python.framework.tensor_like._TensorLike, tensorflow.python.framework.composite_tensor.CompositeTensor

Represents a sparse tensor.

TensorFlow represents a sparse tensor as three separate dense tensors: indices, values, and dense_shape. In Python, the three tensors are collected into a SparseTensor class for ease of use. If you have separate indices, values, and dense_shape tensors, wrap them in a SparseTensor object before passing to the ops below.

Concretely, the sparse tensor SparseTensor(indices, values, dense_shape) comprises the following components, where N and ndims are the number of values and number of dimensions in the SparseTensor, respectively:

  • indices: A 2-D int64 tensor of shape [N, ndims], which specifies the indices of the elements in the sparse tensor that contain nonzero values (elements are zero-indexed). For example, indices=[[1,3], [2,4]] specifies that the elements with indexes of [1,3] and [2,4] have nonzero values.
  • values: A 1-D tensor of any type and shape [N], which supplies the values for each element in indices. For example, given indices=[[1,3], [2,4]], the parameter values=[18, 3.6] specifies that element [1,3] of the sparse tensor has a value of 18, and element [2,4] of the tensor has a value of 3.6.
  • dense_shape: A 1-D int64 tensor of shape [ndims], which specifies the dense_shape of the sparse tensor. Takes a list indicating the number of elements in each dimension. For example, dense_shape=[3,6] specifies a two-dimensional 3x6 tensor, dense_shape=[2,3,4] specifies a three-dimensional 2x3x4 tensor, and dense_shape=[9] specifies a one-dimensional tensor with 9 elements.

The corresponding dense tensor satisfies:

```python
dense.shape = dense_shape
dense[tuple(indices[i])] = values[i]
```

By convention, indices should be sorted in row-major order (or equivalently lexicographic order on the tuples indices[i]). This is not enforced when SparseTensor objects are constructed, but most ops assume correct ordering. If the ordering of sparse tensor st is wrong, a fixed version can be obtained by calling tf.sparse.reorder(st).
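For instance, a hedged sketch of repairing an out-of-order sparse tensor (the values here are arbitrary):

```python
st = tf.SparseTensor(indices=[[1, 2], [0, 0]], values=[2, 1], dense_shape=[3, 4])
st = tf.sparse.reorder(st)      # sort indices into row-major order
print(tf.sparse.to_dense(st))   # dense [[1 0 0 0] [0 0 2 0] [0 0 0 0]]
```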

Example: The sparse tensor

```python
SparseTensor(indices=[[0, 0], [1, 2]], values=[1, 2], dense_shape=[3, 4])
```

represents the dense tensor

```python
[[1, 0, 0, 0]
 [0, 0, 2, 0]
 [0, 0, 0, 0]]
```

Creates a SparseTensor.

Parameters:
  • indices – A 2-D int64 tensor of shape [N, ndims].
  • values – A 1-D tensor of any type and shape [N].
  • dense_shape – A 1-D int64 tensor of shape [ndims].
Raises:

ValueError – When building an eager SparseTensor if dense_shape is unknown or contains unknown elements (None or -1).

consumers()
dense_shape

A 1-D Tensor of int64 representing the shape of the dense tensor.

dtype

The DType of elements in this tensor.

eval(feed_dict=None, session=None)

Evaluates this sparse tensor in a Session.

Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.

N.B. Before invoking SparseTensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.

Parameters:
  • feed_dict – A dictionary that maps Tensor objects to feed values. See tf.Session.run for a description of the valid feed values.
  • session – (Optional.) The Session to be used to evaluate this sparse tensor. If none, the default session will be used.
Returns:

A SparseTensorValue object.

classmethod from_value(sparse_tensor_value)
get_shape()

Get the TensorShape representing the shape of the dense tensor.

Returns: A TensorShape object.
graph

The Graph that contains the index, value, and dense_shape tensors.

indices

The indices of non-zero values in the represented dense tensor.

Returns: A 2-D Tensor of int64 with dense_shape [N, ndims], where N is the number of non-zero values in the tensor, and ndims is the rank.
op

The Operation that produces values as an output.

shape

Get the TensorShape representing the shape of the dense tensor.

Returns: A TensorShape object.
values

The non-zero values in the represented dense tensor.

Returns: A 1-D Tensor of any data type.
class tensorflow.SparseTensorSpec(shape=None, dtype=tf.float32)

Bases: tensorflow.python.framework.type_spec.BatchableTypeSpec

Type specification for a tf.SparseTensor.

Constructs a type specification for a tf.SparseTensor.

Parameters:
  • shape – The dense shape of the SparseTensor, or None to allow any dense shape.
  • dtype – tf.DType of values in the SparseTensor.
dtype

The tf.dtypes.DType specified by this type for the SparseTensor.

classmethod from_value(value)
shape

The tf.TensorShape specified by this type for the SparseTensor.

value_type
class tensorflow.Tensor(op, value_index, dtype)

Bases: tensorflow.python.framework.tensor_like._TensorLike

A tensor represents a rectangular array of data.

When writing a TensorFlow program, the main object you manipulate and pass around is the tf.Tensor. A tf.Tensor object represents a rectangular array of arbitrary dimension, filled with data of a specific data type.

A tf.Tensor has the following properties:

  • a data type (float32, int32, or string, for example)
  • a shape

Each element in the Tensor has the same data type, and the data type is always known.

In eager execution, which is the default mode in TensorFlow, results are calculated immediately.

>>> # Compute some values using a Tensor
>>> c = tf.constant([[1.0, 2.0], [3.0, 4.0]])
>>> d = tf.constant([[1.0, 1.0], [0.0, 1.0]])
>>> e = tf.matmul(c, d)
>>> print(e)
tf.Tensor(
[[1. 3.]
 [3. 7.]], shape=(2, 2), dtype=float32)

Note that during eager execution, you may discover your Tensors are actually of type EagerTensor. This is an internal detail, but it does give you access to a useful function, numpy:

>>> type(e)
<class '...ops.EagerTensor'>
>>> print(e.numpy())
  [[1. 3.]
   [3. 7.]]

TensorFlow can define computations without immediately executing them, most commonly inside `tf.function`s, as well as in (legacy) Graph mode. In those cases, the shape (that is, the rank of the Tensor and the size of each dimension) might be only partially known.

Most operations produce tensors of fully-known shapes if the shapes of their inputs are also fully known, but in some cases it’s only possible to find the shape of a tensor at execution time.
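As a hedged sketch of this distinction (the function and shapes are illustrative):

```python
@tf.function(input_signature=[tf.TensorSpec(shape=[None, 3], dtype=tf.float32)])
def row_count(x):
  print("static shape:", x.shape)   # (None, 3): printed once, at trace time
  return tf.shape(x)[0]             # dynamic shape, known only at execution

print(row_count(tf.ones([5, 3])))   # tf.Tensor(5, shape=(), dtype=int32)
```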

There are specialized tensors; for these, see tf.Variable, tf.constant, tf.placeholder, tf.SparseTensor, and tf.RaggedTensor.

For more on Tensors, see the [guide](https://tensorflow.org/guide/tensor).

Creates a new Tensor.

Parameters:
  • op – An Operation. Operation that computes this tensor.
  • value_index – An int. Index of the operation’s endpoint that produces this tensor.
  • dtype – A DType. Type of elements stored in this tensor.
Raises:

TypeError – If the op is not an Operation.

OVERLOADABLE_OPERATORS = {'__abs__', '__add__', '__and__', '__div__', '__eq__', '__floordiv__', '__ge__', '__getitem__', '__gt__', '__invert__', '__le__', '__lt__', '__matmul__', '__mod__', '__mul__', '__ne__', '__neg__', '__or__', '__pow__', '__radd__', '__rand__', '__rdiv__', '__rfloordiv__', '__rmatmul__', '__rmod__', '__rmul__', '__ror__', '__rpow__', '__rsub__', '__rtruediv__', '__rxor__', '__sub__', '__truediv__', '__xor__'}
consumers()

Returns a list of `Operation`s that consume this tensor.

Returns: A list of `Operation`s.
device

The name of the device on which this tensor will be produced, or None.

dtype

The DType of elements in this tensor.

eval(feed_dict=None, session=None)

Evaluates this tensor in a Session.

Note: If you are not using compat.v1 libraries, you should not need this, (or feed_dict or Session). In eager execution (or within tf.function) you do not need to call eval.

Calling this method will execute all preceding operations that produce the inputs needed for the operation that produces this tensor.

N.B. Before invoking Tensor.eval(), its graph must have been launched in a session, and either a default session must be available, or session must be specified explicitly.

Parameters:
  • feed_dict – A dictionary that maps Tensor objects to feed values. See tf.Session.run for a description of the valid feed values.
  • session – (Optional.) The Session to be used to evaluate this tensor. If none, the default session will be used.
Returns:

A numpy array corresponding to the value of this tensor.

experimental_ref()

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use ref() instead.

get_shape()

Alias of tf.Tensor.shape.

graph

The Graph that contains this tensor.

name

The string name of this tensor.

op

The Operation that produces this tensor as an output.

ref()

Returns a hashable reference object to this Tensor.

The primary use case for this API is to put tensors in a set/dictionary. We can’t put tensors in a set/dictionary as tensor.__hash__() is no longer available starting Tensorflow 2.0.

The following will raise an exception starting in 2.0:

>>> x = tf.constant(5)
>>> y = tf.constant(10)
>>> z = tf.constant(10)
>>> tensor_set = {x, y, z}
Traceback (most recent call last):
  ...
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.
>>> tensor_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):
  ...
TypeError: Tensor is unhashable. Instead, use tensor.ref() as the key.

Instead, we can use tensor.ref().

>>> tensor_set = {x.ref(), y.ref(), z.ref()}
>>> x.ref() in tensor_set
True
>>> tensor_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
>>> tensor_dict[y.ref()]
'ten'

Also, the reference object provides a .deref() function that returns the original Tensor.

>>> x = tf.constant(5)
>>> x.ref().deref()
<tf.Tensor: shape=(), dtype=int32, numpy=5>
set_shape(shape)

Updates the shape of this tensor.

This method can be called multiple times, and will merge the given shape with the current shape of this tensor. It can be used to provide additional information about the shape of this tensor that cannot be inferred from the graph alone. For example, this can be used to provide additional information about the shapes of images:

```python
_, image_data = tf.compat.v1.TFRecordReader(…).read(…)
image = tf.image.decode_png(image_data, channels=3)

# The height and width dimensions of `image` are data dependent, and
# cannot be computed without executing the op.
print(image.shape)
==> TensorShape([Dimension(None), Dimension(None), Dimension(3)])

# We know that each image in this dataset is 28 x 28 pixels.
image.set_shape([28, 28, 3])
print(image.shape)
==> TensorShape([Dimension(28), Dimension(28), Dimension(3)])
```

NOTE: This shape is not enforced at runtime. Setting incorrect shapes can result in inconsistencies between the statically-known graph and the runtime value of tensors. For runtime validation of the shape, use tf.ensure_shape instead.
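For runtime validation, a brief sketch with tf.ensure_shape (the shapes are illustrative):

```python
x = tf.ones([28, 28, 3])
x = tf.ensure_shape(x, [28, 28, 3])   # passes; checked when the op executes
# tf.ensure_shape(x, [32, 32, 3])     # would raise an error at runtime
```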

Parameters: shape – A TensorShape representing the shape of this tensor, a TensorShapeProto, a list, a tuple, or None.
Raises: ValueError – If shape is not compatible with the current shape of this tensor.
shape

Returns the TensorShape that represents the shape of this tensor.

The shape is computed using shape inference functions that are registered in the Op for each Operation. See tf.TensorShape for more details of what a shape represents.

The inferred shape of a tensor is used to provide shape information without having to execute the underlying kernel. This can be used for debugging and providing early error messages. For example:

```python
>>> c = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
>>> print(c.shape)  # will be TensorShape([2, 3])
(2, 3)

>>> d = tf.constant([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0], [0.0, 1.0]])
>>> print(d.shape)
(4, 2)

>>> # Raises a ValueError, because `c` and `d` do not have compatible
>>> # inner dimensions.
>>> e = tf.matmul(c, d)
Traceback (most recent call last):
    ...
tensorflow.python.framework.errors_impl.InvalidArgumentError: Matrix size-incompatible: In[0]: [2,3], In[1]: [4,2] [Op:MatMul] name: MatMul/

>>> # This works because we have compatible shapes.
>>> f = tf.matmul(c, d, transpose_a=True, transpose_b=True)
>>> print(f.shape)
(3, 4)
```

In some cases, the inferred shape may have unknown dimensions. If the caller has additional information about the values of these dimensions, Tensor.set_shape() can be used to augment the inferred shape.

Returns: A tf.TensorShape representing the shape of this tensor.
value_index

The index of this tensor in the outputs of its Operation.

class tensorflow.TensorArray(dtype, size=None, dynamic_size=None, clear_after_read=None, tensor_array_name=None, handle=None, flow=None, infer_shape=True, element_shape=None, colocate_with_first_write_call=True, name=None)

Bases: object

Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays.

This class is meant to be used with dynamic iteration primitives such as while_loop and map_fn. It supports gradient back-propagation via special “flow” control flow dependencies.

Example 1: Plain reading and writing.

>>> ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True, clear_after_read=False)
>>> ta = ta.write(0, 10)
>>> ta = ta.write(1, 20)
>>> ta = ta.write(2, 30)
>>>
>>> ta.read(0)
<tf.Tensor: shape=(), dtype=float32, numpy=10.0>
>>> ta.read(1)
<tf.Tensor: shape=(), dtype=float32, numpy=20.0>
>>> ta.read(2)
<tf.Tensor: shape=(), dtype=float32, numpy=30.0>
>>> ta.stack()
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([10., 20., 30.],
dtype=float32)>

Example 2: Fibonacci sequence algorithm that writes in a loop then returns.

>>> @tf.function
... def fibonacci(n):
...   ta = tf.TensorArray(tf.float32, size=0, dynamic_size=True)
...   ta = ta.unstack([0., 1.])
...
...   for i in range(2, n):
...     ta = ta.write(i, ta.read(i - 1) + ta.read(i - 2))
...
...   return ta.stack()
>>>
>>> fibonacci(7)
<tf.Tensor: shape=(7,), dtype=float32,
numpy=array([0., 1., 1., 2., 3., 5., 8.], dtype=float32)>

Example 3: A simple loop interacting with a tf.Variable.

>>> v = tf.Variable(1)
>>>
>>> @tf.function
... def f(x):
...   ta = tf.TensorArray(tf.int32, size=0, dynamic_size=True)
...
...   for i in tf.range(x):
...     v.assign_add(i)
...     ta = ta.write(i, v)
...
...   return ta.stack()
>>>
>>> f(5)
<tf.Tensor: shape=(5,), dtype=int32, numpy=array([ 1,  2,  4,  7, 11],
dtype=int32)>

Construct a new TensorArray or wrap an existing TensorArray handle.

A note about the parameter name:

The name of the TensorArray (even if passed in) is uniquified: each time a new TensorArray is created at runtime it is assigned its own name for the duration of the run. This avoids name collisions if a TensorArray is created within a while_loop.

Parameters:
  • dtype – (required) data type of the TensorArray.
  • size – (optional) int32 scalar Tensor: the size of the TensorArray. Required if handle is not provided.
  • dynamic_size – (optional) Python bool: If true, writes to the TensorArray can grow the TensorArray past its initial size. Default: False.
  • clear_after_read – Boolean (optional, default: True). If True, clear TensorArray values after reading them. This disables read-many semantics, but allows early release of memory.
  • tensor_array_name – (optional) Python string: the name of the TensorArray. This is used when creating the TensorArray handle. If this value is set, handle should be None.
  • handle – (optional) A Tensor handle to an existing TensorArray. If this is set, tensor_array_name should be None. Only supported in graph mode.
  • flow – (optional) A float Tensor scalar coming from an existing TensorArray.flow. Only supported in graph mode.
  • infer_shape – (optional, default: True) If True, shape inference is enabled. In this case, all elements must have the same shape.
  • element_shape – (optional, default: None) A TensorShape object specifying the shape constraints of each of the elements of the TensorArray. Need not be fully defined.
  • colocate_with_first_write_call – If True, the TensorArray will be colocated on the same device as the Tensor used on its first write (write operations include write, unstack, and split). If False, the TensorArray will be placed on the device determined by the device context available during its initialization.
  • name – A name for the operation (optional).
Raises:
  • ValueError – if both handle and tensor_array_name are provided.
  • TypeError – if handle is provided but is not a Tensor.
close(name=None)

Close the current TensorArray.

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.

concat(name=None)

Return the values in the TensorArray as a concatenated Tensor.

All of the values must have been written, their ranks must match, and their shapes must all match for all dimensions except the first.

Args: name – A name for the operation (optional).
Returns: All the tensors in the TensorArray concatenated into one tensor.
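As a quick illustration, here is a minimal sketch (values assumed; infer_shape=False so the elements may differ in their first dimension):

```python
import tensorflow as tf

# Elements share trailing dimensions but differ in their first dimension.
ta = tf.TensorArray(tf.float32, size=2, infer_shape=False)
ta = ta.write(0, [[1., 2.]])             # shape (1, 2)
ta = ta.write(1, [[3., 4.], [5., 6.]])   # shape (2, 2)
print(ta.concat())  # shape (3, 2): elements joined along the first dimension
```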
dtype

The data type of this TensorArray.

dynamic_size

Python bool; if True the TensorArray can grow dynamically.

element_shape

The tf.TensorShape of elements in this TensorArray.

flow

The flow Tensor forcing ops leading to this TensorArray state.

gather(indices, name=None)

Return selected values in the TensorArray as a packed Tensor.

All of the selected values must have been written and their shapes must all match.

Args:
  • indices – A 1-D Tensor taking values in [0, max_value). If the TensorArray is not dynamic, max_value=size().
  • name – A name for the operation (optional).
Returns:

The tensors in the TensorArray selected by indices, packed into one tensor.
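A minimal sketch (values assumed): gather packs an arbitrary subset of the written elements:

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=3)
for i, v in enumerate([10., 20., 30.]):
    ta = ta.write(i, v)
print(ta.gather([0, 2]))  # tf.Tensor([10. 30.], shape=(2,), dtype=float32)
```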

grad(source, flow=None, name=None)
handle

The reference to the TensorArray.

identity()

Returns a TensorArray with the same content and properties.

Returns: A new TensorArray object with flow that ensures the control dependencies from the contexts will become control dependencies for writes, reads, etc. Use this object for all subsequent operations.
read(index, name=None)

Read the value at location index in the TensorArray.

Args:
  • index – 0-D. int32 tensor with the index to read from.
  • name – A name for the operation (optional).
Returns:

The tensor at index index.

scatter(indices, value, name=None)

Scatter the values of a Tensor in specific indices of a TensorArray.

Args:
  • indices – A 1-D Tensor taking values in [0, max_value). If the TensorArray is not dynamic, max_value=size().
  • value – (N+1)-D. Tensor of type dtype. The Tensor to unpack.
  • name – A name for the operation (optional).

Returns:
A new TensorArray object with flow that ensures the scatter occurs. Use this object for all subsequent operations.
Raises:
ValueError – if the shape inference fails.

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
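A minimal sketch (values assumed): row k of value lands at position indices[k], and only those positions become readable:

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=4)
# value carries one extra leading dimension; its rows go to the given indices.
ta = ta.scatter(indices=[0, 3], value=tf.constant([1., 2.]))
print(ta.read(0))  # 1.0
print(ta.read(3))  # 2.0
```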

size(name=None)

Return the size of the TensorArray.

split(value, lengths, name=None)

Split the values of a Tensor into the TensorArray.

Args:
  • value – (N+1)-D. Tensor of type dtype. The Tensor to split.
  • lengths – 1-D. int32 vector with the lengths to use when splitting value along its first dimension.
  • name – A name for the operation (optional).

Returns:
A new TensorArray object with flow that ensures the split occurs. Use this object for all subsequent operations.
Raises:
ValueError – if the shape inference fails.

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
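A minimal sketch (values assumed): lengths must sum to the first dimension of value, and each slice becomes one element (infer_shape=False here because the pieces differ in shape):

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=2, infer_shape=False)
ta = ta.split(value=tf.constant([1., 2., 3.]), lengths=[1, 2])
print(ta.read(0))  # [1.]
print(ta.read(1))  # [2. 3.]
```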

stack(name=None)

Return the values in the TensorArray as a stacked Tensor.

All of the values must have been written and their shapes must all match. If input shapes have rank-R, then output shape will have rank-(R+1).

Args: name – A name for the operation (optional).
Returns: All the tensors in the TensorArray stacked into one tensor.
unstack(value, name=None)

Unstack the values of a Tensor in the TensorArray.

If input value shapes have rank-R, then the output TensorArray will contain elements whose shapes are rank-(R-1).

Args:
  • value – (N+1)-D. Tensor of type dtype. The Tensor to unstack.
  • name – A name for the operation (optional).
Returns:
A new TensorArray object with flow that ensures the unstack occurs. Use this object for all subsequent operations.
Raises:
ValueError – if the shape inference fails.

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.
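A minimal sketch (values assumed): unstack is the inverse of stack, slicing along the first dimension:

```python
import tensorflow as tf

ta = tf.TensorArray(tf.float32, size=3)
ta = ta.unstack(tf.constant([[1., 2.], [3., 4.], [5., 6.]]))
print(ta.read(1))  # tf.Tensor([3. 4.], shape=(2,), dtype=float32)
```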

write(index, value, name=None)

Write value into index index of the TensorArray.

Args:
  • index – 0-D. int32 scalar with the index to write to.
  • value – N-D. Tensor of type dtype. The Tensor to write to this index.
  • name – A name for the operation (optional).
Returns:
A new TensorArray object with flow that ensures the write occurs. Use this object for all subsequent operations.
Raises:
ValueError – if there are more writers than specified.

NOTE The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method.

class tensorflow.TensorArraySpec(element_shape=None, dtype=tf.float32, dynamic_size=False, infer_shape=True)

Bases: tensorflow.python.framework.type_spec.TypeSpec

Type specification for a tf.TensorArray.

Constructs a type specification for a tf.TensorArray.

Args:
  • element_shape – The shape of each element in the TensorArray.
  • dtype – Data type of the TensorArray.
  • dynamic_size – Whether the TensorArray can grow past its initial size.
  • infer_shape – Whether shape inference is enabled.
static from_value(value)
is_compatible_with(other)

Returns true if other is compatible with this TypeSpec.

most_specific_compatible_type(other)

Returns the most specific TypeSpec compatible with self and other.

Args: other – A TypeSpec.
Raises:ValueError – If there is no TypeSpec that is compatible with both self and other.
value_type
class tensorflow.TensorShape(dims)

Bases: object

Represents the shape of a Tensor.

A TensorShape represents a possibly-partial shape specification for a Tensor. It may be one of the following:

  • Fully-known shape: has a known number of dimensions and a known size for each dimension. e.g. TensorShape([16, 256])
  • Partially-known shape: has a known number of dimensions, and an unknown size for one or more dimension. e.g. TensorShape([None, 256])
  • Unknown shape: has an unknown number of dimensions, and an unknown size in all dimensions. e.g. TensorShape(None)

If a tensor is produced by an operation of type “Foo”, its shape may be inferred if there is a registered shape function for “Foo”. See [Shape functions](https://tensorflow.org/extend/adding_an_op#shape_functions_in_c) for details of shape functions and how to register them. Alternatively, the shape may be set explicitly using tf.Tensor.set_shape.

Creates a new TensorShape with the given dimensions.

Args: dims – A list of Dimensions, or None if the shape is unspecified.
Raises:TypeError – If dims cannot be converted to a list of dimensions.
as_list()

Returns a list of integers or None for each dimension.

Returns: A list of integers or None for each dimension.
Raises:ValueError – If self is an unknown shape with an unknown rank.
as_proto()

Returns this shape as a TensorShapeProto.

assert_has_rank(rank)

Raises an exception if self is not compatible with the given rank.

Args: rank – An integer.
Raises:ValueError – If self does not represent a shape with the given rank.
assert_is_compatible_with(other)

Raises exception if self and other do not represent the same shape.

This method can be used to assert that there exists a shape that both self and other represent.

Args: other – Another TensorShape.
Raises:ValueError – If self and other do not represent the same shape.
assert_is_fully_defined()

Raises an exception if self is not fully defined in every dimension.

Raises:ValueError – If self does not have a known value for every dimension.
assert_same_rank(other)

Raises an exception if self and other do not have compatible ranks.

Args: other – Another TensorShape.
Raises:ValueError – If self and other do not represent shapes with the same rank.
concatenate(other)

Returns the concatenation of the dimension in self and other.

N.B. If either self or other is completely unknown, concatenation will discard information about the other shape. In future, we might support concatenation that preserves this information for use with slicing.

Args: other – Another TensorShape.
Returns: A TensorShape whose dimensions are the concatenation of the dimensions in self and other.
dims

Deprecated. Returns list of dimensions for this shape.

Suggest TensorShape.as_list instead.

Returns: A list containing `tf.compat.v1.Dimension`s, or None if the shape is unspecified.
is_compatible_with(other)

Returns True iff self is compatible with other.

Two possibly-partially-defined shapes are compatible if there exists a fully-defined shape that both shapes can represent. Thus, compatibility allows the shape inference code to reason about partially-defined shapes. For example:

  • TensorShape(None) is compatible with all shapes.
  • TensorShape([None, None]) is compatible with all two-dimensional shapes, such as TensorShape([32, 784]), and also TensorShape(None). It is not compatible with, for example, TensorShape([None]) or TensorShape([None, None, None]).
  • TensorShape([32, None]) is compatible with all two-dimensional shapes with size 32 in the 0th dimension, and also TensorShape([None, None]) and TensorShape(None). It is not compatible with, for example, TensorShape([32]), TensorShape([32, None, 1]) or TensorShape([64, None]).
  • TensorShape([32, 784]) is compatible with itself, and also TensorShape([32, None]), TensorShape([None, 784]), TensorShape([None, None]) and TensorShape(None). It is not compatible with, for example, TensorShape([32, 1, 784]) or TensorShape([None]).

The compatibility relation is reflexive and symmetric, but not transitive. For example, TensorShape([32, 784]) is compatible with TensorShape(None), and TensorShape(None) is compatible with TensorShape([4, 4]), but TensorShape([32, 784]) is not compatible with TensorShape([4, 4]).

Args: other – Another TensorShape.
Returns: True iff self is compatible with other.
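For instance, a small sketch of the compatibility rules (shapes assumed; merge_with is documented below):

```python
import tensorflow as tf

s = tf.TensorShape([32, None])
print(s.is_compatible_with(tf.TensorShape([32, 784])))  # True
print(s.is_compatible_with(tf.TensorShape([64, 784])))  # False
# merge_with combines the known information from both shapes:
print(s.merge_with(tf.TensorShape([None, 784])))        # (32, 784)
```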
is_fully_defined()

Returns True iff self is fully defined in every dimension.

merge_with(other)

Returns a TensorShape combining the information in self and other.

The dimensions in self and other are merged elementwise, according to the rules defined for Dimension.merge_with().

Args: other – Another TensorShape.
Returns: A TensorShape containing the combined information of self and other.
Raises:ValueError – If self and other are not compatible.
most_specific_compatible_shape(other)

Returns the most specific TensorShape compatible with self and other.

  • TensorShape([None, 1]) is the most specific TensorShape compatible with both TensorShape([2, 1]) and TensorShape([5, 1]). Note that TensorShape(None) is also compatible with the above-mentioned TensorShapes.
  • TensorShape([1, 2, 3]) is the most specific TensorShape compatible with both TensorShape([1, 2, 3]) and TensorShape([1, 2, 3]). There are other, less specific TensorShapes compatible with the above-mentioned TensorShapes, e.g. TensorShape([1, 2, None]) and TensorShape(None).
Args: other – Another TensorShape.
Returns: A TensorShape which is the most specific compatible shape of self and other.
ndims

Deprecated accessor for rank.

num_elements()

Returns the total number of elements, or none for incomplete shapes.

rank

Returns the rank of this shape, or None if it is unspecified.

with_rank(rank)

Returns a shape based on self with the given rank.

This method promotes a completely unknown shape to one with a known rank.

Args: rank – An integer.
Returns: A shape that is at least as specific as self with the given rank.
Raises:ValueError – If self does not represent a shape with the given rank.
with_rank_at_least(rank)

Returns a shape based on self with at least the given rank.

Args: rank – An integer.
Returns: A shape that is at least as specific as self with at least the given rank.
Raises:ValueError – If self does not represent a shape with at least the given rank.
with_rank_at_most(rank)

Returns a shape based on self with at most the given rank.

Args: rank – An integer.
Returns: A shape that is at least as specific as self with at most the given rank.
Raises:ValueError – If self does not represent a shape with at most the given rank.
class tensorflow.TensorSpec(shape, dtype=tf.float32, name=None)

Bases: tensorflow.python.framework.tensor_spec.DenseSpec, tensorflow.python.framework.type_spec.BatchableTypeSpec

Describes a tf.Tensor.

Metadata for describing the tf.Tensor objects accepted or returned by some TensorFlow APIs.

Creates a TensorSpec.

Args:
  • shape – Value convertible to tf.TensorShape. The shape of the tensor.
  • dtype – Value convertible to tf.DType. The type of the tensor values.
  • name – Optional name for the Tensor.
Raises:

TypeError – If shape is not convertible to a tf.TensorShape, or dtype is not convertible to a tf.DType.

classmethod from_tensor(tensor, name=None)
is_compatible_with(spec_or_tensor)

Returns True if spec_or_tensor is compatible with this TensorSpec.

Two tensors are considered compatible if they have the same dtype and their shapes are compatible (see tf.TensorShape.is_compatible_with).

Args: spec_or_tensor – A tf.TensorSpec or a tf.Tensor.
Returns: True if spec_or_tensor is compatible with self.
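A minimal sketch of the common use, pinning a tf.function signature (the function name is illustrative):

```python
import tensorflow as tf

@tf.function(input_signature=[tf.TensorSpec(shape=[None], dtype=tf.float32)])
def double(x):
    return x * 2.0

print(double(tf.constant([1., 2.])))  # OK: rank-1 float32 matches the spec
# double(tf.constant([[1.]]))        # would raise: rank mismatch with the spec
```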
value_type
class tensorflow.TypeSpec

Bases: object

Specifies a TensorFlow value type.

A tf.TypeSpec provides metadata describing an object accepted or returned by TensorFlow APIs. Concrete subclasses, such as tf.TensorSpec and tf.RaggedTensorSpec, are used to describe different value types.

For example, tf.function’s input_signature argument accepts a list (or nested structure) of `TypeSpec`s.

Creating new subclasses of TypeSpec (outside of TensorFlow core) is not currently supported. In particular, we may make breaking changes to the private methods and properties defined by this base class.

is_compatible_with(spec_or_value)

Returns true if spec_or_value is compatible with this TypeSpec.

most_specific_compatible_type(other)

Returns the most specific TypeSpec compatible with self and other.

Args: other – A TypeSpec.
Raises:ValueError – If there is no TypeSpec that is compatible with both self and other.
value_type

The Python type for values that are compatible with this TypeSpec.

class tensorflow.UnconnectedGradients

Bases: enum.Enum

Controls how gradient computation behaves when y does not depend on x.

The gradient of y with respect to x can be zero in two different ways: there could be no differentiable path in the graph connecting x to y (and so we can statically prove that the gradient is zero) or it could be that runtime values of tensors in a particular execution lead to a gradient of zero (say, if a relu unit happens to not be activated). To allow you to distinguish between these two cases you can choose what value gets returned for the gradient when there is no path in the graph from x to y:

  • NONE: Indicates that [None] will be returned if there is no path from x to y
  • ZERO: Indicates that a zero tensor will be returned in the shape of x.
NONE = 'none'
ZERO = 'zero'
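A minimal sketch (values assumed) contrasting the two behaviors:

```python
import tensorflow as tf

x = tf.Variable(2.0)
y = tf.Variable(3.0)
with tf.GradientTape(persistent=True) as tape:
    z = y * y  # z has no dependence on x

print(tape.gradient(z, x))  # None: the default, UnconnectedGradients.NONE
print(tape.gradient(z, x,
                    unconnected_gradients=tf.UnconnectedGradients.ZERO))
# tf.Tensor(0.0, shape=(), dtype=float32)
```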
class tensorflow.Variable(initial_value=None, trainable=None, validate_shape=True, caching_device=None, name=None, variable_def=None, dtype=None, import_scope=None, constraint=None, synchronization=<VariableSynchronization.AUTO: 0>, aggregation=<VariableAggregation.NONE: 0>, shape=None)

Bases: tensorflow.python.training.tracking.base.Trackable

See the [variable guide](https://tensorflow.org/guide/variable).

A variable maintains shared, persistent state manipulated by a program.

The Variable() constructor requires an initial value for the variable, which can be a Tensor of any type and shape. This initial value defines the type and shape of the variable. After construction, the type and shape of the variable are fixed. The value can be changed using one of the assign methods.

>>> v = tf.Variable(1.)
>>> v.assign(2.)
<tf.Variable ... shape=() dtype=float32, numpy=2.0>
>>> v.assign_add(0.5)
<tf.Variable ... shape=() dtype=float32, numpy=2.5>

The shape argument to Variable’s constructor allows you to construct a variable with a less defined shape than its initial_value:

>>> v = tf.Variable(1., shape=tf.TensorShape(None))
>>> v.assign([[1.]])
<tf.Variable ... shape=<unknown> dtype=float32, numpy=array([[1.]], ...)>

Just like any Tensor, variables created with Variable() can be used as inputs to operations. Additionally, all the operators overloaded for the Tensor class are carried over to variables.

>>> w = tf.Variable([[1.], [2.]])
>>> x = tf.constant([[3., 4.]])
>>> tf.matmul(w, x)
<tf.Tensor:... shape=(2, 2), ... numpy=
  array([[3., 4.],
         [6., 8.]], dtype=float32)>
>>> tf.sigmoid(w + x)
<tf.Tensor:... shape=(2, 2), ...>

When building a machine learning model it is often convenient to distinguish between variables holding trainable model parameters and other variables such as a step variable used to count training steps. To make this easier, the variable constructor supports a trainable=<bool> parameter. tf.GradientTape watches trainable variables by default:

>>> with tf.GradientTape(persistent=True) as tape:
...   trainable = tf.Variable(1.)
...   non_trainable = tf.Variable(2., trainable=False)
...   x1 = trainable * 2.
...   x2 = non_trainable * 3.
>>> tape.gradient(x1, trainable)
<tf.Tensor:... shape=(), dtype=float32, numpy=2.0>
>>> assert tape.gradient(x2, non_trainable) is None  # Unwatched

Variables are automatically tracked when assigned to attributes of types inheriting from tf.Module.

>>> m = tf.Module()
>>> m.v = tf.Variable([1.])
>>> m.trainable_variables
(<tf.Variable ... shape=(1,) ... numpy=array([1.], dtype=float32)>,)

This tracking then allows saving variable values to [training checkpoints](https://www.tensorflow.org/guide/checkpoint), or to [SavedModels](https://www.tensorflow.org/guide/saved_model) which include serialized TensorFlow graphs.

Variables are often captured and manipulated by `tf.function`s. This works the same way the un-decorated function would have:

>>> v = tf.Variable(0.)
>>> read_and_decrement = tf.function(lambda: v.assign_sub(0.1))
>>> read_and_decrement()
<tf.Tensor: shape=(), dtype=float32, numpy=-0.1>
>>> read_and_decrement()
<tf.Tensor: shape=(), dtype=float32, numpy=-0.2>

Variables created inside a tf.function must be owned outside the function and be created only once:

>>> class M(tf.Module):
...   @tf.function
...   def __call__(self, x):
...     if not hasattr(self, "v"):  # Or set self.v to None in __init__
...       self.v = tf.Variable(x)
...     return self.v * x
>>> m = M()
>>> m(2.)
<tf.Tensor: shape=(), dtype=float32, numpy=4.0>
>>> m(3.)
<tf.Tensor: shape=(), dtype=float32, numpy=6.0>
>>> m.v
<tf.Variable ... shape=() dtype=float32, numpy=2.0>

See the tf.function documentation for details.

Creates a new variable with value initial_value. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: (caching_device). They will be removed in a future version. Instructions for updating: A variable’s value can be manually cached by calling tf.Variable.read_value() under a tf.device scope. The caching_device argument does not work properly.

Args:
  • initial_value – A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.)
  • trainable – If True, GradientTapes automatically watch uses of this variable. Defaults to True, unless synchronization is set to ON_READ, in which case it defaults to False.
  • validate_shape – If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known.
  • caching_device – Optional device string describing where the Variable should be cached for reading. Defaults to the Variable’s device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
  • name – Optional name for the variable. Defaults to ‘Variable’ and gets uniquified automatically.
  • variable_defVariableDef protocol buffer. If not None, recreates the Variable object with its contents, referencing the variable’s nodes in the graph, which must already exist. The graph is not changed. variable_def and the other arguments are mutually exclusive.
  • dtype – If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide.
  • import_scope – Optional string. Name scope to add to the Variable. Only used when initializing from protocol buffer.
  • constraint – An optional projection function to be applied to the variable after being updated by an Optimizer (e.g. used to implement norm constraints or value constraints for layer weights). The function must take as input the unprojected Tensor representing the value of the variable and return the Tensor for the projected value (which must have the same shape). Constraints are not safe to use when doing asynchronous distributed training.
  • synchronization – Indicates when a distributed a variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize.
  • aggregation – Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.
  • shape – (optional) The shape of this variable. If None, the shape of initial_value will be used. When setting this argument to tf.TensorShape(None) (representing an unspecified shape), the variable can be assigned with values of different shapes.
Raises:
  • ValueError – If both variable_def and initial_value are specified.
  • ValueError – If the initial value is not specified, or does not have a shape and validate_shape is True.
class SaveSliceInfo(full_name=None, full_shape=None, var_offset=None, var_shape=None, save_slice_info_def=None, import_scope=None)

Bases: object

Information on how to save this Variable as a slice.

Provides internal support for saving variables as slices of a larger variable. This API is not public and is subject to change.

Available properties:

  • full_name
  • full_shape
  • var_offset
  • var_shape

Create a SaveSliceInfo.

Args:
  • full_name – Name of the full variable of which this Variable is a slice.
  • full_shape – Shape of the full variable, as a list of int.
  • var_offset – Offset of this Variable into the full variable, as a list of int.
  • var_shape – Shape of this Variable, as a list of int.
  • save_slice_info_def – SaveSliceInfoDef protocol buffer. If not None, recreates the SaveSliceInfo object from its contents. save_slice_info_def and the other arguments are mutually exclusive.
  • import_scope – Optional string. Name scope to add. Only used when initializing from protocol buffer.
spec

Computes the spec string used for saving.

to_proto(export_scope=None)

Returns a SaveSliceInfoDef() proto.

Args: export_scope – Optional string. Name scope to remove.
Returns: A SaveSliceInfoDef protocol buffer, or None if the Variable is not in the specified name scope.
aggregation
assign(value, use_locking=False, name=None, read_value=True)

Assigns a new value to the variable.

This is essentially a shortcut for assign(self, value).

Args:
  • value – A Tensor. The new value for this variable.
  • use_locking – If True, use locking during the assignment.
  • name – The name of the operation to be created
  • read_value – if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns:

The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.

assign_add(delta, use_locking=False, name=None, read_value=True)

Adds a value to this variable.

This is essentially a shortcut for assign_add(self, delta).
Args:
  • delta – A Tensor. The value to add to this variable.
  • use_locking – If True, use locking during the operation.
  • name – The name of the operation to be created
  • read_value – if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns:

The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.

assign_sub(delta, use_locking=False, name=None, read_value=True)

Subtracts a value from this variable.

This is essentially a shortcut for assign_sub(self, delta).

Args:
  • delta – A Tensor. The value to subtract from this variable.
  • use_locking – If True, use locking during the operation.
  • name – The name of the operation to be created
  • read_value – if True, will return something which evaluates to the new value of the variable; if False will return the assign op.
Returns:

The updated variable. If read_value is false, instead returns None in Eager mode and the assign op in graph mode.

batch_scatter_update(sparse_delta, use_locking=False, name=None)

Assigns tf.IndexedSlices to this variable batch-wise.

Analogous to batch_gather. This assumes that this variable and the sparse_delta IndexedSlices have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:

num_prefix_dims = sparse_delta.indices.ndims - 1
batch_dim = num_prefix_dims + 1
sparse_delta.updates.shape = sparse_delta.indices.shape + var.shape[batch_dim:]

where

sparse_delta.updates.shape[:num_prefix_dims] == sparse_delta.indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims]

And the operation performed can be expressed as:

var[i_1, ..., i_n, sparse_delta.indices[i_1, ..., i_n, j]] = sparse_delta.updates[i_1, ..., i_n, j]

When sparse_delta.indices is a 1D tensor, this operation is equivalent to scatter_update.

An alternative to this operation is to loop over the first ndims of the variable and use scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation.

Args:
  • sparse_deltatf.IndexedSlices to be assigned to this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

constraint

Returns the constraint function associated with this variable.

Returns: The constraint function that was passed to the variable constructor. Can be None if no constraint was passed.
count_up_to(limit)

Increments this variable until it reaches limit. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Dataset.range instead.

When that Op is run it tries to increment the variable by 1. If incrementing the variable would bring it above limit then the Op raises the exception OutOfRangeError.

If no error is raised, the Op outputs the value of the variable before the increment.

This is essentially a shortcut for count_up_to(self, limit).

Args: limit – value at which incrementing the variable raises an error.
Returns: A Tensor that will hold the variable value before the increment. If no other Op modifies this variable, the values produced will all be distinct.
device

The device of this variable.

dtype

The DType of this variable.

eval(session=None)

In a session, computes and returns the value of this variable.

This is not a graph construction method, it does not add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions.

```python
v = tf.Variable([1, 2])
init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init)
    # Usage passing the session explicitly.
    print(v.eval(sess))
    # Usage with the default session.  The 'with' block
    # above makes 'sess' the default session.
    print(v.eval())
```

Args: session – The session to use to evaluate this variable. If none, the default session is used.
Returns: A numpy ndarray with a copy of the value of this variable.
experimental_ref()

DEPRECATED FUNCTION

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use ref() instead.

static from_proto(variable_def, import_scope=None)

Returns a Variable object created from variable_def.

gather_nd(indices, name=None)

Gather slices from params into a Tensor with shape specified by indices.

See tf.gather_nd for details.

Args:
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as params.

get_shape()

Alias of Variable.shape.

graph

The Graph of this variable.

initial_value

Returns the Tensor used as the initial value for the variable.

Note that this is different from initialized_value() which runs the op that initializes the variable before returning its value. This method returns the tensor that is used by the op that initializes the variable.

Returns: A Tensor.
initialized_value()

Returns the value of the initialized variable. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Use Variable.read_value. Variables in 2.X are initialized automatically both in eager and graph (inside tf.defun) contexts.

You should use this instead of the variable itself to initialize another variable with a value that depends on the value of this variable.

```python
# Initialize 'v' with a random tensor.
v = tf.Variable(tf.random.truncated_normal([10, 40]))
# Use `initialized_value` to guarantee that `v` has been
# initialized before its value is used to initialize `w`.
# The random values are picked only once.
w = tf.Variable(v.initialized_value() * 2.0)
```

Returns: A Tensor holding the value of this variable after its initializer has run.
initializer

The initializer operation for this variable.

load(value, session=None)

Load new value into this variable. (deprecated)

Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Prefer Variable.assign which has equivalent behavior in 2.X.

Writes new value to variable’s memory. Doesn’t add ops to the graph.

This convenience method requires a session where the graph containing this variable has been launched. If no session is passed, the default session is used. See tf.compat.v1.Session for more information on launching a graph and on sessions.

```python
v = tf.Variable([1, 2])
init = tf.compat.v1.global_variables_initializer()

with tf.compat.v1.Session() as sess:
    sess.run(init)
    # Usage passing the session explicitly.
    v.load([2, 3], sess)
    print(v.eval(sess))  # prints [2 3]
    # Usage with the default session.  The 'with' block
    # above makes 'sess' the default session.
    v.load([3, 4], sess)
    print(v.eval())  # prints [3 4]
```

Args:
  • value – New variable value
  • session – The session to use to evaluate this variable. If none, the default session is used.
Raises:

ValueError – If the session is not passed and there is no default session.

name

The name of this variable.

op

The Operation of this variable.

read_value()

Returns the value of this variable, read in the current context.

Can be different from value() if it’s on another device, with control dependencies, etc.

Returns: A Tensor containing the value of the variable.
ref()

Returns a hashable reference object to this Variable.

The primary use case for this API is to put variables in a set/dictionary. We can’t put variables in a set/dictionary directly, as variable.__hash__() is no longer available starting with TensorFlow 2.0.

The following will raise an exception starting in 2.0:

>>> x = tf.Variable(5)
>>> y = tf.Variable(10)
>>> z = tf.Variable(10)
>>> variable_set = {x, y, z}
Traceback (most recent call last):
  ...
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.
>>> variable_dict = {x: 'five', y: 'ten'}
Traceback (most recent call last):
  ...
TypeError: Variable is unhashable. Instead, use tensor.ref() as the key.

Instead, we can use variable.ref().

>>> variable_set = {x.ref(), y.ref(), z.ref()}
>>> x.ref() in variable_set
True
>>> variable_dict = {x.ref(): 'five', y.ref(): 'ten', z.ref(): 'ten'}
>>> variable_dict[y.ref()]
'ten'

Also, the reference object provides .deref() function that returns the original Variable.

>>> x = tf.Variable(5)
>>> x.ref().deref()
<tf.Variable 'Variable:0' shape=() dtype=int32, numpy=5>
scatter_add(sparse_delta, use_locking=False, name=None)

Adds tf.IndexedSlices to this variable.

Args:
  • sparse_deltatf.IndexedSlices to be added to this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_div(sparse_delta, use_locking=False, name=None)

Divide this variable by tf.IndexedSlices.

Args:
  • sparse_deltatf.IndexedSlices to divide this variable by.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_max(sparse_delta, use_locking=False, name=None)

Updates this variable with the max of tf.IndexedSlices and itself.

Args:
  • sparse_deltatf.IndexedSlices to use as an argument of max with this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_min(sparse_delta, use_locking=False, name=None)

Updates this variable with the min of tf.IndexedSlices and itself.

Args:
  • sparse_deltatf.IndexedSlices to use as an argument of min with this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_mul(sparse_delta, use_locking=False, name=None)

Multiply this variable by tf.IndexedSlices.

Args:
  • sparse_deltatf.IndexedSlices to multiply this variable by.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_nd_add(indices, updates, name=None)

Applies sparse addition to individual values or slices in a Variable.

The Variable has rank P and indices is a Tensor of rank Q.

indices must be integer tensor, containing indices into self. It must be shape [d_0, …, d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the `K`th dimension of self.

updates is Tensor of rank Q-1+P-K with shape:

` [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. `

For example, say we want to add 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:

```python
v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
add = v.scatter_nd_add(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(add))
```

The resulting update to v would look like this:

[1, 13, 3, 14, 14, 6, 7, 20]

See tf.scatter_nd for more details about how to make updates to slices.

Args:
  • indices – The indices to be used in the operation.
  • updates – The values to be used in the operation.
  • name – the name of the operation.
Returns:

The updated variable.

scatter_nd_sub(indices, updates, name=None)

Applies sparse subtraction to individual values or slices in a Variable.

Assuming the variable has rank P and indices is a Tensor of rank Q.

indices must be integer tensor, containing indices into self. It must be shape [d_0, …, d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the `K`th dimension of self.

updates is Tensor of rank Q-1+P-K with shape:

` [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. `

For example, say we want to subtract 4 scattered elements from a rank-1 tensor with 8 elements. In Python, that update would look like this:

```python
v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = v.scatter_nd_sub(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))
```

The resulting update to v would look like this:

[1, -9, 3, -6, -4, 6, 7, -4]

See tf.scatter_nd for more details about how to make updates to slices.

Args:
  • indices – The indices to be used in the operation.
  • updates – The values to be used in the operation.
  • name – the name of the operation.
Returns:

The updated variable.

scatter_nd_update(indices, updates, name=None)

Applies sparse assignment to individual values or slices in a Variable.

The Variable has rank P and indices is a Tensor of rank Q.

indices must be integer tensor, containing indices into self. It must be shape [d_0, …, d_{Q-2}, K] where 0 < K <= P.

The innermost dimension of indices (with length K) corresponds to indices into elements (if K = P) or slices (if K < P) along the `K`th dimension of self.

updates is Tensor of rank Q-1+P-K with shape:

` [d_0, ..., d_{Q-2}, self.shape[K], ..., self.shape[P-1]]. `

For example, say we want to assign 4 scattered elements to a rank-1 tensor with 8 elements. In Python, that update would look like this:

```python
v = tf.Variable([1, 2, 3, 4, 5, 6, 7, 8])
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
op = v.scatter_nd_update(indices, updates)
with tf.compat.v1.Session() as sess:
    print(sess.run(op))
```

The resulting update to v would look like this:

[1, 11, 3, 10, 9, 6, 7, 12]

See tf.scatter_nd for more details about how to make updates to slices.

Args:
  • indices – The indices to be used in the operation.
  • updates – The values to be used in the operation.
  • name – the name of the operation.
Returns:

The updated variable.

scatter_sub(sparse_delta, use_locking=False, name=None)

Subtracts tf.IndexedSlices from this variable.

Args:
  • sparse_deltatf.IndexedSlices to be subtracted from this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.

scatter_update(sparse_delta, use_locking=False, name=None)

Assigns tf.IndexedSlices to this variable.

Args:
  • sparse_deltatf.IndexedSlices to be assigned to this variable.
  • use_locking – If True, use locking during the operation.
  • name – the name of the operation.
Returns:

The updated variable.

Raises:

TypeError – if sparse_delta is not an IndexedSlices.
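A minimal eager-mode sketch (values assumed); the IndexedSlices pairs each update row with a target index:

```python
import tensorflow as tf

v = tf.Variable([1., 2., 3., 4.])
delta = tf.IndexedSlices(values=tf.constant([9., 8.]),
                         indices=tf.constant([0, 2]))
v.scatter_update(delta)
print(v.numpy())  # [9. 2. 8. 4.]
```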

set_shape(shape)

Overrides the shape for this variable.

Args: shape – the TensorShape representing the overridden shape.
shape

The TensorShape of this variable.

Returns: A TensorShape.
sparse_read(indices, name=None)

Gather slices from the variable according to indices.

This function supports a subset of tf.gather, see tf.gather for details on usage.

Args:
  • indices – The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]).
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as params.

synchronization
to_proto(export_scope=None)

Converts a Variable to a VariableDef protocol buffer.

Args: export_scope – Optional string. Name scope to remove.
Returns: A VariableDef protocol buffer, or None if the Variable is not in the specified name scope.
trainable
value()

Returns the last snapshot of this variable.

You usually do not need to call this method as all ops that need the value of the variable call it automatically through a convert_to_tensor() call.

Returns a Tensor which holds the value of the variable. You can not assign a new value to this tensor as it is not a reference to the variable.

To avoid copies, if the consumer of the returned value is on the same device as the variable, this actually returns the live value of the variable, not a copy. Updates to the variable are seen by the consumer. If the consumer is on a different device it will get a copy of the variable.

Returns: A Tensor containing the value of the variable.
tensorflow.VariableAggregation

Alias of tensorflow.python.ops.variables.VariableAggregationV2

class tensorflow.VariableSynchronization

Bases: enum.Enum

Indicates when a distributed variable will be synced.

  • AUTO: Indicates that the synchronization will be determined by the current DistributionStrategy (e.g. with MirroredStrategy this would be ON_WRITE).
  • NONE: Indicates that there will only be one copy of the variable, so there is no need to sync.
  • ON_WRITE: Indicates that the variable will be updated across devices every time it is written.
  • ON_READ: Indicates that the variable will be aggregated across devices when it is read (e.g. when checkpointing or when evaluating an op that uses the variable).
AUTO = 0
NONE = 1
ON_READ = 3
ON_WRITE = 2
tensorflow.abs(x, name=None)

Computes the absolute value of a tensor.

Given a tensor of integer or floating-point values, this operation returns a tensor of the same type, where each element contains the absolute value of the corresponding element in the input.

Given a tensor x of complex numbers, this operation returns a tensor of type float32 or float64 that is the absolute value of each element in x. For a complex number \(a + bj\), its absolute value is computed as \(\sqrt{a^2 + b^2}\). For example:

>>> x = tf.constant([[-2.25 + 4.75j], [-3.25 + 5.75j]])
>>> tf.abs(x)
<tf.Tensor: shape=(2, 1), dtype=float64, numpy=
array([[5.25594901],
       [6.60492241]])>
Args:
  • x – A Tensor or SparseTensor of type float16, float32, float64, int32, int64, complex64 or complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor or SparseTensor of the same size, type and sparsity as x,

with absolute values. Note, for complex64 or complex128 input, the returned Tensor will be of type float32 or float64, respectively.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.abs(x.values, …), x.dense_shape)

tensorflow.acos(x, name=None)

Computes acos of x element-wise.

Args:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.acosh(x, name=None)

Computes inverse hyperbolic cosine of x element-wise.

Given an input tensor, the function computes inverse hyperbolic cosine of every element. Input range is [1, inf]. It returns nan if the input lies outside the range.

```python
x = tf.constant([-2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.acosh(x)  # ==> [nan nan 0. 0.62236255 5.9914584 9.903487 inf]
```

Args:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.add(x, y, name=None)

Returns x + y element-wise.

NOTE: math.add supports broadcasting. AddN does not. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Args:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, int16, int32, int64, complex64, complex128, string.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.
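For instance, a small sketch of the broadcasting note above (values assumed):

```python
import tensorflow as tf

x = tf.constant([[1], [2], [3]])  # shape (3, 1)
y = tf.constant([10, 20])         # shape (2,)
print(tf.add(x, y))               # broadcasts to shape (3, 2)
# tf.add_n([x, y]) would fail here: add_n requires identical shapes.
```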

tensorflow.add_n(inputs, name=None)

Adds all input tensors element-wise.

tf.math.add_n performs the same operation as tf.math.accumulate_n, but it waits for all of its inputs to be ready before beginning to sum. This buffering can result in higher memory consumption when inputs are ready at different times, since the minimum temporary storage required is proportional to the input size rather than the output size.

This op does not [broadcast]( https://docs.scipy.org/doc/numpy-1.13.0/user/basics.broadcasting.html) its inputs. If you need broadcasting, use tf.math.add (or the + operator) instead.

For example:

>>> a = tf.constant([[3, 5], [4, 8]])
>>> b = tf.constant([[1, 6], [2, 9]])
>>> tf.math.add_n([a, b, a])
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 7, 16],
       [10, 25]], dtype=int32)>
Args:
  • inputs – A list of tf.Tensor or tf.IndexedSlices objects, each with the same shape and type. tf.IndexedSlices objects will be converted into dense tensors prior to adding.
  • name – A name for the operation (optional).
Returns:

A tf.Tensor of the same shape and type as the elements of inputs.

Raises:
  • ValueError – If inputs don’t all have the same shape and dtype, or if the shape cannot be inferred.
tensorflow.argmax(input, axis=None, output_type=tf.int64, name=None)

Returns the index with the largest value across axes of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

For example:

>>> A = tf.constant([2, 20, 30, 3, 6])
>>> tf.math.argmax(A)  # A[2] is maximum in tensor A
<tf.Tensor: shape=(), dtype=int64, numpy=2>
>>> B = tf.constant([[2, 20, 30, 3, 6], [3, 11, 16, 1, 8],
...                  [14, 45, 23, 5, 27]])
>>> tf.math.argmax(B, 0)
<tf.Tensor: shape=(5,), dtype=int64, numpy=array([2, 2, 0, 2, 2])>
>>> tf.math.argmax(B, 1)
<tf.Tensor: shape=(3,), dtype=int64, numpy=array([2, 2, 1])>
Args:
  • input – A Tensor.
  • axis – An integer, the axis to reduce across. Defaults to 0.
  • output_type – An optional output dtype (tf.int32 or tf.int64). Defaults to tf.int64.
  • name – An optional name for the operation.
Returns:

A Tensor of type output_type.

tensorflow.argmin(input, axis=None, output_type=tf.int64, name=None)

Returns the index with the smallest value across axes of a tensor.

Note that in case of ties the identity of the return value is not guaranteed.

Args:
  • input – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64.
  • axis – A Tensor. Must be one of the following types: int32, int64. Must be in the range [-rank(input), rank(input)). Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
  • output_type – An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
  • name – A name for the operation (optional).
Returns:

A Tensor of type output_type.

Usage:

```python
import tensorflow as tf

a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input=a)
c = tf.keras.backend.eval(b)
# c = 0; here a[0] = 1, which is the smallest element of a across axis 0
```

tensorflow.argsort(values, axis=-1, direction='ASCENDING', stable=False, name=None)

Returns the indices of a tensor that give its sorted order along an axis.

For a 1D tensor, tf.gather(values, tf.argsort(values)) is equivalent to tf.sort(values). For higher dimensions, the output has the same shape as values, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position.

Usage:

```python
import tensorflow as tf

a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.argsort(a, axis=-1, direction='ASCENDING', stable=False, name=None)
c = tf.keras.backend.eval(b)
# Here, c = [0 3 1 2 5 4]
```

Args:
  • values – 1-D or higher numeric Tensor.
  • axis – The axis along which to sort. The default is -1, which sorts the last axis.
  • direction – The direction in which to sort the values (‘ASCENDING’ or ‘DESCENDING’).
  • stable – If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass stable=True for forwards compatibility.
  • name – Optional name for the operation.
Returns:

An int32 Tensor with the same shape as values. The indices that would

sort each slice of the given values along the given axis.

Raises:

ValueError – If axis is not a constant scalar, or the direction is invalid.

tensorflow.as_dtype(type_value)

Converts the given type_value to a DType.

Args: type_value – A value that can be converted to a tf.DType object. This may currently be a tf.DType object, a [DataType enum](https://www.tensorflow.org/code/tensorflow/core/framework/types.proto), a string type name, or a numpy.dtype.
Returns: A DType corresponding to type_value.
Raises:TypeError – If type_value cannot be converted to a DType.
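A few illustrative conversions (a sketch; inputs assumed):

```python
import numpy as np
import tensorflow as tf

print(tf.as_dtype("float32"))   # <dtype: 'float32'>
print(tf.as_dtype(np.int64))    # <dtype: 'int64'>
print(tf.as_dtype(tf.float16))  # an existing tf.DType passes through unchanged
```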
tensorflow.as_string(input, precision=-1, scientific=False, shortest=False, width=-1, fill='', name=None)

Converts each entry in the given tensor to strings.

Supports many numeric types and boolean.

For Unicode, see the [Working with Unicode text](https://www.tensorflow.org/tutorials/representation/unicode) tutorial.

Examples:

>>> tf.strings.as_string([3, 2])
<tf.Tensor: shape=(2,), dtype=string, numpy=array([b'3', b'2'], dtype=object)>
>>> tf.strings.as_string([3.1415926, 2.71828], precision=2).numpy()
array([b'3.14', b'2.72'], dtype=object)
Args:
  • input – A Tensor. Must be one of the following types: int8, int16, int32, int64, complex64, complex128, float32, float64, bool.
  • precision – An optional int. Defaults to -1. The post-decimal precision to use for floating point numbers. Only used if precision > -1.
  • scientific – An optional bool. Defaults to False. Use scientific notation for floating point numbers.
  • shortest – An optional bool. Defaults to False. Use shortest representation (either scientific or standard) for floating point numbers.
  • width – An optional int. Defaults to -1. Pad pre-decimal numbers to this width. Applies to both floating point and integer numbers. Only used if width > -1.
  • fill – An optional string. Defaults to “”. The value to pad if width > -1. If empty, pads with spaces. Another typical value is ‘0’. String cannot be longer than 1 character.
  • name – A name for the operation (optional).
Returns:

A Tensor of type string.

tensorflow.asin(x, name=None)

Computes the trigonometric inverse sine of x element-wise.

The tf.math.asin operation returns the inverse of tf.math.sin, such that if y = tf.math.sin(x) then, x = tf.math.asin(y).

Note: The output of tf.math.asin will lie within the invertible range of sine, i.e [-pi/2, pi/2].

For example:

```python
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.sin(x)  # [0.8659266, 0.7068252]

tf.math.asin(y)  # [1.047, 0.785] = x
```

Args:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.asinh(x, name=None)

Computes inverse hyperbolic sine of x element-wise.

Given an input tensor, this function computes the inverse hyperbolic sine for every element in the tensor. Both input and output have a range of [-inf, inf].

```python
x = tf.constant([-float("inf"), -2, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.asinh(x)  # ==> [-inf -1.4436355 -0.4812118 0.8813736 1.0159732 5.991471 9.903487 inf]
```

Args:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.assert_equal(x, y, message=None, summarize=None, name=None)

Assert the condition x == y holds element-wise.

This Op checks that x[i] == y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied.

If x and y are not equal, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.

Args:
  • x – Numeric Tensor.
  • y – Numeric Tensor, same dtype as and broadcastable to x.
  • message – A string to prefix to the default message.
  • summarize – Print this many entries of each tensor.
  • name – A name for this operation (optional). Defaults to “assert_equal”.
Returns:

Op that raises InvalidArgumentError if x == y is False. This can be used with tf.control_dependencies inside of `tf.function`s to block followup computation until the check has executed. In eager mode, returns None.

Raises:

InvalidArgumentError – if the check can be performed immediately and x == y is False. The check can be performed immediately during eager execution or if x and y are statically known.
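A minimal eager-mode sketch (values assumed):

```python
import tensorflow as tf

x = tf.constant([1, 2, 3])
tf.assert_equal(x, [1, 2, 3])  # passes; in eager mode this returns None
try:
    tf.assert_equal(x, [1, 2, 4], message="x mismatch")
except tf.errors.InvalidArgumentError as e:
    print(e.message)  # "x mismatch ..." plus the first offending entries
```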

tensorflow.assert_greater(x, y, message=None, summarize=None, name=None)

Assert the condition x > y holds element-wise.

This Op checks that x[i] > y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied.

If x is not greater than y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.

Args:
  • x – Numeric Tensor.
  • y – Numeric Tensor, same dtype as and broadcastable to x.
  • message – A string to prefix to the default message.
  • summarize – Print this many entries of each tensor.
  • name – A name for this operation (optional). Defaults to “assert_greater”.
Returns:

Op that raises InvalidArgumentError if x > y is False. This can be used with tf.control_dependencies inside of `tf.function`s to block followup computation until the check has executed. In eager mode, returns None.

Raises:

InvalidArgumentError – if the check can be performed immediately and x > y is False. The check can be performed immediately during eager execution or if x and y are statically known.

tensorflow.assert_less(x, y, message=None, summarize=None, name=None)

Assert the condition x < y holds element-wise.

This Op checks that x[i] < y[i] holds for every pair of (possibly broadcast) elements of x and y. If both x and y are empty, this is trivially satisfied.

If x is not less than y element-wise, message, as well as the first summarize entries of x and y are printed, and InvalidArgumentError is raised.

Args:
  • x – Numeric Tensor.
  • y – Numeric Tensor, same dtype as and broadcastable to x.
  • message – A string to prefix to the default message.
  • summarize – Print this many entries of each tensor.
  • name – A name for this operation (optional). Defaults to “assert_less”.
Returns:

Op that raises InvalidArgumentError if x < y is False. This can be used with tf.control_dependencies inside of `tf.function`s to block followup computation until the check has executed. In eager mode, returns None.

Raises:

InvalidArgumentError – if the check can be performed immediately and x < y is False. The check can be performed immediately during eager execution or if x and y are statically known.

tensorflow.assert_rank(x, rank, message=None, name=None)

Assert that x has rank equal to rank.

This Op checks that the rank of x is equal to rank.

If x has a different rank, message, as well as the shape of x are printed, and InvalidArgumentError is raised.

Args:
  • xTensor.
  • rank – Scalar integer Tensor.
  • message – A string to prefix to the default message.
  • name – A name for this operation (optional). Defaults to “assert_rank”.
Returns:

Op raising InvalidArgumentError unless x has the specified rank. If static checks determine x has the correct rank, a no_op is returned. This can be used with tf.control_dependencies inside of `tf.function`s to block followup computation until the check has executed. In eager mode, returns None.

Raises:

InvalidArgumentError – if the check can be performed immediately and x does not have rank rank. The check can be performed immediately during eager execution or if the shape of x is statically known.

tensorflow.atan(x, name=None)

Computes the trigonometric inverse tangent of x element-wise.

The tf.math.atan operation returns the inverse of tf.math.tan, such that if y = tf.math.tan(x) then, x = tf.math.atan(y).

Note: The output of tf.math.atan will lie within the invertible range of tan, i.e (-pi/2, pi/2).

For example:

```python
# Note: [1.047, 0.785] ~= [(pi/3), (pi/4)]
x = tf.constant([1.047, 0.785])
y = tf.math.tan(x)  # [1.731261, 0.99920404]

tf.math.atan(y)  # [1.047, 0.785] = x
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.atan2(y, x, name=None)

Computes arctangent of y/x element-wise, respecting signs of the arguments.

This is the angle \(\theta \in [-\pi, \pi]\) such that \(x = r \cos\theta\) and \(y = r \sin\theta\), where \(r = \sqrt{x^2 + y^2}\).

Parameters:
  • y – A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
  • x – A Tensor. Must have the same type as y.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as y.
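A minimal sketch of why atan2 is preferred over atan(y/x) when quadrants matter (sample values invented for illustration):

```python
import tensorflow as tf

y = tf.constant([1.0, 1.0, -1.0])
x = tf.constant([1.0, -1.0, -1.0])
# atan(y/x) alone cannot distinguish quadrants; atan2 respects the signs.
print(tf.atan2(y, x).numpy())  # approx [0.785  2.356 -2.356], i.e. [pi/4, 3*pi/4, -3*pi/4]
```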

tensorflow.atanh(x, name=None)

Computes inverse hyperbolic tangent of x element-wise.

Given an input tensor, this function computes inverse hyperbolic tangent for every element in the tensor. Input range is [-1,1] and output range is [-inf, inf]. If input is -1, output will be -inf and if the input is 1, output will be inf. Values outside the range will have nan as output.

```python
x = tf.constant([-float("inf"), -1, -0.5, 1, 0, 0.5, 10, float("inf")])
tf.math.atanh(x)  # ==> [nan -inf -0.54930615 inf 0. 0.54930615 nan nan]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.batch_to_space(input, block_shape, crops, name=None)

BatchToSpace for N-D tensors of type T.

This operation reshapes the “batch” dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, …, M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch (see tf.space_to_batch).

Parameters:
  • input – A N-D Tensor with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape – A 1-D Tensor with shape [M]. Must be one of the following types: int32, int64. All values must be >= 1. For backwards compatibility with TF 1.0, this parameter may be an int, in which case it is converted to numpy.array([block_shape, block_shape], dtype=numpy.int64).
  • crops –

    A 2-D Tensor with shape [M, 2]. Must be one of the following types: int32, int64. All values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]. This operation is equivalent to the following steps:

    1. Reshape input to reshaped of shape: [block_shape[0], …, block_shape[M-1], batch / prod(block_shape), input_shape[1], …, input_shape[N-1]]
    2. Permute dimensions of reshaped to produce permuted of shape: [batch / prod(block_shape), input_shape[1], block_shape[0], …, input_shape[M], block_shape[M-1], input_shape[M+1], …, input_shape[N-1]]
    3. Reshape permuted to produce reshaped_permuted of shape: [batch / prod(block_shape), input_shape[1] * block_shape[0], …, input_shape[M] * block_shape[M-1], input_shape[M+1], …, input_shape[N-1]]
    4. Crop the start and end of dimensions [1, …, M] of reshaped_permuted according to crops to produce the output of shape: [batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], …, input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], …, input_shape[N-1]]

    Some examples:

    (1) For the following input of shape [4, 1, 1, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:

    ```python
    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
    ```

    The output tensor has shape [1, 2, 2, 1] and value:

    ```python
    x = [[[[1], [2]],
          [[3], [4]]]]
    ```

    (2) For the following input of shape [4, 1, 1, 3], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:

    ```python
    [[[1, 2, 3]], [[4, 5, 6]], [[7, 8, 9]], [[10, 11, 12]]]
    ```

    The output tensor has shape [1, 2, 2, 3] and value:

    ```python
    x = [[[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]]
    ```

    (3) For the following input of shape [4, 2, 2, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]:

    ```python
    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]
    ```

    The output tensor has shape [1, 4, 4, 1] and value:

    ```python
    x = [[[[1], [2], [3], [4]],
          [[5], [6], [7], [8]],
          [[9], [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```

    (4) For the following input of shape [8, 1, 3, 1], block_shape = [2, 2], and crops = [[0, 0], [2, 0]]:

    ```python
    x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
         [[[0], [2], [4]]], [[[0], [10], [12]]],
         [[[0], [5], [7]]], [[[0], [13], [15]]],
         [[[0], [6], [8]]], [[[0], [14], [16]]]]
    ```

    The output tensor has shape [2, 2, 4, 1] and value:

    ```python
    x = [[[[1], [2], [3], [4]],
          [[5], [6], [7], [8]]],
         [[[9], [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.

tensorflow.bitcast(input, type, name=None)

Bitcasts a tensor from one type to another without copying data.

Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type.

If the input datatype T is larger than the output datatype type then the shape changes from […] to […, sizeof(T)/sizeof(type)].

If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from […, sizeof(type)/sizeof(T)] to […].

tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an error. For example,

Example 1:

>>> a = [1., 2., 3.]
>>> equality_bitcast = tf.bitcast(a, tf.complex128)
Traceback (most recent call last):
...
InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
>>> equality_cast = tf.cast(a, tf.complex128)
>>> print(equality_cast)
tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)

Example 2:

>>> tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)
<tf.Tensor: shape=(4,), dtype=uint8, numpy=array([255, 255, 255, 255], dtype=uint8)>

Example 3:

>>> x = [1., 2., 3.]
>>> y = [0., 2., 3.]
>>> equality= tf.equal(x,y)
>>> equality_cast = tf.cast(equality,tf.float32)
>>> equality_bitcast = tf.bitcast(equality_cast,tf.uint8)
>>> print(equality)
tf.Tensor([False True True], shape=(3,), dtype=bool)
>>> print(equality_cast)
tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)
>>> print(equality_bitcast)
tf.Tensor(
    [[  0   0   0   0]
     [  0   0 128  63]
     [  0   0 128  63]], shape=(3, 4), dtype=uint8)

NOTE: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.

Parameters:
  • input – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int64, int32, uint8, uint16, uint32, uint64, int8, int16, complex64, complex128, qint8, quint8, qint16, quint16, qint32.
  • type – A tf.DType from: tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32.
  • name – A name for the operation (optional).
Returns:

A Tensor of type type.

tensorflow.boolean_mask(tensor, mask, axis=None, name='boolean_mask')

Apply boolean mask to tensor.

Numpy equivalent is tensor[mask].

```python
# 1-D example
tensor = [0, 1, 2, 3]
mask = np.array([True, False, True, False])
boolean_mask(tensor, mask)  # [0, 2]
```

In general, 0 < dim(mask) = K <= dim(tensor), and mask’s shape must match the first K dimensions of tensor’s shape. We then have:

boolean_mask(tensor, mask)[i, j1,…,jd] = tensor[i1,…,iK,j1,…,jd]

where (i1,…,iK) is the ith True entry of mask (row-major order). The axis could be used with mask to indicate the axis to mask from. In that case, axis + dim(mask) <= dim(tensor) and mask’s shape must match the first axis + dim(mask) dimensions of tensor’s shape.

See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of tensor (rather than flattening them, as tf.boolean_mask does).

Parameters:
  • tensor – N-D tensor.
  • mask – K-D boolean tensor, K <= N and K must be known statically.
  • axis – A 0-D int Tensor representing the axis in tensor to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
  • name – A name for this operation (optional).
Returns:

(N-K+1)-dimensional tensor populated by entries in tensor corresponding to True values in mask.

Raises:

ValueError – If shapes do not conform.

Examples:

```python
# 2-D example
tensor = [[1, 2], [3, 4], [5, 6]]
mask = np.array([True, False, True])
boolean_mask(tensor, mask)  # [[1, 2], [5, 6]]
```
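The axis argument is the least obvious part; a minimal sketch (shapes invented for illustration):

```python
import tensorflow as tf

# Shape [2, 3, 2]; mask the middle dimension (axis=1) with a 1-D mask.
tensor = tf.reshape(tf.range(12), [2, 3, 2])
mask = tf.constant([True, False, True])
masked = tf.boolean_mask(tensor, mask, axis=1)
print(masked.shape)  # (2, 2, 2): entries 0 and 2 along axis 1 are kept
```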

tensorflow.broadcast_dynamic_shape(shape_x, shape_y)

Computes the shape of a broadcast given symbolic shapes.

When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a Tensor whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes.

Parameters:
  • shape_x – A rank 1 integer Tensor, representing the shape of x.
  • shape_y – A rank 1 integer Tensor, representing the shape of y.
Returns:

A rank 1 integer Tensor representing the broadcasted shape.

tensorflow.broadcast_static_shape(shape_x, shape_y)

Computes the shape of a broadcast given known shapes.

When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y.

For example, if shape_x is [1, 2, 3] and shape_y is [5, 1, 3], the result is a TensorShape whose value is [5, 2, 3].

This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes.

Parameters:
  • shape_x – A TensorShape
  • shape_y – A TensorShape
Returns:

A TensorShape representing the broadcasted shape.

Raises:

ValueError – If the two shapes can not be broadcasted.
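To see the two variants side by side, a minimal sketch (shapes chosen to match the examples above):

```python
import tensorflow as tf

a = tf.zeros([1, 2, 3])
b = tf.zeros([5, 1, 3])

# Dynamic: operates on shape tensors, e.g. the result of tf.shape.
print(tf.broadcast_dynamic_shape(tf.shape(a), tf.shape(b)).numpy())  # [5 2 3]

# Static: operates on TensorShapes known at graph-construction time.
print(tf.broadcast_static_shape(a.shape, b.shape))  # (5, 2, 3)
```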

tensorflow.broadcast_to(input, shape, name=None)

Broadcast an array for a compatible shape.

Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if, for each dimension pair, they are either equal or one of them is one. When trying to broadcast a Tensor to a shape, it starts with the trailing dimensions and works its way forward.

For example,

>>> x = tf.constant([1, 2, 3])
>>> y = tf.broadcast_to(x, [3, 3])
>>> print(y)
tf.Tensor(
    [[1 2 3]
     [1 2 3]
     [1 2 3]], shape=(3, 3), dtype=int32)

In the above example, the input Tensor with shape [1, 3] is broadcast to an output Tensor with shape [3, 3].

Parameters:
  • input – A Tensor. A Tensor to broadcast.
  • shape – A Tensor. Must be one of the following types: int32, int64. An 1-D int Tensor. The shape of the desired output.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.

tensorflow.case(pred_fn_pairs, default=None, exclusive=False, strict=False, name='case')

Create a case operation.

See also tf.switch_case.

The pred_fn_pairs parameter is a list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default (if provided) should return the same number and types of tensors.

If exclusive==True, all predicates are evaluated, and an exception is thrown if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default.

tf.case supports nested structures as implemented in tf.contrib.framework.nest. All of the callables must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by a callable, they are implicitly unpacked to single values. This behavior is disabled by passing strict=True.

V2 compatibility: pred_fn_pairs could be a dictionary in v1. However, tf.Tensor and tf.Variable are no longer hashable in v2, so they cannot be used as dictionary keys. Please use a list or a tuple instead.

Example 1:

Pseudocode:

```
if (x < y) return 17;
else return 23;
```

Expressions:

```python
f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = tf.case([(tf.less(x, y), f1)], default=f2)
```

Example 2:

Pseudocode:

```
if (x < y && x > z) raise OpError("Only one predicate may evaluate to True");
if (x < y) return 17;
else if (x > z) return 23;
else return -1;
```

Expressions:

```python
def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = tf.case([(tf.less(x, y), f1), (tf.greater(x, z), f2)],
            default=f3, exclusive=True)
```

Parameters:
  • pred_fn_pairs – List of pairs of a boolean scalar tensor and a callable which returns a list of tensors.
  • default – Optional callable that returns a list of tensors.
  • exclusive – True iff at most one predicate is allowed to evaluate to True.
  • strict – A boolean that enables/disables ‘strict’ mode; see above.
  • name – A name for this operation (optional).
Returns:

The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.

Raises:
  • TypeError – If pred_fn_pairs is not a list/tuple.
  • TypeError – If pred_fn_pairs is a list but does not contain 2-tuples.
  • TypeError – If fns[i] is not callable for any i, or default is not callable.
tensorflow.cast(x, dtype, name=None)

Casts a tensor to a new type.

The operation casts x (in case of Tensor) or x.values (in case of SparseTensor or IndexedSlices) to dtype.

For example:

>>> x = tf.constant([1.8, 2.2], dtype=tf.float32)
>>> tf.dtypes.cast(x, tf.int32)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>

The operation supports data types (for x and dtype) of uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, complex64, complex128, bfloat16. In case of casting from complex types (complex64, complex128) to real types, only the real part of x is returned. In case of casting from real types to complex types (complex64, complex128), the imaginary part of the returned value is set to 0. The handling of complex types here matches the behavior of numpy.

Parameters:
  • x – A Tensor or SparseTensor or IndexedSlices of numeric type. It could be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, complex64, complex128, bfloat16.
  • dtype – The destination type. The list of supported dtypes is the same as x.
  • name – A name for the operation (optional).
Returns:

A Tensor or SparseTensor or IndexedSlices with the same shape as x and the same type as dtype.

Raises:

TypeError – If x cannot be cast to the dtype.

tensorflow.clip_by_global_norm(t_list, clip_norm, use_norm=None, name=None)

Clips values of multiple tensors by the ratio of the sum of their norms.

Given a tuple or list of tensors t_list, and a clipping ratio clip_norm, this operation returns a list of clipped tensors list_clipped and the global norm (global_norm) of all tensors in t_list. Optionally, if you’ve already computed the global norm for t_list, you can specify the global norm with use_norm.

To perform the clipping, the values t_list[i] are set to:

t_list[i] * clip_norm / max(global_norm, clip_norm)

where:

global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))

If clip_norm > global_norm then the entries in t_list remain as they are, otherwise they’re all shrunk by the global ratio.

If global_norm == infinity then the entries in t_list are all set to NaN to signal that an error occurred.

Any of the entries of t_list that are of type None are ignored.

This is the correct way to perform gradient clipping (Pascanu et al., 2012).

However, it is slower than clip_by_norm() because all the parameters must be ready before the clipping operation can be performed.

Parameters:
  • t_list – A tuple or list of mixed Tensors, IndexedSlices, or None.
  • clip_norm – A 0-D (scalar) Tensor > 0. The clipping ratio.
  • use_norm – A 0-D (scalar) Tensor of type float (optional). The global norm to use. If not provided, global_norm() is used to compute the norm.
  • name – A name for the operation (optional).
Returns:

list_clipped: A list of Tensors of the same type as t_list.
global_norm: A 0-D (scalar) Tensor representing the global norm.

Raises:

TypeError – If t_list is not a sequence.

References:

On the difficulty of training Recurrent Neural Networks: [Pascanu et al., 2012](http://proceedings.mlr.press/v28/pascanu13.html) ([pdf](http://proceedings.mlr.press/v28/pascanu13.pdf))

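A minimal numeric sketch (values chosen so the norms are easy to check by hand):

```python
import tensorflow as tf

t_list = [tf.constant([3.0, 4.0]), tf.constant([6.0, 8.0])]
# l2 norms of the pieces are 5.0 and 10.0, so global_norm = sqrt(25 + 100) ~= 11.18.
clipped, global_norm = tf.clip_by_global_norm(t_list, clip_norm=5.0)
print(global_norm.numpy())           # ~11.18
print([t.numpy() for t in clipped])  # each tensor scaled by 5.0 / 11.18
```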
tensorflow.clip_by_norm(t, clip_norm, axes=None, name=None)

Clips tensor values to a maximum L2-norm.

Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its L2-norm is less than or equal to clip_norm, along the dimensions given in axes. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of t is already less than or equal to clip_norm, then t is not modified. If the L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to:

t * clip_norm / l2norm(t)

In this case, the L2-norm of the output tensor is clip_norm.

As another example, if t is a matrix and axes == [1], then each row of the output will have L2-norm less than or equal to clip_norm. If axes == [0] instead, each column of the output will be clipped.

This operation is typically used to clip gradients before applying them with an optimizer.

Parameters:
  • t – A Tensor or IndexedSlices.
  • clip_norm – A 0-D (scalar) Tensor > 0. A maximum clipping value.
  • axes – A 1-D (vector) Tensor of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
  • name – A name for the operation (optional).
Returns:

A clipped Tensor or IndexedSlices.

Raises:
  • ValueError – If the clip_norm tensor is not a 0-D scalar tensor.
  • TypeError – If dtype of the input is not a floating point or complex type.
tensorflow.clip_by_value(t, clip_value_min, clip_value_max, name=None)

Clips tensor values to a specified min and max.

Given a tensor t, this operation returns a tensor of the same type and shape as t with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.

Note: clip_value_min needs to be smaller or equal to clip_value_max for correct results.

For example:

Basic usage passes a scalar as the min and max value.

>>> t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])
>>> t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)
>>> t2.numpy()
array([[-1., -1.,  0.],
       [ 0.,  1.,  1.]], dtype=float32)

The min and max can be the same size as t, or broadcastable to that size.

>>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])
>>> clip_min = [[2],[1]]
>>> t3 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)
>>> t3.numpy()
array([[ 2.,  2., 10.],
       [ 1.,  1., 10.]], dtype=float32)

Broadcasting fails, intentionally, if you would expand the dimensions of t

>>> t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])
>>> clip_min = [[[2, 1]]] # Has a third axis
>>> t4 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)
Traceback (most recent call last):
...
InvalidArgumentError: Incompatible shapes: [2,3] vs. [1,1,2]

It throws a TypeError if you try to clip an int to a float value (tf.cast the input to float first).

>>> t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)
>>> t5 = tf.clip_by_value(t, clip_value_min=-3.1, clip_value_max=3.1)
Traceback (most recent call last):
...
TypeError: Cannot convert ...
Parameters:
  • t – A Tensor or IndexedSlices.
  • clip_value_min – The minimum value to clip to. A scalar Tensor or one that is broadcastable to the shape of t.
  • clip_value_max – The maximum value to clip to. A scalar Tensor or one that is broadcastable to the shape of t.
  • name – A name for the operation (optional).
Returns:

A clipped Tensor or IndexedSlices.

Raises:
  • tf.errors.InvalidArgumentError – If the clip tensors would trigger array broadcasting that would make the returned tensor larger than the input.
  • TypeError – If dtype of the input is int32 and dtype of the clip_value_min or clip_value_max is float32
tensorflow.complex(real, imag, name=None)

Converts two real numbers to a complex number.

Given a tensor real representing the real part of a complex number, and a tensor imag representing the imaginary part of a complex number, this operation returns complex numbers elementwise of the form \(a + bj\), where a represents the real part and b represents the imag part.

The input tensors real and imag must have the same shape.

For example:

```python
real = tf.constant([2.25, 3.25])
imag = tf.constant([4.75, 5.75])
tf.complex(real, imag)  # [[2.25 + 4.75j], [3.25 + 5.75j]]
```

Parameters:
  • real – A Tensor. Must be one of the following types: float32, float64.
  • imag – A Tensor. Must have the same type as real.
  • name – A name for the operation (optional).
Returns:

A Tensor of type complex64 or complex128.

Raises:

TypeError – If real and imag do not have the correct types.

tensorflow.concat(values, axis, name='concat')

Concatenates tensors along one dimension.

See also tf.tile, tf.stack, tf.repeat.

Concatenates the list of tensors values along dimension axis. If values[i].shape = [D0, D1, … Daxis(i), …Dn], the concatenated result has shape

[D0, D1, … Raxis, …Dn]

where

Raxis = sum(Daxis(i))

That is, the data from the input tensors is joined along the axis dimension.

The number of dimensions of the input tensors must match, and all dimensions except axis must be equal.

For example:

>>> t1 = [[1, 2, 3], [4, 5, 6]]
>>> t2 = [[7, 8, 9], [10, 11, 12]]
>>> tf.concat([t1, t2], 0)
<tf.Tensor: shape=(4, 3), dtype=int32, numpy=
array([[ 1,  2,  3],
       [ 4,  5,  6],
       [ 7,  8,  9],
       [10, 11, 12]], dtype=int32)>
>>> tf.concat([t1, t2], 1)
<tf.Tensor: shape=(2, 6), dtype=int32, numpy=
array([[ 1,  2,  3,  7,  8,  9],
       [ 4,  5,  6, 10, 11, 12]], dtype=int32)>

As in Python, the axis can also be negative. A negative axis is interpreted as counting from the end of the rank, i.e. as the (axis + rank(values))-th dimension.

For example:

>>> t1 = [[[1, 2], [2, 3]], [[4, 4], [5, 3]]]
>>> t2 = [[[7, 4], [8, 4]], [[2, 10], [15, 11]]]
>>> tf.concat([t1, t2], -1)
<tf.Tensor: shape=(2, 2, 4), dtype=int32, numpy=
  array([[[ 1,  2,  7,  4],
          [ 2,  3,  8,  4]],
         [[ 4,  4,  2, 10],
          [ 5,  3, 15, 11]]], dtype=int32)>

Note: If you are concatenating along a new axis consider using stack. E.g.

```python
tf.concat([tf.expand_dims(t, axis) for t in tensors], axis)
```

can be rewritten as

```python
tf.stack(tensors, axis=axis)
```

Parameters:
  • values – A list of Tensor objects or a single Tensor.
  • axis – 0-D int32 Tensor. Dimension along which to concatenate. Must be in the range [-rank(values), rank(values)). As in Python, indexing for axis is 0-based. A positive axis in the range [0, rank(values)) refers to the axis-th dimension, and a negative axis refers to the (axis + rank(values))-th dimension.
  • name – A name for the operation (optional).
Returns:

A Tensor resulting from concatenation of the input tensors.

tensorflow.cond(pred, true_fn=None, false_fn=None, name=None)

Return true_fn() if the predicate pred is true else false_fn().

true_fn and false_fn both return lists of output tensors. true_fn and false_fn must have the same non-zero number and type of outputs.

WARNING: Any Tensors or Operations created outside of true_fn and false_fn will be executed regardless of which branch is selected at runtime.

Although this behavior is consistent with the dataflow model of TensorFlow, it has frequently surprised users who expected a lazier semantics. Consider the following simple program:

```python
z = tf.multiply(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
```

If x < y, the tf.add operation will be executed and tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.multiply operation is always executed, unconditionally.

Note that cond calls true_fn and false_fn exactly once (inside the call to cond, and not at all during Session.run()). cond stitches together the graph fragments created during the true_fn and false_fn calls with some additional graph nodes to ensure that the right branch gets executed depending on the value of pred.

tf.cond supports nested structures as implemented in tensorflow.python.util.nest. Both true_fn and false_fn must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by true_fn and/or false_fn, they are implicitly unpacked to single values.

Note: It is illegal to “directly” use tensors created inside a cond branch outside it, e.g. by storing a reference to a branch tensor in the python state. If you need to use a tensor created in a branch function you should return it as an output of the branch function and use the output from tf.cond instead.

Parameters:
  • pred – A scalar determining whether to return the result of true_fn or false_fn.
  • true_fn – The callable to be performed if pred is true.
  • false_fn – The callable to be performed if pred is false.
  • name – Optional name prefix for the returned tensors.
Returns:

Tensors returned by the call to either true_fn or false_fn. If the callables return a singleton list, the element is extracted from the list.

Raises:
  • TypeError – if true_fn or false_fn is not callable.
  • ValueError – if true_fn and false_fn do not return the same number of tensors, or return tensors of different types.

Example:

```python
x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.multiply(x, 17)
def f2(): return tf.add(y, 23)
r = tf.cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed.
```

tensorflow.constant(value, dtype=None, shape=None, name='Const')

Creates a constant tensor from a tensor-like object.

Note: All eager tf.Tensor values are immutable (in contrast to tf.Variable). There is nothing especially _constant_ about the value returned from tf.constant. This function is not fundamentally different from tf.convert_to_tensor. The name tf.constant comes from the symbolic APIs (like tf.data or keras functional models), where the value is embedded in a Const node in the tf.Graph. tf.constant is useful for asserting that the value can be embedded that way.

If the argument dtype is not specified, then the type is inferred from the type of value.

>>> # Constant 1-D Tensor from a python list.
>>> tf.constant([1, 2, 3, 4, 5, 6])
<tf.Tensor: shape=(6,), dtype=int32,
    numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
>>> # Or a numpy array
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> tf.constant(a)
<tf.Tensor: shape=(2, 3), dtype=int64, numpy=
  array([[1, 2, 3],
         [4, 5, 6]])>

If dtype is specified the resulting tensor values are cast to the requested dtype.

>>> tf.constant([1, 2, 3, 4, 5, 6], dtype=tf.float64)
<tf.Tensor: shape=(6,), dtype=float64,
    numpy=array([1., 2., 3., 4., 5., 6.])>

If shape is set, the value is reshaped to match. Scalars are expanded to fill the shape:

>>> tf.constant(0, shape=(2, 3))
  <tf.Tensor: shape=(2, 3), dtype=int32, numpy=
  array([[0, 0, 0],
         [0, 0, 0]], dtype=int32)>
>>> tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
  array([[1, 2, 3],
         [4, 5, 6]], dtype=int32)>

tf.constant has no effect if an eager Tensor is passed as the value; it even transmits gradients:

>>> v = tf.Variable([0.0])
>>> with tf.GradientTape() as g:
...     loss = tf.constant(v + v)
>>> g.gradient(loss, v).numpy()
array([2.], dtype=float32)

But, since tf.constant embeds the value in the tf.Graph this fails for symbolic tensors:

>>> i = tf.keras.layers.Input(shape=[None, None])
>>> t = tf.constant(i)
Traceback (most recent call last):
...
NotImplementedError: ...

tf.constant will _always_ create CPU (host) tensors. In order to create tensors on other devices, use tf.identity. (If the value is an eager Tensor, however, the tensor will be returned unmodified as mentioned above.)

Related Ops:

  • tf.convert_to_tensor is similar but:
    • It has no shape argument.
    • Symbolic tensors are allowed to pass through.

    >>> i = tf.keras.layers.Input(shape=[None, None])
    >>> t = tf.convert_to_tensor(i)

  • tf.fill differs in a few ways:
    • tf.constant supports arbitrary constants, not just uniform scalar Tensors like tf.fill.
    • tf.fill creates an Op in the graph that is expanded at runtime, so it can efficiently represent large tensors.
    • Since tf.fill does not embed the value, it can produce dynamically sized outputs.
Parameters:
  • value – A constant value (or list) of output type dtype.
  • dtype – The type of the elements of the resulting tensor.
  • shape – Optional dimensions of resulting tensor.
  • name – Optional name for the tensor.
Returns:

A Constant Tensor.

Raises:
  • TypeError – if shape is incorrectly specified or unsupported.
  • ValueError – if called on a symbolic tensor.
tensorflow.constant_initializer

Alias of tensorflow.python.ops.init_ops_v2.Constant.

tensorflow.control_dependencies(control_inputs)

Wrapper for Graph.control_dependencies() using the default graph.

See tf.Graph.control_dependencies for more details.

When eager execution is enabled, any callable object in the control_inputs list will be called.

Parameters: control_inputs – A list of Operation or Tensor objects which must be executed or computed before running the operations defined in the context. Can also be None to clear the control dependencies. If eager execution is enabled, any callable object in the control_inputs list will be called.
Returns: A context manager that specifies control dependencies for all operations constructed within the context.
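A minimal sketch of forcing a side effect to run before a dependent read inside a tf.function (the counter variable and read_after_increment helper are invented for illustration):

```python
import tensorflow as tf

counter = tf.Variable(0)

@tf.function
def read_after_increment(x):
    update = counter.assign_add(1)
    # Ops created inside this context wait for `update` to execute first.
    with tf.control_dependencies([update]):
        return x + tf.cast(counter, x.dtype)

print(read_after_increment(tf.constant(1.0)).numpy())  # 2.0 on the first call
```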
tensorflow.convert_to_tensor(value, dtype=None, dtype_hint=None, name=None)

Converts the given value to a Tensor.

This function converts Python objects of various types to Tensor objects. It accepts Tensor objects, numpy arrays, Python lists, and Python scalars. For example:

>>> def my_func(arg):
...   arg = tf.convert_to_tensor(arg, dtype=tf.float32)
...   return arg
>>> # The following calls are equivalent.
>>> value_1 = my_func(tf.constant([[1.0, 2.0], [3.0, 4.0]]))
>>> print(value_1)
tf.Tensor(
  [[1. 2.]
   [3. 4.]], shape=(2, 2), dtype=float32)
>>> value_2 = my_func([[1.0, 2.0], [3.0, 4.0]])
>>> print(value_2)
tf.Tensor(
  [[1. 2.]
   [3. 4.]], shape=(2, 2), dtype=float32)
>>> value_3 = my_func(np.array([[1.0, 2.0], [3.0, 4.0]], dtype=np.float32))
>>> print(value_3)
tf.Tensor(
  [[1. 2.]
   [3. 4.]], shape=(2, 2), dtype=float32)

This function can be useful when composing a new operation in Python (such as my_func in the example above). All standard Python op constructors apply this function to each of their Tensor-valued inputs, which allows those ops to accept numpy arrays, Python lists, and scalars in addition to Tensor objects.

Note: This function diverges from default Numpy behavior for float and string types when None is present in a Python list or scalar. Rather than silently converting None values, an error will be thrown.
Parameters:
  • value – An object whose type has a registered Tensor conversion function.
  • dtype – Optional element type for the returned tensor. If missing, the type is inferred from the type of value.
  • dtype_hint – Optional element type for the returned tensor, used when dtype is None. In some cases, a caller may not have a dtype in mind when converting to a tensor, so dtype_hint can be used as a soft preference. If the conversion to dtype_hint is not possible, this argument has no effect.
  • name – Optional name to use if a new Tensor is created.
Returns:

A Tensor based on value.

Raises:
  • TypeError – If no conversion function is registered for value to dtype.
  • RuntimeError – If a registered conversion function returns an invalid value.
  • ValueError – If the value is a tensor not of given dtype in graph mode.
tensorflow.cos(x, name=None)

Computes cos of x element-wise.

Given an input tensor, this function computes cosine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1]. If input lies outside the boundary, nan is returned.

```python
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.cos(x)  # ==> [nan -0.91113025 0.87758255 0.5403023 0.36235774 0.48718765 -0.95215535 nan]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.cosh(x, name=None)

Computes hyperbolic cosine of x element-wise.

Given an input tensor, this function computes hyperbolic cosine of every element in the tensor. Input range is [-inf, inf] and output range is [1, inf].

```python
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
tf.math.cosh(x)  # ==> [inf 4.0515420e+03 1.1276259e+00 1.5430807e+00 1.8106556e+00 3.7621956e+00 1.1013233e+04 inf]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.cumsum(x, axis=0, exclusive=False, reverse=False, name=None)

Compute the cumulative sum of the tensor x along axis.

By default, this op performs an inclusive cumsum, which means that the first element of the input is identical to the first element of the output. For example:

>>> # tf.cumsum([a, b, c])   # [a, a + b, a + b + c]
>>> x = tf.constant([2, 4, 6, 8])
>>> tf.cumsum(x)
<tf.Tensor: shape=(4,), dtype=int32,
numpy=array([ 2,  6, 12, 20], dtype=int32)>
>>> # using varying `axis` values
>>> y = tf.constant([[2, 4, 6, 8], [1,3,5,7]])
>>> tf.cumsum(y, axis=0)
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[ 2,  4,  6,  8],
       [ 3,  7, 11, 15]], dtype=int32)>
>>> tf.cumsum(y, axis=1)
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
array([[ 2,  6, 12, 20],
       [ 1,  4,  9, 16]], dtype=int32)>

By setting the exclusive kwarg to True, an exclusive cumsum is performed instead:

>>> # tf.cumsum([a, b, c], exclusive=True)  => [0, a, a + b]
>>> x = tf.constant([2, 4, 6, 8])
>>> tf.cumsum(x, exclusive=True)
<tf.Tensor: shape=(4,), dtype=int32,
numpy=array([ 0,  2,  6, 12], dtype=int32)>

By setting the reverse kwarg to True, the cumsum is performed in the opposite direction:

>>> # tf.cumsum([a, b, c], reverse=True)  # [a + b + c, b + c, c]
>>> x = tf.constant([2, 4, 6, 8])
>>> tf.cumsum(x, reverse=True)
<tf.Tensor: shape=(4,), dtype=int32,
numpy=array([20, 18, 14,  8], dtype=int32)>

This is more efficient than using separate tf.reverse ops. The reverse and exclusive kwargs can also be combined:

>>> # tf.cumsum([a, b, c], exclusive=True, reverse=True)  # [b + c, c, 0]
>>> x = tf.constant([2, 4, 6, 8])
>>> tf.cumsum(x, exclusive=True, reverse=True)
<tf.Tensor: shape=(4,), dtype=int32,
numpy=array([18, 14,  8,  0], dtype=int32)>
Parameters:
  • x – A Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half.
  • axis – A Tensor of type int32 (default: 0). Must be in the range [-rank(x), rank(x)).
  • exclusive – If True, perform exclusive cumsum.
  • reverse – A bool (default: False).
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.custom_gradient(f=None)

Decorator to define a function with a custom gradient.

This decorator allows fine-grained control over the gradients of a sequence of operations. This may be useful for multiple reasons, including providing a more efficient or numerically stable gradient for a sequence of operations.

For example, consider the following function that commonly occurs in the computation of cross entropy and log likelihoods:

```python
def log1pexp(x):
    return tf.math.log(1 + tf.exp(x))
```

Due to numerical instability, the gradient of this function evaluated at x=100 is NaN. For example:

```python
x = tf.constant(100.)
y = log1pexp(x)
dy = tf.gradients(y, x)  # Will be NaN when evaluated.
```

The gradient expression can be analytically simplified to provide numerical stability:

```python
@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad
```

With this definition, the gradient at x=100 will be correctly evaluated as 1.0.

Nesting custom gradients can lead to unintuitive results. The default behavior does not correspond to n-th order derivatives. For example

```python
@tf.custom_gradient
def op(x):
    y = op1(x)
    @tf.custom_gradient
    def grad_fn(dy):
        gdy = op2(x, y, dy)
        def grad_grad_fn(ddy):  # Not the 2nd order gradient of op w.r.t. x.
            return op3(x, y, dy, ddy)
        return gdy, grad_grad_fn
    return y, grad_fn
```

The function grad_grad_fn will be calculating the first order gradient of grad_fn with respect to dy, which is used to generate forward-mode gradient graphs from backward-mode gradient graphs, but is not the same as the second order gradient of op with respect to x.

Instead, wrap nested @tf.custom_gradients in another function:

```python
@tf.custom_gradient
def op_with_fused_backprop(x):
    y, x_grad = fused_op(x)
    def first_order_gradient(dy):
        @tf.custom_gradient
        def first_order_custom(unused_x):
            def second_order_and_transpose(ddy):
                return second_order_for_x(...), gradient_wrt_dy(...)
            return x_grad, second_order_and_transpose
        return dy * first_order_custom(x)
    return y, first_order_gradient
```

Additional arguments to the inner @tf.custom_gradient-decorated function control the expected return values of the innermost function.

See also tf.RegisterGradient which registers a gradient function for a primitive TensorFlow operation. tf.custom_gradient on the other hand allows for fine grained control over the gradient computation of a sequence of operations.

Note that if the decorated function uses `Variable`s, the enclosing variable scope must be using `ResourceVariable`s.

Parameters: f – function f(*x) that returns a tuple (y, grad_fn) where:
  • x is a sequence of Tensor inputs to the function.
  • y is a Tensor or sequence of Tensor outputs of applying TensorFlow operations in f to x.
  • grad_fn is a function with the signature g(*grad_ys) which returns a list of Tensors – the derivatives of Tensors in y with respect to the Tensors in x. grad_ys is a Tensor or sequence of Tensors the same size as y, holding the initial value gradients for each Tensor in y. In a pure mathematical sense, a vector-argument vector-valued function f's derivatives should be its Jacobian matrix J. Here we are expressing the Jacobian J as a function grad_fn which defines how J will transform a vector grad_ys when left-multiplied with it (grad_ys * J). This functional representation of a matrix is convenient to use for chain-rule calculation (e.g. in the back-propagation algorithm).

    If f uses Variables (that are not part of the inputs), i.e. through get_variable, then grad_fn should have the signature g(*grad_ys, variables=None), where variables is a list of the Variables, and return a 2-tuple (grad_xs, grad_vars), where grad_xs is the same as above, and grad_vars is a list<Tensor> with the derivatives of Tensors in y with respect to the variables (that is, grad_vars has one Tensor per variable in variables).
Returns: A function h(x) which returns the same value as f(x)[0] and whose gradient (as calculated by tf.gradients) is determined by f(x)[1].
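To check the stabilized gradient in eager mode, a minimal sketch using tf.GradientTape (TF 2.x style) instead of tf.gradients:

```python
import tensorflow as tf

@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(dy):
        return dy * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

x = tf.constant(100.0)
with tf.GradientTape() as tape:
    tape.watch(x)  # x is a constant, so it must be watched explicitly
    y = log1pexp(x)
print(tape.gradient(y, x).numpy())  # 1.0, instead of NaN from the naive version
```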
tensorflow.device(device_name)

Specifies the device for ops created/executed in this context.

This function specifies the device to be used for ops created/executed in a particular context. Nested contexts will inherit and also create/execute their ops on the specified device. If a specific device is not required, consider not using this function so that a device can be automatically assigned. In general the use of this function is optional. device_name can be fully specified, as in “/job:worker/task:1/device:cpu:0”, or partially specified, containing only a subset of the “/”-separated fields. Any fields which are specified will override device annotations from outer scopes.

For example:

```python
with tf.device('/job:foo'):
  # ops created here have devices with /job:foo
  with tf.device('/job:bar/task:0/device:gpu:2'):
    # ops created here have the fully specified device above
  with tf.device('/device:gpu:1'):
    # ops created here have the device '/job:foo/device:gpu:1'
```

Parameters: device_name – The device name to use in the context.
Returns: A context manager that specifies the default device to use for newly created ops.
Raises: RuntimeError – If a function is passed in.
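A minimal eager sketch; pinning to the CPU should work on any machine:

```python
import tensorflow as tf

# Pin these ops to the CPU regardless of the default placement.
with tf.device('/device:CPU:0'):
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
    b = tf.matmul(a, a)
print(b.device)  # ends with /device:CPU:0
```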
tensorflow.divide(x, y, name=None)

Computes Python style division of x by y.

For example:

>>> x = tf.constant([16, 12, 11])
>>> y = tf.constant([4, 6, 2])
>>> tf.divide(x,y)
<tf.Tensor: shape=(3,), dtype=float64,
numpy=array([4. , 2. , 5.5])>
Parameters:
  • x – A Tensor
  • y – A Tensor
  • name – A name for the operation (optional).
Returns:

A Tensor with same shape as input

tensorflow.dynamic_partition(data, partitions, num_partitions, name=None)

Partitions data into num_partitions tensors using indices from partitions.

For each index tuple js of size partitions.ndim, the slice data[js, …] becomes part of outputs[partitions[js]]. The slices with partitions[js] = i are placed in outputs[i] in lexicographic order of js, and the first dimension of outputs[i] is the number of entries in partitions equal to i. In detail,

```python
outputs[i].shape = [sum(partitions == i)] + data.shape[partitions.ndim:]

outputs[i] = pack([data[js, ...] for js if partitions[js] == i])
```

data.shape must start with partitions.shape.

For example:

```python
# Scalar partitions.
partitions = 1
num_partitions = 2
data = [10, 20]
outputs[0] = []          # Empty with shape [0, 2]
outputs[1] = [[10, 20]]

# Vector partitions.
partitions = [0, 0, 1, 1, 0]
num_partitions = 2
data = [10, 20, 30, 40, 50]
outputs[0] = [10, 20, 50]
outputs[1] = [30, 40]
```

See dynamic_stitch for an example on how to merge partitions back.

[Figure: https://www.tensorflow.org/images/DynamicPartition.png]

Parameters:
  • data – A Tensor.
  • partitions – A Tensor of type int32. Any shape. Indices in the range [0, num_partitions).
  • num_partitions – An int that is >= 1. The number of partitions to output.
  • name – A name for the operation (optional).
Returns:

A list of num_partitions Tensor objects with the same type as data.
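The vector-partition case above, as a runnable sketch:

```python
import tensorflow as tf

data = tf.constant([10, 20, 30, 40, 50])
partitions = tf.constant([0, 0, 1, 1, 0])
outputs = tf.dynamic_partition(data, partitions, num_partitions=2)
print(outputs[0].numpy())  # [10 20 50]
print(outputs[1].numpy())  # [30 40]
```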

tensorflow.dynamic_stitch(indices, data, name=None)

Interleave the values from the data tensors into a single tensor.

Builds a merged tensor such that

```python
merged[indices[m][i, ..., j], ...] = data[m][i, ..., j, ...]
```

For example, if each indices[m] is scalar or vector, we have

```python
# Scalar indices:
merged[indices[m], ...] = data[m][...]

# Vector indices:
merged[indices[m][i], ...] = data[m][i, ...]
```

Each data[i].shape must start with the corresponding indices[i].shape, and the rest of data[i].shape must be constant w.r.t. i. That is, we must have data[i].shape = indices[i].shape + constant. In terms of this constant, the output shape is

merged.shape = [max(indices)] + constant

Values are merged in order, so if an index appears in both indices[m][i] and indices[n][j] for (m,i) < (n,j) the slice data[n][j] will appear in the merged result. If you do not need this guarantee, ParallelDynamicStitch might perform better on some devices.

For example:

```python
indices[0] = 6
indices[1] = [4, 1]
indices[2] = [[5, 2], [0, 3]]
data[0] = [61, 62]
data[1] = [[41, 42], [11, 12]]
data[2] = [[[51, 52], [21, 22]], [[1, 2], [31, 32]]]
merged = [[1, 2], [11, 12], [21, 22], [31, 32], [41, 42],
          [51, 52], [61, 62]]
```

This method can be used to merge partitions created by dynamic_partition as illustrated on the following example:

```python
# Apply function (increments x_i) on elements for which a certain condition
# applies (x_i != -1 in this example).
x = tf.constant([0.1, -1., 5.2, 4.3, -1., 7.4])
condition_mask = tf.not_equal(x, tf.constant(-1.))
partitioned_data = tf.dynamic_partition(
    x, tf.cast(condition_mask, tf.int32), 2)
partitioned_data[1] = partitioned_data[1] + 1.0
condition_indices = tf.dynamic_partition(
    tf.range(tf.shape(x)[0]), tf.cast(condition_mask, tf.int32), 2)
x = tf.dynamic_stitch(condition_indices, partitioned_data)
# Here x=[1.1, -1., 6.2, 5.3, -1, 8.4]; the -1. values remain unchanged.
```

[Figure: https://www.tensorflow.org/images/DynamicStitch.png]

Parameters:
  • indices – A list of at least 1 Tensor objects with type int32.
  • data – A list with the same length as indices of Tensor objects with the same type.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as data.
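A minimal runnable sketch of the interleaving (values invented for illustration):

```python
import tensorflow as tf

indices = [tf.constant([0, 2]), tf.constant([1, 3])]
data = [tf.constant([10.0, 30.0]), tf.constant([20.0, 40.0])]
# merged[0] = 10, merged[1] = 20, merged[2] = 30, merged[3] = 40
print(tf.dynamic_stitch(indices, data).numpy())  # [10. 20. 30. 40.]
```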

tensorflow.edit_distance(hypothesis, truth, normalize=True, name='edit_distance')

Computes the Levenshtein distance between sequences.

This operation takes variable-length sequences (hypothesis and truth), each provided as a SparseTensor, and computes the Levenshtein distance. You can normalize the edit distance by length of truth by setting normalize to true.

For example, given the following input:

```python
# 'hypothesis' is a tensor of shape [2, 1] with variable-length values:
#   (0,0) = ["a"]
#   (1,0) = ["b"]
hypothesis = tf.SparseTensor(
    [[0, 0, 0],
     [1, 0, 0]],
    ["a", "b"],
    (2, 1, 1))

# 'truth' is a tensor of shape [2, 2] with variable-length values:
#   (0,0) = []
#   (0,1) = ["a"]
#   (1,0) = ["b", "c"]
#   (1,1) = ["a"]
truth = tf.SparseTensor(
    [[0, 1, 0],
     [1, 0, 0],
     [1, 0, 1],
     [1, 1, 0]],
    ["a", "b", "c", "a"],
    (2, 2, 2))

normalize = True
```

This operation would return the following:

``python # ‘output’ is a tensor of shape `[2, 2] with edit distances normalized # by ‘truth’ lengths. output ==> [[inf, 1.0], # (0,0): no truth, (0,1): no hypothesis

[0.5, 1.0]] # (1,0): addition, (1,1): no hypothesis

```

Parameters:
  • hypothesis – A SparseTensor containing hypothesis sequences.
  • truth – A SparseTensor containing truth sequences.
  • normalize – A bool. If True, normalizes the Levenshtein distance by length of truth.
  • name – A name for the operation (optional).
Returns:

A dense Tensor with rank R - 1, where R is the rank of the SparseTensor inputs hypothesis and truth.

Raises:

TypeError – If either hypothesis or truth are not a SparseTensor.

tensorflow.eig(tensor, name=None)

Computes the eigen decomposition of a batch of matrices.

The eigenvalues and eigenvectors for a non-Hermitian matrix in general are complex. The eigenvectors are not guaranteed to be linearly independent.

Computes the eigenvalues and right eigenvectors of the innermost N-by-N matrices in tensor such that tensor[…,:,:] * v[…, :,i] = e[…, i] * v[…,:,i], for i=0…N-1.

Parameters:
  • tensor – Tensor of shape […, N, N]. Only the lower triangular part of each inner matrix is referenced.
  • name – string, optional name of the operation.
Returns:

e: Eigenvalues. Shape is […, N]. Sorted in non-decreasing order.
v: Eigenvectors. Shape is […, N, N]. The columns of the innermost matrices contain eigenvectors of the corresponding matrices in tensor.

tensorflow.eigvals(tensor, name=None)

Computes the eigenvalues of one or more matrices.

Note: If your program backpropagates through this function, you should replace it with a call to tf.linalg.eig (possibly ignoring the second output) to avoid computing the eigen decomposition twice. This is because the eigenvectors are used to compute the gradient w.r.t. the eigenvalues. See _SelfAdjointEigV2Grad in linalg_grad.py.

Parameters:
  • tensor – Tensor of shape […, N, N].
  • name – string, optional name of the operation.
Returns:

e: Eigenvalues. Shape is […, N]. The vector e[…, :] contains the N eigenvalues of tensor[…, :, :].
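A minimal sketch on a 2 x 2 symmetric matrix whose eigenvalues (-1 and 3) are easy to verify by hand; the explicit cast to complex64 is just a conservative choice of input dtype for this sketch:

```python
import tensorflow as tf

m = tf.cast(tf.constant([[1.0, 2.0], [2.0, 1.0]]), tf.complex64)
e, v = tf.eig(m)              # eigenvalues and right eigenvectors
print(e.numpy())              # approx [-1.+0.j  3.+0.j]
print(tf.eigvals(m).numpy())  # same eigenvalues, without the eigenvectors
```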

tensorflow.einsum(equation, *inputs, **kwargs)

Tensor contraction over specified indices and outer product.

Einsum allows defining Tensors by defining their element-wise computation. This computation is defined by equation, a shorthand form based on Einstein summation. As an example, consider multiplying two matrices A and B to form a matrix C. The elements of C are given by:

```
C[i,k] = sum_j A[i,j] * B[j,k]
```

The corresponding equation is:

```
ij,jk->ik
```

In general, to convert the element-wise equation into the equation string, use the following procedure (intermediate strings for matrix multiplication example provided in parentheses):

  1. remove variable names, brackets, and commas, (ik = sum_j ij * jk)
  2. replace “*” with “,”, (ik = sum_j ij , jk)
  3. drop summation signs, and (ik = ij, jk)
  4. move the output to the right, while replacing “=” with “->”. (ij,jk->ik)

Many common operations can be expressed in this way. For example:

```python
# Matrix multiplication
einsum('ij,jk->ik', m0, m1)  # output[i,k] = sum_j m0[i,j] * m1[j, k]

# Dot product
einsum('i,i->', u, v)  # output = sum_i u[i]*v[i]

# Outer product
einsum('i,j->ij', u, v)  # output[i,j] = u[i]*v[j]

# Transpose
einsum('ij->ji', m)  # output[j,i] = m[i,j]

# Trace
einsum('ii', m)  # output = trace(m) = sum_i m[i, i]

# Batch matrix multiplication
einsum('aij,ajk->aik', s, t)  # out[a,i,k] = sum_j s[a,i,j] * t[a, j, k]
```

To enable and control broadcasting, use an ellipsis. For example, to perform batch matrix multiplication with NumPy-style broadcasting across the batch dimensions, use:

```python
einsum('...ij,...jk->...ik', u, v)
```

Parameters:
  • equation – a str describing the contraction, in the same format as numpy.einsum.
  • *inputs – the inputs to contract (each one a Tensor), whose shapes should be consistent with equation.
  • **kwargs –
    • optimize: Optimization strategy to use to find the contraction path using opt_einsum. Must be ‘greedy’, ‘optimal’, ‘branch-2’, ‘branch-all’ or ‘auto’. (optional, default: ‘greedy’)
    • name: A name for the operation (optional).
Returns:

The contracted Tensor, with shape determined by equation.

Raises:

ValueError – If the format of equation is incorrect, or if the number of inputs or their shapes are inconsistent with equation.
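A runnable sketch of the matrix-multiplication equation from above:

```python
import tensorflow as tf

m0 = tf.constant([[1.0, 2.0], [3.0, 4.0]])
m1 = tf.constant([[5.0, 6.0], [7.0, 8.0]])
# C[i,k] = sum_j m0[i,j] * m1[j,k]
print(tf.einsum('ij,jk->ik', m0, m1).numpy())
# [[19. 22.]
#  [43. 50.]]
```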

tensorflow.ensure_shape(x, shape, name=None)

Updates the shape of a tensor and checks at runtime that the shape holds.

For example:

```python
x = tf.compat.v1.placeholder(tf.int32)
print(x.shape)  # ==> TensorShape(None)
y = x * 2
print(y.shape)  # ==> TensorShape(None)

y = tf.ensure_shape(y, (None, 3, 3))
print(y.shape)  # ==> TensorShape([Dimension(None), Dimension(3), Dimension(3)])

with tf.compat.v1.Session() as sess:
  # Raises tf.errors.InvalidArgumentError, because the shape (3,) is not
  # compatible with the shape (None, 3, 3)
  sess.run(y, feed_dict={x: [1, 2, 3]})
```

NOTE: This differs from Tensor.set_shape in that it sets the static shape of the resulting tensor and enforces it at runtime, raising an error if the tensor’s runtime shape is incompatible with the specified shape. Tensor.set_shape sets the static shape of the tensor without enforcing it at runtime, which may result in inconsistencies between the statically-known shape of tensors and the runtime value of tensors.

Parameters:
  • x – A Tensor.
  • shape – A TensorShape representing the shape of this tensor, a TensorShapeProto, a list, a tuple, or None.
  • name – A name for this operation (optional). Defaults to “EnsureShape”.
Returns:

A Tensor. Has the same type and contents as x. At runtime, raises a tf.errors.InvalidArgumentError if shape is incompatible with the shape of x.
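The example above is TF1-style; a minimal TF 2.x sketch with tf.function (the head helper is invented for illustration):

```python
import tensorflow as tf

@tf.function
def head(batch):
    # Raises tf.errors.InvalidArgumentError at runtime if `batch`
    # is not shaped [None, 3].
    batch = tf.ensure_shape(batch, (None, 3))
    return batch[:, 0]

print(head(tf.constant([[1, 2, 3], [4, 5, 6]])).numpy())  # [1 4]
```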

tensorflow.equal(x, y, name=None)

Returns the truth value of (x == y) element-wise.

Performs a [broadcast]( https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the arguments and then an element-wise equality comparison, returning a Tensor of boolean values.

For example:

>>> x = tf.constant([2, 4])
>>> y = tf.constant(2)
>>> tf.math.equal(x, y)
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True,  False])>
>>> x = tf.constant([2, 4])
>>> y = tf.constant([2, 4])
>>> tf.math.equal(x, y)
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([ True,  True])>
Parameters:
  • x – A tf.Tensor or tf.SparseTensor or tf.IndexedSlices.
  • y – A tf.Tensor or tf.SparseTensor or tf.IndexedSlices.
  • name – A name for the operation (optional).
Returns:

A tf.Tensor of type bool with the same size as that of x or y.

Raises:

tf.errors.InvalidArgumentError – If shapes of arguments are incompatible

tensorflow.executing_eagerly()

Checks whether the current thread has eager execution enabled.

Eager execution is enabled by default, so this API returns True in most cases. However, it might return False in the following use cases.

  • Executing inside tf.function, unless under tf.init_scope or tf.config.experimental_run_functions_eagerly(True) is previously called.
  • Executing inside a transformation function for tf.dataset.
  • tf.compat.v1.disable_eager_execution() is called.

General case:

>>> print(tf.executing_eagerly())
True

Inside tf.function:

>>> @tf.function
... def fn():
...   with tf.init_scope():
...     print(tf.executing_eagerly())
...   print(tf.executing_eagerly())
>>> fn()
True
False

Inside tf.function after tf.config.experimental_run_functions_eagerly(True) is called:

>>> tf.config.experimental_run_functions_eagerly(True)
>>> @tf.function
... def fn():
...   with tf.init_scope():
...     print(tf.executing_eagerly())
...   print(tf.executing_eagerly())
>>> fn()
True
True
>>> tf.config.experimental_run_functions_eagerly(False)

Inside a transformation function for tf.dataset:

>>> def data_fn(x):
...   print(tf.executing_eagerly())
...   return x
>>> dataset = tf.data.Dataset.range(100)
>>> dataset = dataset.map(data_fn)
False
Returns: True if the current thread has eager execution enabled.
tensorflow.exp(x, name=None)

Computes exponential of x element-wise. \(y = e^x\).

This function computes the exponential of the input tensor element-wise. i.e. math.exp(x) or \(e^x\), where x is the input tensor. \(e\) denotes Euler’s number and is approximately equal to 2.718281. Output is positive for any real input.

>>> x = tf.constant(2.0)
>>> tf.math.exp(x)
<tf.Tensor: shape=(), dtype=float32, numpy=7.389056>
>>> x = tf.constant([2.0, 8.0])
>>> tf.math.exp(x)
<tf.Tensor: shape=(2,), dtype=float32,
numpy=array([   7.389056, 2980.958   ], dtype=float32)>

For complex numbers, the exponential value is calculated as \(e^{x+iy}={e^x}{e^{iy}}={e^x}{\cos(y)+i\sin(y)}\)

For 1+1j the value would be computed as: \(e^1{\cos(1)+i\sin(1)} = 2.7182817 \times (0.5403023+0.84147096j)\)

>>> x = tf.constant(1 + 1j)
>>> tf.math.exp(x)
<tf.Tensor: shape=(), dtype=complex128,
numpy=(1.4686939399158851+2.2873552871788423j)>
Parameters:
  • x – A tf.Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A tf.Tensor. Has the same type as x.

Numpy compatibility: equivalent to np.exp.

tensorflow.expand_dims(input, axis, name=None)

Returns a tensor with an additional dimension inserted at index axis.

Given a tensor input, this operation inserts a dimension of size 1 at the dimension index axis of input’s shape. The dimension index axis starts at zero; if you specify a negative number for axis it is counted backward from the end.

This operation is useful if you want to add a batch dimension to a single element. For example, if you have a single image of shape [height, width, channels], you can make it a batch of one image with expand_dims(image, 0), which will make the shape [1, height, width, channels].

Examples:

>>> t = [[1, 2, 3],[4, 5, 6]] # shape [2, 3]
>>> tf.expand_dims(t, 0)
<tf.Tensor: shape=(1, 2, 3), dtype=int32, numpy=
array([[[1, 2, 3],
        [4, 5, 6]]], dtype=int32)>
>>> tf.expand_dims(t, 1)
<tf.Tensor: shape=(2, 1, 3), dtype=int32, numpy=
array([[[1, 2, 3]],
       [[4, 5, 6]]], dtype=int32)>
>>> tf.expand_dims(t, 2)
<tf.Tensor: shape=(2, 3, 1), dtype=int32, numpy=
array([[[1],
        [2],
        [3]],
       [[4],
        [5],
        [6]]], dtype=int32)>
>>> tf.expand_dims(t, -1) # Last dimension index. In this case, same as 2.
<tf.Tensor: shape=(2, 3, 1), dtype=int32, numpy=
array([[[1],
        [2],
        [3]],
       [[4],
        [5],
        [6]]], dtype=int32)>

This operation is related to:

  • tf.squeeze, which removes dimensions of size 1.
  • tf.reshape, which provides more flexible reshaping capability.
参数:
  • input – A Tensor.
  • axis – Integer specifying the dimension index at which to expand the shape of input. Given an input of D dimensions, axis must be in range [-(D+1), D] (inclusive).
  • name – Optional string. The name of the output Tensor.
返回:

A tensor with the same data as input, with an additional dimension inserted at the index specified by axis.

Raises:
  • ValueError – If axis is not specified.
  • InvalidArgumentError – If axis is out of range [-(D+1), D].
tensorflow.extract_volume_patches(input, ksizes, strides, padding, name=None)

Extract patches from input and put them in the “depth” output dimension. 3D extension of extract_image_patches.

参数:
  • input – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64. 5-D Tensor with shape [batch, in_planes, in_rows, in_cols, depth].
  • ksizes – A list of ints that has length >= 5. The size of the sliding window for each dimension of input.
  • strides – A list of ints that has length >= 5. 1-D of length 5. How far the centers of two consecutive patches are in input. Must be: [1, stride_planes, stride_rows, stride_cols, 1].
  • padding

    A string from: “SAME”, “VALID”. The type of padding algorithm to use.

    We specify the size-related attributes as:

    ```python
    ksizes = [1, ksize_planes, ksize_rows, ksize_cols, 1]
    strides = [1, stride_planes, strides_rows, strides_cols, 1]
    ```

  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as input.
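
A minimal sketch (added for illustration; shapes follow the argument descriptions above): non-overlapping 2x2x2 patches from a single-channel 4x4x4 volume end up packed into the last (“depth”) dimension.

```python
import tensorflow as tf

# One 4x4x4 volume with a single channel.
x = tf.reshape(tf.range(64, dtype=tf.float32), [1, 4, 4, 4, 1])

patches = tf.extract_volume_patches(
    x,
    ksizes=[1, 2, 2, 2, 1],
    strides=[1, 2, 2, 2, 1],
    padding="VALID")
# Each spatial position now holds one 2*2*2*1 = 8-value patch.
print(patches.shape)  # (1, 2, 2, 2, 8)
```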

tensorflow.eye(num_rows, num_columns=None, batch_shape=None, dtype=tf.float32, name=None)

Construct an identity matrix, or a batch of matrices.

```python
# Construct one identity matrix.
tf.eye(2) ==> [[1., 0.],
               [0., 1.]]

# Construct a batch of 3 identity matrices, each 2 x 2.
# batch_identity[i, :, :] is a 2 x 2 identity matrix, i = 0, 1, 2.
batch_identity = tf.eye(2, batch_shape=[3])

# Construct one 2 x 3 "identity" matrix
tf.eye(2, num_columns=3) ==> [[ 1., 0., 0.],
                              [ 0., 1., 0.]]
```

参数:
  • num_rows – Non-negative int32 scalar Tensor giving the number of rows in each batch matrix.
  • num_columns – Optional non-negative int32 scalar Tensor giving the number of columns in each batch matrix. Defaults to num_rows.
  • batch_shape – A list or tuple of Python integers or a 1-D int32 Tensor. If provided, the returned Tensor will have leading batch dimensions of this shape.
  • dtype – The type of an element in the resulting Tensor
  • name – A name for this Op. Defaults to “eye”.
返回:

A Tensor of shape batch_shape + [num_rows, num_columns]

tensorflow.fill(dims, value, name=None)

Creates a tensor filled with a scalar value.

This operation creates a tensor of shape dims and fills it with value.

For example:

>>> tf.fill([2, 3], 9)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[9, 9, 9],
       [9, 9, 9]], dtype=int32)>

tf.fill evaluates at graph runtime and supports dynamic shapes based on other runtime tf.Tensors, unlike tf.constant(value, shape=dims), which embeds the value as a Const node.

参数:
  • dims – A 1-D sequence of non-negative numbers. Represents the shape of the output tf.Tensor. Entries should be of type: int32, int64.
  • value – A value to fill the returned tf.Tensor.
  • name – Optional string. The name of the output tf.Tensor.
返回:

A tf.Tensor with shape dims and the same dtype as value.

Raises:
  • InvalidArgumentError – If dims contains negative entries.
  • NotFoundError – If dims contains non-integer entries.

NumPy compatibility: similar to np.full. In numpy, more parameters are supported. Passing a number argument as the shape (np.full(5, value)) is valid in numpy for specifying a 1-D shaped result, while TensorFlow does not support this syntax.

tensorflow.fingerprint(data, method='farmhash64', name=None)

Generates fingerprint values.

Generates fingerprint values of data.

Fingerprint op considers the first dimension of data as the batch dimension, and output[i] contains the fingerprint value generated from contents in data[i, …] for all i.

Fingerprint op writes fingerprint values as byte arrays. For example, the default method farmhash64 generates a 64-bit fingerprint value at a time. This 8-byte value is written out as a tf.uint8 array of size 8, in little-endian order.

For example, suppose that data has data type tf.int32 and shape (2, 3, 4), and that the fingerprint method is farmhash64. In this case, the output shape is (2, 8), where 2 is the batch dimension size of data, and 8 is the size of each fingerprint value in bytes. output[0, :] is generated from 12 integers in data[0, :, :] and similarly output[1, :] is generated from the other 12 integers in data[1, :, :].

Note that this op fingerprints the raw underlying buffer, and it does not fingerprint Tensor’s metadata such as data type and/or shape. For example, the fingerprint values are invariant under reshapes and bitcasts as long as the batch dimension remains the same:

```python
tf.fingerprint(data) == tf.fingerprint(tf.reshape(data, ...))
tf.fingerprint(data) == tf.fingerprint(tf.bitcast(data, ...))
```

For string data, one should expect tf.fingerprint(data) != tf.fingerprint(tf.strings.reduce_join(data)) in general.

参数:
  • data – A Tensor. Must have rank 1 or higher.
  • method – A Tensor of type tf.string. Fingerprint method used by this op. Currently available method is farmhash64.
  • name – A name for the operation (optional).
返回:

A two-dimensional Tensor of type tf.uint8. The first dimension equals data’s first dimension, and the second dimension size depends on the fingerprint algorithm.
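
A small sketch of the shapes involved (an illustrative addition, not from the upstream docstring):

```python
import tensorflow as tf

data = tf.zeros([2, 3, 4], dtype=tf.int32)
fp = tf.fingerprint(data)  # default method: 'farmhash64'
print(fp.shape, fp.dtype)  # (2, 8) <dtype: 'uint8'>
```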

tensorflow.floor(x, name=None)

Returns element-wise largest integer not greater than x.

参数:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64.
  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as x.
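
For example (a small doctest-style sketch added for illustration):

>>> tf.math.floor(tf.constant([1.7, -2.3, 0.5]))
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([ 1., -3.,  0.], dtype=float32)>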

tensorflow.foldl(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)

foldl on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated, consider using tf.stop_gradient instead.

Instead of: results = tf.foldl(fn, elems, back_prop=False)
Use: results = tf.nest.map_structure(tf.stop_gradient, tf.foldl(fn, elems))

This foldl operator repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.

参数:
  • fn – The callable to be performed.
  • elems – A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
  • initializer – (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator.
  • parallel_iterations – (optional) The number of iterations allowed to run in parallel.
  • back_prop – (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
  • swap_memory – (optional) True enables GPU-CPU memory swapping.
  • name – (optional) Name prefix for the returned tensors.
返回:

A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from first to last.

Raises:

TypeError – if fn is not callable.

Example

```python
elems = tf.constant([1, 2, 3, 4, 5, 6])
sum = foldl(lambda a, x: a + x, elems)  # sum == 21
```

tensorflow.foldr(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, name=None)

foldr on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated, consider using tf.stop_gradient instead.

Instead of: results = tf.foldr(fn, elems, back_prop=False)
Use: results = tf.nest.map_structure(tf.stop_gradient, tf.foldr(fn, elems))

This foldr operator repeatedly applies the callable fn to a sequence of elements from last to first. The elements are made of the tensors unpacked from elems. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is fn(initializer, values[0]).shape.

This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.

参数:
  • fn – The callable to be performed.
  • elems – A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
  • initializer – (optional) A tensor or (possibly nested) sequence of tensors, as the initial value for the accumulator.
  • parallel_iterations – (optional) The number of iterations allowed to run in parallel.
  • back_prop – (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
  • swap_memory – (optional) True enables GPU-CPU memory swapping.
  • name – (optional) Name prefix for the returned tensors.
返回:

A tensor or (possibly nested) sequence of tensors, resulting from applying fn consecutively to the list of tensors unpacked from elems, from last to first.

Raises:

TypeError – if fn is not callable.

Example

```python
elems = [1, 2, 3, 4, 5, 6]
sum = foldr(lambda a, x: a + x, elems)  # sum == 21
```

tensorflow.function(func=None, input_signature=None, autograph=True, experimental_implements=None, experimental_autograph_options=None, experimental_relax_shapes=False, experimental_compile=None)

Compiles a function into a callable TensorFlow graph.

tf.function constructs a callable that executes a TensorFlow graph (tf.Graph) created by trace-compiling the TensorFlow operations in func, effectively executing func as a TensorFlow graph.

Example usage:

>>> @tf.function
... def f(x, y):
...   return x ** 2 + y
>>> x = tf.constant([2, 3])
>>> y = tf.constant([3, -2])
>>> f(x, y)
<tf.Tensor: ... numpy=array([7, 7], ...)>

_Features_

func may use data-dependent control flow, including if, for, while, break, continue and return statements:

>>> @tf.function
... def f(x):
...   if tf.reduce_sum(x) > 0:
...     return x * x
...   else:
...     return -x // 2
>>> f(tf.constant(-2))
<tf.Tensor: ... numpy=1>

func’s closure may include tf.Tensor and tf.Variable objects:

>>> @tf.function
... def f():
...   return x ** 2 + y
>>> x = tf.constant([-2, -3])
>>> y = tf.Variable([3, -2])
>>> f()
<tf.Tensor: ... numpy=array([7, 7], ...)>

func may also use ops with side effects, such as tf.print, tf.Variable and others:

>>> v = tf.Variable(1)
>>> @tf.function
... def f(x):
...   for i in tf.range(x):
...     v.assign_add(i)
>>> f(3)
>>> v
<tf.Variable ... numpy=4>

Important: Any Python side-effects (appending to a list, printing with print, etc) will only happen once, when func is traced. To have side-effects executed on every invocation of your tf.function, they need to be written as TF ops:

>>> l = []
>>> @tf.function
... def f(x):
...   for i in x:
...     l.append(i + 1)    # Caution! Will only happen once when tracing
>>> f(tf.constant([1, 2, 3]))
>>> l
[<tf.Tensor ...>]

Instead, use TensorFlow collections like tf.TensorArray:

>>> @tf.function
... def f(x):
...   ta = tf.TensorArray(dtype=tf.int32, size=0, dynamic_size=True)
...   for i in range(len(x)):
...     ta = ta.write(i, x[i] + 1)
...   return ta.stack()
>>> f(tf.constant([1, 2, 3]))
<tf.Tensor: ..., numpy=array([2, 3, 4], ...)>

_tf.function is polymorphic_

Internally, tf.function can build more than one graph, to support arguments with different data types or shapes, since TensorFlow can build more efficient graphs that are specialized on shapes and dtypes. tf.function also treats any pure Python value as an opaque object, and builds a separate graph for each set of Python arguments that it encounters.

To obtain an individual graph, use the get_concrete_function method of the callable created by tf.function. It can be called with the same arguments as func and returns a special tf.Graph object:

>>> @tf.function
... def f(x):
...   return x + 1
>>> isinstance(f.get_concrete_function(1).graph, tf.Graph)
True

Caution: Passing python scalars or lists as arguments to tf.function will always build a new graph. To avoid this, pass numeric arguments as Tensors whenever possible:

>>> @tf.function
... def f(x):
...   return tf.abs(x)
>>> f1 = f.get_concrete_function(1)
>>> f2 = f.get_concrete_function(2)  # Slow - builds new graph
>>> f1 is f2
False
>>> f1 = f.get_concrete_function(tf.constant(1))
>>> f2 = f.get_concrete_function(tf.constant(2))  # Fast - reuses f1
>>> f1 is f2
True

Python numerical arguments should only be used when they take few distinct values, such as hyperparameters like the number of layers in a neural network.

_Input signatures_

For Tensor arguments, tf.function instantiates a separate graph for every unique set of input shapes and datatypes. The example below creates two separate graphs, each specialized to a different shape:

>>> @tf.function
... def f(x):
...   return x + 1
>>> vector = tf.constant([1.0, 1.0])
>>> matrix = tf.constant([[3.0]])
>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)
False

An “input signature” can be optionally provided to tf.function to control the graphs traced. The input signature specifies the shape and type of each Tensor argument to the function using a tf.TensorSpec object. More general shapes can be used. This is useful to avoid creating multiple graphs when Tensors have dynamic shapes. It also restricts the shape and datatype of Tensors that can be used:

>>> @tf.function(
...     input_signature=[tf.TensorSpec(shape=None, dtype=tf.float32)])
... def f(x):
...   return x + 1
>>> vector = tf.constant([1.0, 1.0])
>>> matrix = tf.constant([[3.0]])
>>> f.get_concrete_function(vector) is f.get_concrete_function(matrix)
True

_Variables may only be created once_

tf.function only allows creating new tf.Variable objects when it is called for the first time:

>>> class MyModule(tf.Module):
...   def __init__(self):
...     self.v = None
...
...   @tf.function
...   def call(self, x):
...     if self.v is None:
...       self.v = tf.Variable(tf.ones_like(x))
...     return self.v * x

In general, it is recommended to create stateful objects like tf.Variable outside of tf.function and pass them as arguments.

参数:
  • func – the function to be compiled. If func is None, tf.function returns a decorator that can be invoked with a single argument - func. In other words, tf.function(input_signature=…)(func) is equivalent to tf.function(func, input_signature=…). The former can be used as a decorator.
  • input_signature – A possibly nested sequence of tf.TensorSpec objects specifying the shapes and dtypes of the Tensors that will be supplied to this function. If None, a separate function is instantiated for each inferred input signature. If input_signature is specified, every input to func must be a Tensor, and func cannot accept **kwargs.
  • autograph – Whether autograph should be applied on func before tracing a graph. Data-dependent control flow requires autograph=True. For more information, see the [tf.function and AutoGraph guide]( https://www.tensorflow.org/guide/function).
  • experimental_implements – If provided, contains the name of a “known” function this implements, for example “mycompany.my_recurrent_cell”. This is stored as an attribute in the inference function, which can then be detected when processing serialized functions. See [standardizing composite ops](https://github.com/tensorflow/community/blob/master/rfcs/20190610-standardizing-composite_ops.md) for details, and this [example](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/compiler/mlir/lite/transforms/prepare_composite_functions_tf.cc) for a use of the attribute: the code there automatically detects and substitutes functions that implement “embedded_matmul” and allows TFLite to substitute its own implementations. For instance, a TensorFlow user can mark that their function also implements embedded_matmul (perhaps more efficiently!) by specifying it via this parameter: @tf.function(experimental_implements=”embedded_matmul”).

  • experimental_autograph_options – Optional tuple of tf.autograph.experimental.Feature values.
  • experimental_relax_shapes – When True, tf.function may generate fewer graphs that are less specialized on input shapes.
  • experimental_compile – If True, the function is always compiled by [XLA](https://www.tensorflow.org/xla). XLA may be more efficient in some cases (e.g. TPU, XLA_GPU, dense tensor computations).
返回:

If func is not None, returns a callable that will execute the compiled function (and return zero or more tf.Tensor objects). If func is None, returns a decorator that, when invoked with a single func argument, returns a callable equivalent to the case above.

Raises:
  • ValueError – when attempting to use experimental_compile, but XLA support is not enabled.
tensorflow.gather(params, indices, validate_indices=None, axis=None, batch_dims=0, name=None)

Gather slices from params axis axis according to indices.

Gather slices from params axis axis according to indices. indices must be an integer tensor of any dimension (usually 0-D or 1-D).

For 0-D (scalar) indices:

$$output[p_0, \ldots, p_{axis-1}, p_{axis+1}, \ldots, p_{N-1}] = params[p_0, \ldots, p_{axis-1}, indices, p_{axis+1}, \ldots, p_{N-1}]$$

Where N = ndims(params).

For 1-D (vector) indices with batch_dims=0:

$$output[p_0, \ldots, p_{axis-1}, i, p_{axis+1}, \ldots, p_{N-1}] = params[p_0, \ldots, p_{axis-1}, indices[i], p_{axis+1}, \ldots, p_{N-1}]$$

In the general case, produces an output tensor where:

$$output[p_0, \ldots, p_{axis-1}, i_B, \ldots, i_{M-1}, p_{axis+1}, \ldots, p_{N-1}] = params[p_0, \ldots, p_{axis-1}, indices[p_0, \ldots, p_{B-1}, i_B, \ldots, i_{M-1}], p_{axis+1}, \ldots, p_{N-1}]$$

Where N = ndims(params), M = ndims(indices), and B = batch_dims. Note that params.shape[:batch_dims] must be identical to indices.shape[:batch_dims].

The shape of the output tensor is:

> output.shape = params.shape[:axis] + indices.shape[batch_dims:] + params.shape[axis + 1:]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

See also tf.gather_nd.

(Figure: illustration of tf.gather; see https://www.tensorflow.org/images/Gather.png)

参数:
  • params – The Tensor from which to gather values. Must be at least rank axis + 1.
  • indices – The index Tensor. Must be one of the following types: int32, int64. Must be in range [0, params.shape[axis]).
  • validate_indices – Deprecated, does nothing.
  • axis – A Tensor. Must be one of the following types: int32, int64. The axis in params to gather indices from. Must be greater than or equal to batch_dims. Defaults to the first non-batch dimension. Supports negative indexes.
  • batch_dims – An integer. The number of batch dimensions. Must be less than or equal to rank(indices).
  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as params.
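
For concreteness, a small doctest-style sketch (an illustrative addition, not part of the upstream docstring):

>>> params = tf.constant([0, 10, 20, 30])
>>> tf.gather(params, [3, 1, 3]).numpy()
array([30, 10, 30], dtype=int32)
>>> m = tf.constant([[1, 2, 3], [4, 5, 6]])
>>> tf.gather(m, [2, 0], axis=1).numpy()
array([[3, 1],
       [6, 4]], dtype=int32)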

tensorflow.gather_nd(params, indices, batch_dims=0, name=None)

Gather slices from params into a Tensor with shape specified by indices.

indices is an K-dimensional integer tensor, best thought of as a (K-1)-dimensional tensor of indices into params, where each element defines a slice of params:

output[\(i_0, …, i_{K-2}\)] = params[indices[\(i_0, …, i_{K-2}\)]]

Whereas in tf.gather indices defines slices into the first dimension of params, in tf.gather_nd, indices defines slices into the first N dimensions of params, where N = indices.shape[-1].

The last dimension of indices can be at most the rank of params:

indices.shape[-1] <= params.rank

The last dimension of indices corresponds to elements (if indices.shape[-1] == params.rank) or slices (if indices.shape[-1] < params.rank) along dimension indices.shape[-1] of params. The output tensor has shape

indices.shape[:-1] + params.shape[indices.shape[-1]:]

Additionally both ‘params’ and ‘indices’ can have M leading batch dimensions that exactly match. In this case ‘batch_dims’ must be M.

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, a 0 is stored in the corresponding output value.

Some examples below.

Simple indexing into a matrix:

```python
indices = [[0, 0], [1, 1]]
params = [['a', 'b'], ['c', 'd']]
output = ['a', 'd']
```

Slice indexing into a matrix:

```python
indices = [[1], [0]]
params = [['a', 'b'], ['c', 'd']]
output = [['c', 'd'], ['a', 'b']]
```

Indexing into a 3-tensor:

```python
indices = [[1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['a1', 'b1'], ['c1', 'd1']]]

indices = [[0, 1], [1, 0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]

indices = [[0, 0, 1], [1, 0, 1]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = ['b0', 'b1']
```

The examples below are for the case when only indices have leading extra dimensions. If both ‘params’ and ‘indices’ have leading batch dimensions, use the ‘batch_dims’ parameter to run gather_nd in batch mode.

Batched indexing into a matrix:

```python
indices = [[[0, 0]], [[0, 1]]]
params = [['a', 'b'], ['c', 'd']]
output = [['a'], ['b']]
```

Batched slice indexing into a matrix:

```python
indices = [[[1]], [[0]]]
params = [['a', 'b'], ['c', 'd']]
output = [[['c', 'd']], [['a', 'b']]]
```

Batched indexing into a 3-tensor:

```python
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[[['a1', 'b1'], ['c1', 'd1']]],
          [[['a0', 'b0'], ['c0', 'd0']]]]

indices = [[[0, 1], [1, 0]], [[0, 0], [1, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0'], ['a1', 'b1']],
          [['a0', 'b0'], ['c1', 'd1']]]

indices = [[[0, 0, 1], [1, 0, 1]], [[0, 1, 1], [1, 1, 0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['b0', 'b1'], ['d0', 'c1']]
```

Examples with batched 'params' and 'indices':

```python
batch_dims = 1
indices = [[1], [0]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0', 'd0'], ['a1', 'b1']]

batch_dims = 1
indices = [[[1]], [[0]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [[['c0', 'd0']], [['a1', 'b1']]]

batch_dims = 1
indices = [[[1, 0]], [[0, 1]]]
params = [[['a0', 'b0'], ['c0', 'd0']],
          [['a1', 'b1'], ['c1', 'd1']]]
output = [['c0'], ['b1']]
```

See also tf.gather.

参数:
  • params – A Tensor. The tensor from which to gather values.
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • name – A name for the operation (optional).
  • batch_dims – An integer or a scalar ‘Tensor’. The number of batch dimensions.
返回:

A Tensor. Has the same type as params.

tensorflow.get_logger()

Return TF logger instance.
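
The returned object is a standard Python logging.Logger, so the usual logging API applies; a minimal sketch (added for illustration):

```python
import logging
import tensorflow as tf

logger = tf.get_logger()
logger.setLevel(logging.INFO)
logger.info("training started")  # routed through TensorFlow's logger
```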

tensorflow.get_static_value(tensor, partial=False)

Returns the constant value of the given tensor, if efficiently calculable.

This function attempts to partially evaluate the given tensor, and returns its value as a numpy ndarray if this succeeds.

Compatibility(V1): If constant_value(tensor) returns a non-None result, it will no longer be possible to feed a different value for tensor. This allows the result of this function to influence the graph that is constructed, and permits static shape optimizations.

参数:
  • tensor – The Tensor to be evaluated.
  • partial – If True, the returned numpy array is allowed to have partially evaluated values. Values that can’t be evaluated will be None.
返回:

A numpy ndarray containing the constant value of the given tensor, or None if it cannot be calculated.

Raises:

TypeError – if tensor is not an ops.Tensor.
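
A quick sketch of the behavior (an illustrative addition, not from the upstream docs): constants have a statically known value, while symbolic tensors inside a tf.function do not.

```python
import tensorflow as tf

print(tf.get_static_value(tf.constant([1, 2, 3])))  # [1 2 3]

@tf.function
def f(x):
  # x is symbolic while tracing, so there is no static value.
  print(tf.get_static_value(x))  # None
  return x

f(tf.constant([1, 2, 3]))
```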

tensorflow.grad_pass_through(f)

Creates a grad-pass-through op with the forward behavior provided in f.

Use this function to wrap any op, maintaining its behavior in the forward pass, but replacing the original op in the backward graph with an identity. For example:

```python
x = tf.Variable(1.0, name="x")
z = tf.Variable(3.0, name="z")

with tf.GradientTape() as tape:
  # y will evaluate to 9.0
  y = tf.grad_pass_through(x.assign)(z**2)

# grads will evaluate to 6.0
grads = tape.gradient(y, z)
```

Another example is a ‘differentiable’ moving average approximation, where gradients are allowed to flow into the last value fed to the moving average, but the moving average is still used for the forward pass:

```python
x = ...  # Some scalar value
# A moving average object, we don't need to know how this is implemented
moving_average = MovingAverage()

with backprop.GradientTape() as tape:
  # mavg_x will evaluate to the current running average value
  mavg_x = tf.grad_pass_through(moving_average)(x)

grads = tape.gradient(mavg_x, x)  # grads will evaluate to 1.0
```

参数:f – function f(*x) that returns a Tensor or nested structure of Tensor outputs.
返回:A function h(x) which returns the same values as f(x) and whose gradients are the same as those of an identity function.
tensorflow.gradients(ys, xs, grad_ys=None, name='gradients', gate_gradients=False, aggregation_method=None, stop_gradients=None, unconnected_gradients=<UnconnectedGradients.NONE: 'none'>)

Constructs symbolic derivatives of sum of ys w.r.t. x in xs.

ys and xs are each a Tensor or a list of tensors. grad_ys is a list of Tensor, holding the gradients received by the ys. The list must be the same length as ys.

gradients() adds ops to the graph to output the derivatives of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs.

grad_ys is a list of tensors of the same length as ys that holds the initial gradients for each y in ys. When grad_ys is None, we fill in a tensor of ‘1’s of the shape of y for each y in ys. A user can provide their own initial grad_ys to compute the derivatives using a different initial gradient for each y (e.g., if one wanted to weight the gradient differently for each value in each y).

stop_gradients is a Tensor or a list of tensors to be considered constant with respect to all xs. These tensors will not be backpropagated through, as though they had been explicitly disconnected using stop_gradient. Among other things, this allows computation of partial derivatives as opposed to total derivatives. For example:

```python
a = tf.constant(0.)
b = 2 * a
g = tf.gradients(a + b, [a, b], stop_gradients=[a, b])
```

Here the partial derivatives g evaluate to [1.0, 1.0], compared to the total derivatives tf.gradients(a + b, [a, b]), which take into account the influence of a on b and evaluate to [3.0, 1.0]. Note that the above is equivalent to:

```python
a = tf.stop_gradient(tf.constant(0.))
b = tf.stop_gradient(2 * a)
g = tf.gradients(a + b, [a, b])
```

stop_gradients provides a way of stopping gradient after the graph has already been constructed, as compared to tf.stop_gradient which is used during graph construction. When the two approaches are combined, backpropagation stops at both tf.stop_gradient nodes and nodes in stop_gradients, whichever is encountered first.

All integer tensors are considered constant with respect to all xs, as if they were included in stop_gradients.

unconnected_gradients determines the value returned for each x in xs if it is unconnected in the graph to ys. By default this is None to safeguard against errors. Mathematically these gradients are zero which can be requested using the ‘zero’ option. tf.UnconnectedGradients provides the following options and behaviors:

```python
a = tf.ones([1, 2])
b = tf.ones([3, 1])
g1 = tf.gradients([b], [a], unconnected_gradients='none')
sess.run(g1)  # [None]

g2 = tf.gradients([b], [a], unconnected_gradients='zero')
sess.run(g2)  # [array([[0., 0.]], dtype=float32)]
```

Let us take one practical example that comes up during the backpropagation phase. This function is used to evaluate the derivatives of the cost function with respect to weights Ws and biases bs. The sample implementation below explains what it is actually used for:

```python
Ws = tf.constant(0.)
bs = 2 * Ws
cost = Ws + bs  # This is just an example. So, please ignore the formulas.
g = tf.gradients(cost, [Ws, bs])
dCost_dW, dCost_db = g
```

参数:
  • ys – A Tensor or list of tensors to be differentiated.
  • xs – A Tensor or list of tensors to be used for differentiation.
  • grad_ys – Optional. A Tensor or list of tensors the same size as ys and holding the gradients computed for each y in ys.
  • name – Optional name to use for grouping all the gradient ops together. Defaults to ‘gradients’.
  • gate_gradients – If True, add a tuple around the gradients returned for each operation. This avoids some race conditions.
  • aggregation_method – Specifies the method used to combine gradient terms. Accepted values are constants defined in the class AggregationMethod.
  • stop_gradients – Optional. A Tensor or list of tensors not to differentiate through.
  • unconnected_gradients – Optional. Specifies the gradient value returned when the given input tensors are unconnected. Accepted values are constants defined in the class tf.UnconnectedGradients and the default value is none.
返回:

A list of Tensor of length len(xs) where each tensor is the sum(dy/dx) for y in ys and for x in xs.

Raises:
  • LookupError – if one of the operations between x and y does not have a registered gradient function.
  • ValueError – if the arguments are invalid.
  • RuntimeError – if called in Eager mode.
tensorflow.greater(x, y, name=None)

Returns the truth value of (x > y) element-wise.

NOTE: math.greater supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Example:

```python
x = tf.constant([5, 4, 6])
y = tf.constant([5, 2, 5])
tf.math.greater(x, y) ==> [False, True, True]

x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.greater(x, y) ==> [False, False, True]
```

参数:
  • x – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.

tensorflow.greater_equal(x, y, name=None)

Returns the truth value of (x >= y) element-wise.

NOTE: math.greater_equal supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Example:

```python
x = tf.constant([5, 4, 6, 7])
y = tf.constant([5, 2, 5, 10])
tf.math.greater_equal(x, y) ==> [True, True, True, False]

x = tf.constant([5, 4, 6, 7])
y = tf.constant([5])
tf.math.greater_equal(x, y) ==> [True, False, True, True]
```

参数:
  • x – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.

tensorflow.group(*inputs, **kwargs)

Create an op that groups multiple operations.

When this op finishes, all ops in inputs have finished. This op has no output.

See also tf.tuple and tf.control_dependencies.

参数:
  • *inputs – Zero or more tensors to group.
  • name – A name for this operation (optional).
返回:

An Operation that executes all its inputs.

Raises:

ValueError – If an unknown keyword argument is provided.
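
tf.group is mostly useful when building graphs; a minimal graph-mode sketch (an illustrative addition, assuming the TF2 compat.v1 Session API):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    v1 = tf.Variable(1.0)
    v2 = tf.Variable(2.0)
    # 'updates' finishes only after both assignments have run.
    updates = tf.group(v1.assign_add(1.0), v2.assign_add(1.0))

with tf.compat.v1.Session(graph=g) as sess:
    sess.run(tf.compat.v1.global_variables_initializer())
    sess.run(updates)
    print(sess.run([v1, v2]))  # [2.0, 3.0]
```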

tensorflow.guarantee_const(input, name=None)

Gives a guarantee to the TF runtime that the input tensor is a constant.

The runtime is then free to make optimizations based on this.

Only accepts value typed tensors as inputs and rejects resource variable handles as input.

Returns the input tensor without modification.

参数:
  • input – A Tensor.
  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as input.

tensorflow.hessians(ys, xs, gate_gradients=False, aggregation_method=None, name='hessians')

Constructs the Hessian of sum of ys with respect to x in xs.

hessians() adds ops to the graph to output the Hessian matrix of ys with respect to xs. It returns a list of Tensor of length len(xs) where each tensor is the Hessian of sum(ys).

The Hessian is a matrix of second-order partial derivatives of a scalar tensor (see https://en.wikipedia.org/wiki/Hessian_matrix for more details).

参数:
  • ys – A Tensor or list of tensors to be differentiated.
  • xs – A Tensor or list of tensors to be used for differentiation.
  • name – Optional name to use for grouping all the gradient ops together. Defaults to ‘hessians’.
  • colocate_gradients_with_ops – See gradients() documentation for details.
  • gate_gradients – See gradients() documentation for details.
  • aggregation_method – See gradients() documentation for details.
返回:

A list of Hessian matrices of sum(ys) for each x in xs.

Raises:

LookupError – if one of the operations between xs and ys does not have a registered gradient function.
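
Because tf.hessians adds graph ops (and, like tf.gradients, raises in eager mode), a graph-mode sketch is the natural illustration. The following is an assumption-laden example added here, not taken from the upstream docs:

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    x = tf.constant([1.0, 2.0])
    # y = x0^2 + 3*x1^2, whose Hessian is [[2., 0.], [0., 6.]].
    y = x[0] ** 2 + 3.0 * x[1] ** 2
    hess = tf.hessians(y, x)[0]

with tf.compat.v1.Session(graph=g) as sess:
    print(sess.run(hess))  # [[2. 0.], [0. 6.]]
```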

tensorflow.histogram_fixed_width(values, value_range, nbins=100, dtype=tf.int32, name=None)

Return histogram of values.

Given the tensor values, this operation returns a rank 1 histogram counting the number of entries in values that fell into every bin. The bins are equal width and determined by the arguments value_range and nbins.

参数:
  • values – Numeric Tensor.
  • value_range – Shape [2] Tensor of same dtype as values. values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
  • nbins – Scalar int32 Tensor. Number of histogram bins.
  • dtype – dtype for returned histogram.
  • name – A name for this operation (defaults to ‘histogram_fixed_width’).
返回:

A 1-D Tensor holding histogram of values.

Raises:
  • TypeError – If any unsupported dtype is provided.
  • tf.errors.InvalidArgumentError – If value_range does not satisfy value_range[0] < value_range[1].

Examples:

```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.compat.v1.get_default_session() as sess:
  hist = tf.histogram_fixed_width(new_values, value_range, nbins=5)
  variables.global_variables_initializer().run()
  sess.run(hist)  # => [2, 1, 1, 0, 2]
```

tensorflow.histogram_fixed_width_bins(values, value_range, nbins=100, dtype=tf.int32, name=None)

Bins the given values for use in a histogram.

Given the tensor values, this operation returns a rank 1 Tensor representing the indices of a histogram into which each element of values would be binned. The bins are equal width and determined by the arguments value_range and nbins.

参数:
  • values – Numeric Tensor.
  • value_range – Shape [2] Tensor of same dtype as values. values <= value_range[0] will be mapped to hist[0], values >= value_range[1] will be mapped to hist[-1].
  • nbins – Scalar int32 Tensor. Number of histogram bins.
  • dtype – dtype for returned histogram.
  • name – A name for this operation (defaults to ‘histogram_fixed_width’).
返回:

A Tensor holding the indices of the binned values whose shape matches values.

Raises:
  • TypeError – If any unsupported dtype is provided.
  • tf.errors.InvalidArgumentError – If value_range does not satisfy value_range[0] < value_range[1].

Examples:

```python
# Bins will be: (-inf, 1), [1, 2), [2, 3), [3, 4), [4, inf)
nbins = 5
value_range = [0.0, 5.0]
new_values = [-1.0, 0.0, 1.5, 2.0, 5.0, 15]

with tf.compat.v1.get_default_session() as sess:
  indices = tf.histogram_fixed_width_bins(new_values, value_range, nbins=5)
  variables.global_variables_initializer().run()
  sess.run(indices)  # [0, 0, 1, 2, 4, 4]
```

tensorflow.identity(input, name=None)

Return a Tensor with the same shape and contents as input.

The return value is not the same Tensor as the original, but contains the same values. This operation is fast when used on the same device.

For example:

>>> a = tf.constant([0.78])
>>> a_identity = tf.identity(a)
>>> a.numpy()
array([0.78], dtype=float32)
>>> a_identity.numpy()
array([0.78], dtype=float32)

Calling tf.identity on a variable will make a Tensor that represents the value of that variable at the time it is called. This is equivalent to calling <variable>.read_value().

>>> a = tf.Variable(5)
>>> a_identity = tf.identity(a)
>>> a.assign_add(1)
<tf.Variable ... shape=() dtype=int32, numpy=6>
>>> a.numpy()
6
>>> a_identity.numpy()
5
参数:
  • input – A Tensor.
  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as input.

tensorflow.identity_n(input, name=None)

Returns a list of tensors with the same shapes and contents as the input tensors.

This op can be used to override the gradient for complicated functions. For example, suppose y = f(x) and we wish to apply a custom function g for backprop such that dx = g(dy). In Python,

```python
with tf.get_default_graph().gradient_override_map(
    {'IdentityN': 'OverrideGradientWithG'}):
  y, _ = identity_n([f(x), x])

@tf.RegisterGradient('OverrideGradientWithG')
def ApplyG(op, dy, _):
  return [None, g(dy)]  # Do not backprop to f(x).
```

参数:
  • input – A list of Tensor objects.
  • name – A name for the operation (optional).
返回:

A list of Tensor objects. Has the same type as input.

tensorflow.import_graph_def(graph_def, input_map=None, return_elements=None, name=None, op_dict=None, producer_op_list=None)

Imports the graph from graph_def into the current default Graph. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: (op_dict). They will be removed in a future version. Instructions for updating: Please file an issue at https://github.com/tensorflow/tensorflow/issues if you depend on this feature.

This function provides a way to import a serialized TensorFlow [GraphDef](https://www.tensorflow.org/code/tensorflow/core/framework/graph.proto) protocol buffer, and extract individual objects in the GraphDef as tf.Tensor and tf.Operation objects. Once extracted, these objects are placed into the current default Graph. See tf.Graph.as_graph_def for a way to create a GraphDef proto.

参数:
  • graph_def – A GraphDef proto containing operations to be imported into the default graph.
  • input_map – A dictionary mapping input names (as strings) in graph_def to Tensor objects. The values of the named input tensors in the imported graph will be re-mapped to the respective Tensor values.
  • return_elements – A list of strings containing operation names in graph_def that will be returned as Operation objects; and/or tensor names in graph_def that will be returned as Tensor objects.
  • name – (Optional.) A prefix that will be prepended to the names in graph_def. Note that this does not apply to imported function names. Defaults to “import”.
  • op_dict – (Optional.) Deprecated, do not use.
  • producer_op_list – (Optional.) An OpList proto with the (possibly stripped) list of OpDefs used by the producer of the graph. If provided, unrecognized attrs for ops in graph_def that have their default value according to producer_op_list will be removed. This will allow some more GraphDefs produced by later binaries to be accepted by earlier binaries.
返回:

A list of Operation and/or Tensor objects from the imported graph, corresponding to the names in return_elements, and None if return_elements is None.

Raises:
  • TypeError – If graph_def is not a GraphDef proto, input_map is not a dictionary mapping strings to Tensor objects, or return_elements is not a list of strings.
  • ValueError – If input_map, or return_elements contains names that do not appear in graph_def, or graph_def is not well-formed (e.g. it refers to an unknown tensor).
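
A round-trip sketch (illustrative, not from the upstream docstring): serialize a small graph with tf.Graph.as_graph_def and re-import it.

```python
import tensorflow as tf

g1 = tf.Graph()
with g1.as_default():
    a = tf.constant(3.0, name="a")
    b = tf.constant(4.0, name="b")
    c = tf.add(a, b, name="c")
graph_def = g1.as_graph_def()

g2 = tf.Graph()
with g2.as_default():
    # Fetch the imported tensor back out by name.
    (c_imported,) = tf.import_graph_def(
        graph_def, return_elements=["c:0"], name="imported")

with tf.compat.v1.Session(graph=g2) as sess:
    print(sess.run(c_imported))  # 7.0
```
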
tensorflow.init_scope()

A context manager that lifts ops out of control-flow scopes and function-building graphs.

There is often a need to lift variable initialization ops out of control-flow scopes, function-building graphs, and gradient tapes. Entering an init_scope is a mechanism for satisfying these desiderata. In particular, entering an init_scope has three effects:

  1. All control dependencies are cleared the moment the scope is entered; this is equivalent to entering the context manager returned from control_dependencies(None), which has the side-effect of exiting control-flow scopes like tf.cond and tf.while_loop.
  2. All operations that are created while the scope is active are lifted into the lowest context on the context_stack that is not building a graph function. Here, a context is defined as either a graph or an eager context. Every context switch, i.e., every installation of a graph as the default graph and every switch into eager mode, is logged in a thread-local stack called context_switches; the log entry for a context switch is popped from the stack when the context is exited. Entering an init_scope is equivalent to crawling up context_switches, finding the first context that is not building a graph function, and entering it. A caveat is that if graph mode is enabled but the default graph stack is empty, then entering an init_scope will simply install a fresh graph as the default one.
  3. The gradient tape is paused while the scope is active.

When eager execution is enabled, code inside an init_scope block runs with eager execution enabled even when tracing a tf.function. For example:

```python
tf.compat.v1.enable_eager_execution()

@tf.function
def func():
  # A function constructs TensorFlow graphs,
  # it does not execute eagerly.
  assert not tf.executing_eagerly()
  with tf.init_scope():
    # Initialization runs with eager execution enabled
    assert tf.executing_eagerly()
```

Raises:RuntimeError – if graph state is incompatible with this initialization.
tensorflow.is_tensor(x)

Checks whether x is a tensor or “tensor-like”.

If is_tensor(x) returns True, it is safe to assume that x is a tensor or can be converted to a tensor using ops.convert_to_tensor(x).

Usage example:

>>> tf.is_tensor(tf.constant([[1,2,3],[4,5,6],[7,8,9]]))
True
>>> tf.is_tensor("Hello World")
False
参数:x – A python object to check.
返回:True if x is a tensor or “tensor-like”, False if not.
tensorflow.less(x, y, name=None)

Returns the truth value of (x < y) element-wise.

NOTE: math.less supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Example:

```python
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less(x, y) ==> [False, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 7])
tf.math.less(x, y) ==> [False, True, True]
```

参数:
  • x – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.

tensorflow.less_equal(x, y, name=None)

Returns the truth value of (x <= y) element-wise.

NOTE: math.less_equal supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Example:

```python
x = tf.constant([5, 4, 6])
y = tf.constant([5])
tf.math.less_equal(x, y) ==> [True, True, False]

x = tf.constant([5, 4, 6])
y = tf.constant([5, 6, 6])
tf.math.less_equal(x, y) ==> [True, True, True]
```

参数:
  • x – A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, int64, bfloat16, uint16, half, uint32, uint64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.

tensorflow.linspace(start, stop, num, name=None)

Generates values in an interval.

A sequence of num evenly-spaced values are generated beginning at start. If num > 1, the values in the sequence increase by (stop - start) / (num - 1), so that the last one is exactly stop.

For example:

```python
tf.linspace(10.0, 12.0, 3, name="linspace")  # => [10.0  11.0  12.0]
```

参数:
  • start – A Tensor. Must be one of the following types: bfloat16, half, float32, float64. 0-D tensor. First entry in the range.
  • stop – A Tensor. Must have the same type as start. 0-D tensor. Last entry in the range.
  • num – A Tensor. Must be one of the following types: int32, int64. 0-D tensor. Number of values to generate.
  • name – A name for the operation (optional).
返回:

A Tensor. Has the same type as start.

tensorflow.load_library(library_location)

Loads a TensorFlow plugin.

“library_location” can be a path to a specific shared object, or a folder. If it is a folder, all shared objects that are named “libtfkernel*” will be loaded. When the library is loaded, kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process.

参数:

library_location – Path to the plugin or the folder of plugins. Relative or absolute filesystem path to a dynamic library file or folder.

返回:

None

Raises:
  • OSError – When the file to be loaded is not found.
  • RuntimeError – when unable to load the library.
tensorflow.load_op_library(library_filename)

Loads a TensorFlow plugin, containing custom ops and kernels.

Pass “library_filename” to a platform-specific mechanism for dynamically loading a library. The rules for determining the exact location of the library are platform-specific and are not documented here. When the library is loaded, ops and kernels registered in the library via the REGISTER_* macros are made available in the TensorFlow process. Note that ops with the same name as an existing op are rejected and not registered with the process.

参数:library_filename – Path to the plugin. Relative or absolute filesystem path to a dynamic library file.
返回:A python module containing the Python wrappers for Ops defined in the plugin.
Raises:RuntimeError – when unable to load the library or get the python wrappers.
tensorflow.logical_and(x, y, name=None)

Logical AND function.

The operation works for the following input types:

  • Two single elements of type bool
  • One tf.Tensor of type bool and one single bool, where the result will be calculated by applying logical AND with the single element to each element in the larger Tensor.
  • Two tf.Tensor objects of type bool of the same shape. In this case, the result will be the element-wise logical AND of the two input tensors.

Usage:

>>> a = tf.constant([True])
>>> b = tf.constant([False])
>>> tf.math.logical_and(a, b)
<tf.Tensor: shape=(1,), dtype=bool, numpy=array([False])>
>>> c = tf.constant([True])
>>> x = tf.constant([False, True, True, False])
>>> tf.math.logical_and(c, x)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True, False])>
>>> y = tf.constant([False, False, True, True])
>>> z = tf.constant([False, True, False, True])
>>> tf.math.logical_and(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False, False, False,  True])>
参数:
  • x – A tf.Tensor of type bool.
  • y – A tf.Tensor of type bool.
  • name – A name for the operation (optional).
返回:

A tf.Tensor of type bool with the same size as that of x or y.

tensorflow.logical_not(x, name=None)

Returns the truth value of NOT x element-wise.

Example:

>>> tf.math.logical_not(tf.constant([True, False]))
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False,  True])>
参数:
  • x – A Tensor of type bool.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.

tensorflow.logical_or(x, y, name=None)

Returns the truth value of x OR y element-wise.

NOTE: math.logical_or supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

参数:
  • x – A Tensor of type bool.
  • y – A Tensor of type bool.
  • name – A name for the operation (optional).
返回:

A Tensor of type bool.
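
Usage mirrors tf.math.logical_and above; for example:

>>> y = tf.constant([False, False, True, True])
>>> z = tf.constant([False, True, False, True])
>>> tf.math.logical_or(y, z)
<tf.Tensor: shape=(4,), dtype=bool, numpy=array([False,  True,  True,  True])>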

tensorflow.make_ndarray(tensor)

Create a numpy ndarray from a tensor.

Create a numpy ndarray with the same shape and data as the tensor.

For example:

```python
# Tensor a has shape (2, 3)
a = tf.constant([[1, 2, 3], [4, 5, 6]])
proto_tensor = tf.make_tensor_proto(a)  # convert tensor `a` to a proto tensor
tf.make_ndarray(proto_tensor)
# output: array([[1, 2, 3],
#                [4, 5, 6]], dtype=int32)
# output has shape (2, 3)
```

参数:tensor – A TensorProto.
返回:A numpy array with the tensor contents.
Raises:TypeError – if tensor has unsupported type.
tensorflow.make_tensor_proto(values, dtype=None, shape=None, verify_shape=False, allow_broadcast=False)

Create a TensorProto.

In TensorFlow 2.0, representing tensors as protos should no longer be a common workflow. That said, this utility function is still useful for generating TF Serving request protos:

```python
request = tensorflow_serving.apis.predict_pb2.PredictRequest()
request.model_spec.name = "my_model"
request.model_spec.signature_name = "serving_default"
request.inputs["images"].CopyFrom(tf.make_tensor_proto(X_new))
```

make_tensor_proto accepts “values” of a python scalar, a python list, a numpy ndarray, or a numpy scalar.

If “values” is a python scalar or a python list, make_tensor_proto first converts it to a numpy ndarray. If dtype is None, the conversion tries its best to infer the right numpy data type. Otherwise, the resulting numpy array has a compatible data type with the given dtype.

In either case above, the numpy ndarray (either the caller provided or the auto-converted) must have a type compatible with dtype.

make_tensor_proto then converts the numpy array to a tensor proto.

If “shape” is None, the resulting tensor proto represents the numpy array precisely.

Otherwise, “shape” specifies the tensor’s shape and the numpy array can not have more elements than what “shape” specifies.

参数:
  • values – Values to put in the TensorProto.
  • dtype – Optional tensor_pb2 DataType value.
  • shape – List of integers representing the dimensions of tensor.
  • verify_shape – Boolean that enables verification of a shape of values.
  • allow_broadcast – Boolean that enables allowing scalars and 1 length vector broadcasting. Cannot be true when verify_shape is true.
返回:

A TensorProto. Depending on the type, it may contain data in the “tensor_content” attribute, which is not directly useful to Python programs. To access the values you should convert the proto back to a numpy ndarray with tf.make_ndarray(proto).

If values is a TensorProto, it is immediately returned; dtype and shape are ignored.

Raises:
  • TypeError – if unsupported types are provided.
  • ValueError – if arguments have inappropriate values or if verify_shape is True and the shape of values is not equal to the shape given in the argument.
tensorflow.map_fn(fn, elems, dtype=None, parallel_iterations=None, back_prop=True, swap_memory=False, infer_shape=True, name=None)

map on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated, consider using tf.stop_gradient instead.

Instead of: results = tf.map_fn(fn, elems, back_prop=False)
Use: results = tf.nest.map_structure(tf.stop_gradient, tf.map_fn(fn, elems))

The simplest version of map_fn repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems. dtype is the data type of the return value of fn. Users must provide dtype if it is different from the data type of elems.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [values.shape[0]] + fn(values[0]).shape.

This method also allows multi-arity elems and output of fn. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The signature of fn may match the structure of elems. That is, if elems is (t1, [t2, t3, [t4, t5]]), then an appropriate signature for fn is: fn = lambda (t1, [t2, t3, [t4, t5]]):.

Furthermore, fn may emit a different structure than its input. For example, fn may look like: fn = lambda t1: return (t1 + 1, t1 - 1). In this case, the dtype parameter is not optional: dtype must be a type or (possibly nested) tuple of types matching the output of fn.

To apply a functional operation to the nonzero elements of a SparseTensor one of the following methods is recommended. First, if the function is expressible as TensorFlow ops, use

```python
result = SparseTensor(input.indices, fn(input.values), input.dense_shape)
```

If, however, the function is not expressible as a TensorFlow op, then use

```python
result = SparseTensor(
    input.indices, map_fn(fn, input.values), input.dense_shape)
```

instead.

When executing eagerly, map_fn does not execute in parallel even if parallel_iterations is set to a value > 1. You can still get the performance benefits of running a function in parallel by using the tf.function decorator,

```python
# Assume the function being used in map_fn is fn.
# To ensure map_fn calls fn in parallel, use the tf.function decorator.
@tf.function
def func(tensor):
  return tf.map_fn(fn, tensor)
```

Note that if you use the tf.function decorator, any non-TensorFlow Python code that you may have written in your function won’t get executed. See [tf.function](https://www.tensorflow.org/api_docs/python/tf/function) for more details. The recommendation would be to debug without tf.function but switch to it to get performance benefits of running map_fn in parallel.

参数:
  • fn – The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems. Its output must have the same structure as dtype if one is provided, otherwise it must have the same structure as elems.
  • elems – A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be applied to fn.
  • dtype – (optional) The output type(s) of fn. If fn returns a structure of Tensors differing from the structure of elems, then dtype is not optional and must have the same structure as the output of fn.
  • parallel_iterations – (optional) The number of iterations allowed to run in parallel. When graph building, the default value is 10. While executing eagerly, the default value is set to 1.
  • back_prop – (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
  • swap_memory – (optional) True enables GPU-CPU memory swapping.
  • infer_shape – (optional) False disables tests for consistent output shapes.
  • name – (optional) Name prefix for the returned tensors.
Returns:

A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, from first to last.

Raises:
  • TypeError – if fn is not callable or the structure of the output of fn and dtype do not match, or if elems is a SparseTensor.
  • ValueError – if the lengths of the output of fn and dtype do not match.

Examples

```python
elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]
```

```python
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]
```

```python
elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]
```

tensorflow.matmul(a, b, transpose_a=False, transpose_b=False, adjoint_a=False, adjoint_b=False, a_is_sparse=False, b_is_sparse=False, name=None)

Multiplies matrix a by matrix b, producing a * b.

The inputs must, following any transpositions, be tensors of rank >= 2 where the inner 2 dimensions specify valid matrix multiplication dimensions, and any further outer dimensions specify matching batch size.

Both matrices must be of the same type. The supported types are: float16, float32, float64, int32, complex64, complex128.

Either matrix can be transposed or adjointed (conjugated and transposed) on the fly by setting one of the corresponding flags to True. These are False by default.

If one or both of the matrices contain a lot of zeros, a more efficient multiplication algorithm can be used by setting the corresponding a_is_sparse or b_is_sparse flag to True. These are False by default. This optimization is only available for plain matrices (rank-2 tensors) with datatypes bfloat16 or float32.

A simple 2-D tensor matrix multiplication:

>>> a = tf.constant([1, 2, 3, 4, 5, 6], shape=[2, 3])
>>> a  # 2-D tensor
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>
>>> b = tf.constant([7, 8, 9, 10, 11, 12], shape=[3, 2])
>>> b  # 2-D tensor
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[ 7,  8],
       [ 9, 10],
       [11, 12]], dtype=int32)>
>>> c = tf.matmul(a, b)
>>> c  # `a` * `b`
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[ 58,  64],
       [139, 154]], dtype=int32)>

A batch matrix multiplication with batch shape [2]:

>>> a = tf.constant(np.arange(1, 13, dtype=np.int32), shape=[2, 2, 3])
>>> a  # 3-D tensor
<tf.Tensor: shape=(2, 2, 3), dtype=int32, numpy=
array([[[ 1,  2,  3],
        [ 4,  5,  6]],
       [[ 7,  8,  9],
        [10, 11, 12]]], dtype=int32)>
>>> b = tf.constant(np.arange(13, 25, dtype=np.int32), shape=[2, 3, 2])
>>> b  # 3-D tensor
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[13, 14],
        [15, 16],
        [17, 18]],
       [[19, 20],
        [21, 22],
        [23, 24]]], dtype=int32)>
>>> c = tf.matmul(a, b)
>>> c  # `a` * `b`
<tf.Tensor: shape=(2, 2, 2), dtype=int32, numpy=
array([[[ 94, 100],
        [229, 244]],
       [[508, 532],
        [697, 730]]], dtype=int32)>

Since Python >= 3.5 the @ operator is supported (see [PEP 465](https://www.python.org/dev/peps/pep-0465/)). In TensorFlow, it simply calls the tf.matmul() function, so the following lines are equivalent:

>>> d = a @ b @ [[10], [11]]
>>> d = tf.matmul(tf.matmul(a, b), [[10], [11]])
Parameters:
  • a – tf.Tensor of type float16, float32, float64, int32, complex64, complex128 and rank > 1.
  • b – tf.Tensor with same type and rank as a.
  • transpose_a – If True, a is transposed before multiplication.
  • transpose_b – If True, b is transposed before multiplication.
  • adjoint_a – If True, a is conjugated and transposed before multiplication.
  • adjoint_b – If True, b is conjugated and transposed before multiplication.
  • a_is_sparse – If True, a is treated as a sparse matrix. Notice, this does not support `tf.sparse.SparseTensor`, it just makes optimizations that assume most values in a are zero. See tf.sparse.sparse_dense_matmul for some support for tf.SparseTensor multiplication.
  • b_is_sparse – If True, b is treated as a sparse matrix. Notice, this does not support `tf.sparse.SparseTensor`, it just makes optimizations that assume most values in b are zero. See tf.sparse.sparse_dense_matmul for some support for tf.SparseTensor multiplication.
  • name – Name for the operation (optional).
Returns:

A tf.Tensor of the same type as a and b where each inner-most matrix is the product of the corresponding matrices in a and b, e.g. if all transpose or adjoint attributes are False:

output[…, i, j] = sum_k (a[…, i, k] * b[…, k, j]), for all indices i, j.

Note: This is matrix product, not element-wise product.

Raises:

ValueError – If transpose_a and adjoint_a, or transpose_b and adjoint_b are both set to True.

tensorflow.matrix_square_root(input, name=None)

Computes the matrix square root of one or more square matrices:

matmul(sqrtm(A), sqrtm(A)) = A

The input matrix should be invertible. If the input matrix is real, it should have no eigenvalues which are real and negative (pairs of complex conjugate eigenvalues are allowed).

The matrix square root is computed by first reducing the matrix to quasi-triangular form with the real Schur decomposition. The square root of the quasi-triangular matrix is then computed directly. Details of the algorithm can be found in: Nicholas J. Higham, “Computing real square roots of a real matrix”, Linear Algebra Appl., 1987.

The input is a tensor of shape […, M, M] whose inner-most 2 dimensions form square matrices. The output is a tensor of the same shape as the input containing the matrix square root for all input submatrices […, :, :].

Parameters:
  • input – A Tensor. Must be one of the following types: float64, float32, half, complex64, complex128. Shape is […, M, M].
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.
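For illustration, a minimal sketch (assuming the op is exposed under the tensorflow.matrix_square_root name documented here; recent releases expose the same op as tf.linalg.sqrtm):

```python
import tensorflow as tf

# A diagonal positive-definite matrix whose square root is easy to check.
a = tf.constant([[4., 0.], [0., 9.]])
s = tf.matrix_square_root(a)  # [[2., 0.], [0., 3.]]
# Verify: matmul(sqrtm(A), sqrtm(A)) recovers A.
print(tf.matmul(s, s))
```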

tensorflow.maximum(x, y, name=None)

Returns the max of x and y (i.e. x > y ? x : y) element-wise.

Example:

>>> x = tf.constant([0., 0., 0., 0.])
>>> y = tf.constant([-2., 0., 2., 5.])
>>> tf.math.maximum(x, y)
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([0., 0., 2., 5.], dtype=float32)>

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.meshgrid(*args, **kwargs)

Broadcasts parameters for evaluation on an N-D grid.

Given N one-dimensional coordinate arrays *args, returns a list outputs of N-D coordinate arrays for evaluating expressions on an N-D grid.

Notes:

meshgrid supports cartesian (‘xy’) and matrix (‘ij’) indexing conventions. When the indexing argument is set to ‘xy’ (the default), the broadcasting instructions for the first two dimensions are swapped.

Examples:

Calling X, Y = meshgrid(x, y) with the tensors

```python
x = [1, 2, 3]
y = [4, 5, 6]
X, Y = tf.meshgrid(x, y)
# X = [[1, 2, 3],
#      [1, 2, 3],
#      [1, 2, 3]]
# Y = [[4, 4, 4],
#      [5, 5, 5],
#      [6, 6, 6]]
```

Parameters:
  • *args – `Tensor`s with rank 1.
  • **kwargs –
    • indexing: Either ‘xy’ or ‘ij’ (optional, default: ‘xy’).
    • name: A name for the operation (optional).
Returns:

A list of N `Tensor`s with rank N.

Return type:

outputs

Raises:
  • TypeError – When no keyword arguments (kwargs) are passed.
  • ValueError – When indexing keyword argument is not one of xy or ij.
tensorflow.minimum(x, y, name=None)

Returns the min of x and y (i.e. x < y ? x : y) element-wise.

Example:

>>> x = tf.constant([0., 0., 0., 0.])
>>> y = tf.constant([-5., -2., 0., 3.])
>>> tf.math.minimum(x, y)
<tf.Tensor: shape=(4,), dtype=float32, numpy=array([-5., -2., 0., 0.], dtype=float32)>

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.multiply(x, y, name=None)

Returns an element-wise x * y.

For example:

>>> x = tf.constant(([1, 2, 3, 4]))
>>> tf.math.multiply(x, x)
<tf.Tensor: shape=(4,), dtype=..., numpy=array([ 1,  4,  9, 16], dtype=int32)>

Since tf.math.multiply will convert its arguments to `Tensor`s, you can also pass in non-`Tensor` arguments:

>>> tf.math.multiply(7,6)
<tf.Tensor: shape=(), dtype=int32, numpy=42>

If x.shape is not the same as y.shape, they will be broadcast to a compatible shape. (More about broadcasting [here](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).)

For example:

>>> x = tf.ones([1, 2]);
>>> y = tf.ones([2, 1]);
>>> x * y  # Taking advantage of operator overriding
<tf.Tensor: shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
     [1., 1.]], dtype=float32)>
Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).

Returns:

A Tensor. Has the same type as x.

Raises:
  • InvalidArgumentError – When x and y have incompatible shapes or types.
tensorflow.name_scope

Alias of tensorflow.python.framework.ops.name_scope_v2

tensorflow.negative(x, name=None)

Computes numerical negative value element-wise.

I.e., \(y = -x\).

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.negative(x.values, …), x.dense_shape)

tensorflow.no_gradient(op_type)

Specifies that ops of type op_type are not differentiable.

This function should not be used for operations that have a well-defined gradient that is not yet implemented.

This function is only used when defining a new op type. It may be used for ops such as tf.size() that are not differentiable. For example:

```python
tf.no_gradient("Size")
```

The gradient computed for ‘op_type’ will then propagate zeros.

For ops that have a well-defined gradient but are not yet implemented, no declaration should be made, and an error must be thrown if an attempt to request its gradient is made.

Parameters: op_type – The string type of an operation. This corresponds to the OpDef.name field for the proto that defines the operation.
Raises:TypeError – If op_type is not a string.
tensorflow.no_op(name=None)

Does nothing. Only useful as a placeholder for control edges.

Parameters: name – A name for the operation (optional).
Returns: The created Operation.
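A minimal sketch of the usual pattern, with no_op serving as a named control-dependency anchor in a manually built graph (the surrounding ops are hypothetical):

```python
import tensorflow as tf

g = tf.Graph()
with g.as_default():
    barrier = tf.no_op(name="barrier")  # does nothing by itself
    with tf.control_dependencies([barrier]):
        # This op carries a control edge from `barrier` and will only
        # run after it.
        y = tf.constant(1.0) + 1.0
```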
tensorflow.nondifferentiable_batch_function(num_batch_threads, max_batch_size, batch_timeout_micros, allowed_batch_sizes=None, max_enqueued_batches=10, autograph=True)

Batches the computation done by the decorated function.

So, for example, in the following code

```python
@batch_function(1, 2, 3)
def layer(a):
    return tf.matmul(a, a)

b = layer(w)
```

if more than one session.run call is simultaneously trying to compute b, the values of w will be gathered, non-deterministically concatenated along the first axis, and only one thread will run the computation. See the documentation of the Batch op for more details.

Assumes that all arguments of the decorated function are Tensors which will be batched along their first dimension.

SparseTensor is not supported. The return value of the decorated function must be a Tensor or a list/tuple of Tensors.

Parameters:
  • num_batch_threads – Number of scheduling threads for processing batches of work. Determines the number of batches processed in parallel.
  • max_batch_size – Batch sizes will never be bigger than this.
  • batch_timeout_micros – Maximum number of microseconds to wait before outputting an incomplete batch.
  • allowed_batch_sizes – Optional list of allowed batch sizes. If left empty, does nothing. Otherwise, supplies a list of batch sizes, causing the op to pad batches up to one of those sizes. The entries must increase monotonically, and the final entry must equal max_batch_size.
  • max_enqueued_batches – The maximum depth of the batch queue. Defaults to 10.
  • autograph – Whether to use autograph to compile python and eager style code for efficient graph-mode execution.
Returns:

The decorated function will return the unbatched computation output Tensors.

tensorflow.norm(tensor, ord='euclidean', axis=None, keepdims=None, name=None)

Computes the norm of vectors, matrices, and tensors.

This function can compute several different vector norms (the 1-norm, the Euclidean or 2-norm, the inf-norm, and in general the p-norm for p > 0) and matrix norms (Frobenius, 1-norm, 2-norm and inf-norm).

Parameters:
  • tensor – Tensor of types float32, float64, complex64, complex128
  • ord – Order of the norm. Supported values are ‘fro’, ‘euclidean’, 1, 2, np.inf and any positive real number yielding the corresponding p-norm. Default is ‘euclidean’, which is equivalent to the Frobenius norm if tensor is a matrix and to the 2-norm for vectors. Some restrictions apply:
    1. The Frobenius norm ‘fro’ is not defined for vectors.
    2. If axis is a 2-tuple (matrix norm), only ‘euclidean’, ‘fro’, 1, 2, np.inf are supported.
    See the description of axis on how to compute norms for a batch of vectors or matrices stored in a tensor.

  • axis – If axis is None (the default), the input is considered a vector and a single vector norm is computed over the entire set of values in the tensor, i.e. norm(tensor, ord=ord) is equivalent to norm(reshape(tensor, [-1]), ord=ord). If axis is a Python integer, the input is considered a batch of vectors, and axis determines the axis in tensor over which to compute vector norms. If axis is a 2-tuple of Python integers it is considered a batch of matrices and axis determines the axes in tensor over which to compute a matrix norm. Negative indices are supported. Example: If you are passing a tensor that can be either a matrix or a batch of matrices at runtime, pass axis=[-2,-1] instead of axis=None to make sure that matrix norms are computed.
  • keepdims – If True, the axis indicated in axis are kept with size 1. Otherwise, the dimensions in axis are removed from the output shape.
  • name – The name of the op.
Returns:

A Tensor of the same type as tensor, containing the vector or matrix norms. If keepdims is True then the rank of output is equal to the rank of tensor. Otherwise, if axis is None the output is a scalar; if axis is an integer, the rank of output is one less than the rank of tensor; if axis is a 2-tuple, the rank of output is two less than the rank of tensor.

Return type:

output

Raises:

ValueError – If ord or axis is invalid.

@compatibility(numpy) Mostly equivalent to numpy.linalg.norm. Not supported: ord <= 0, 2-norm for matrices, nuclear norm. Other differences:

  1. If axis is None, treats the flattened tensor as a vector regardless of rank.
  2. Explicitly supports ‘euclidean’ norm as the default, including for higher order tensors.

@end_compatibility
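For example, a quick sketch of the vector and matrix cases (the commented values follow from the definitions above):

```python
import tensorflow as tf

x = tf.constant([[3., 4.], [6., 8.]])
tf.norm(x)                 # 2-norm over all entries: sqrt(125)
tf.norm(x, ord=1, axis=1)  # 1-norm of each row: [7., 14.]
tf.norm(x, axis=[-2, -1])  # Frobenius norm of the matrix: sqrt(125)
```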

tensorflow.not_equal(x, y, name=None)

Returns the truth value of (x != y) element-wise.

Performs a [broadcast]( https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html) with the arguments and then an element-wise inequality comparison, returning a Tensor of boolean values.

For example:

>>> x = tf.constant([2, 4])
>>> y = tf.constant(2)
>>> tf.math.not_equal(x, y)
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False,  True])>
>>> x = tf.constant([2, 4])
>>> y = tf.constant([2, 4])
>>> tf.math.not_equal(x, y)
<tf.Tensor: shape=(2,), dtype=bool, numpy=array([False,  False])>
Parameters:
  • x – A tf.Tensor or tf.SparseTensor or tf.IndexedSlices.
  • y – A tf.Tensor or tf.SparseTensor or tf.IndexedSlices.
  • name – A name for the operation (optional).
Returns:

A tf.Tensor of type bool with the same size as that of x or y.

Raises:

tf.errors.InvalidArgumentError – If shapes of arguments are incompatible

tensorflow.numpy_function(func, inp, Tout, name=None)

Wraps a python function and uses it as a TensorFlow op.

Given a python function func wrap this function as an operation in a TensorFlow function. func must take numpy arrays as its arguments and return numpy arrays as its outputs.

The following example creates a TensorFlow graph with np.sinh() as an operation in the graph:

>>> def my_numpy_func(x):
...   # x will be a numpy array with the contents of the input to the
...   # tf.function
...   return np.sinh(x)
>>> @tf.function(input_signature=[tf.TensorSpec(None, tf.float32)])
... def tf_function(input):
...   y = tf.numpy_function(my_numpy_func, [input], tf.float32)
...   return y * y
>>> tf_function(tf.constant(1.))
<tf.Tensor: shape=(), dtype=float32, numpy=1.3810978>

Comparison to tf.py_function: tf.py_function and tf.numpy_function are very similar, except that tf.numpy_function takes numpy arrays, and not `tf.Tensor`s. If you want the function to contain `tf.Tensor`s, and have any TensorFlow operations executed in the function be differentiable, please use tf.py_function.

Note: The tf.numpy_function operation has the following known limitations:

  • The body of the function (i.e. func) will not be serialized in a tf.SavedModel. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment.
  • The operation must run in the same address space as the Python program that calls tf.numpy_function(). If you are using distributed TensorFlow, you must run a tf.distribute.Server in the same process as the program that calls tf.numpy_function, and you must pin the created operation to a device in that server (e.g. using with tf.device():).
  • Since the function takes numpy arrays, you cannot take gradients through a numpy_function. If you require something that is differentiable, please consider using tf.py_function.
  • The resulting function is assumed stateful and will never be optimized.
Parameters:
  • func – A Python function, which accepts numpy.ndarray objects as arguments and returns a list of numpy.ndarray objects (or a single numpy.ndarray). This function must accept as many arguments as there are tensors in inp, and these argument types will match the corresponding tf.Tensor objects in inp. The returned `numpy.ndarray`s must match the number and types defined by Tout. Important note: the input and output `numpy.ndarray`s of func are not guaranteed to be copies; in some cases their underlying memory will be shared with the corresponding TensorFlow tensors. In-place modification, or storing func inputs or return values in Python data structures without an explicit (np.)copy, can have non-deterministic consequences.
  • inp – A list of tf.Tensor objects.
  • Tout – A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns.
  • name – (Optional) A name for the operation.
Returns:

Single or list of tf.Tensor which func computes.

tensorflow.one_hot(indices, depth, on_value=None, off_value=None, axis=None, dtype=None, name=None)

Returns a one-hot tensor.

The locations represented by indices in indices take value on_value, while all other locations take value off_value.

on_value and off_value must have matching data types. If dtype is also provided, they must be the same data type as specified by dtype.

If on_value is not provided, it will default to the value 1 with type dtype.

If off_value is not provided, it will default to the value 0 with type dtype.

If the input indices is rank N, the output will have rank N+1. The new axis is created at dimension axis (default: the new axis is appended at the end).

If indices is a scalar, the output shape will be a vector of length depth.

If indices is a vector of length features, the output shape will be:

```
features x depth if axis == -1
depth x features if axis == 0
```

If indices is a matrix (batch) with shape [batch, features], the output shape will be:

```
batch x features x depth if axis == -1
batch x depth x features if axis == 1
depth x batch x features if axis == 0
```

If indices is a RaggedTensor, the ‘axis’ argument must be positive and refer to a non-ragged axis. The output will be equivalent to applying ‘one_hot’ on the values of the RaggedTensor, and creating a new RaggedTensor from the result.

If dtype is not provided, it will attempt to assume the data type of on_value or off_value, if one or both are passed in. If none of on_value, off_value, or dtype are provided, dtype will default to the value tf.float32.

Note: If a non-numeric data type output is desired (tf.string, tf.bool, etc.), both on_value and off_value _must_ be provided to one_hot.

For example:

```python
indices = [0, 1, 2]
depth = 3
tf.one_hot(indices, depth)  # output: [3 x 3]
# [[1., 0., 0.],
#  [0., 1., 0.],
#  [0., 0., 1.]]

indices = [0, 2, -1, 1]
depth = 3
tf.one_hot(indices, depth,
           on_value=5.0, off_value=0.0,
           axis=-1)  # output: [4 x 3]
# [[5.0, 0.0, 0.0],  # one_hot(0)
#  [0.0, 0.0, 5.0],  # one_hot(2)
#  [0.0, 0.0, 0.0],  # one_hot(-1)
#  [0.0, 5.0, 0.0]]  # one_hot(1)

indices = [[0, 2], [1, -1]]
depth = 3
tf.one_hot(indices, depth,
           on_value=1.0, off_value=0.0,
           axis=-1)  # output: [2 x 2 x 3]
# [[[1.0, 0.0, 0.0],   # one_hot(0)
#   [0.0, 0.0, 1.0]],  # one_hot(2)
#  [[0.0, 1.0, 0.0],   # one_hot(1)
#   [0.0, 0.0, 0.0]]]  # one_hot(-1)

indices = tf.ragged.constant([[0, 1], [2]])
depth = 3
tf.one_hot(indices, depth)  # output: [2 x None x 3]
# [[[1., 0., 0.],
#   [0., 1., 0.]],
#  [[0., 0., 1.]]]
```

Parameters:
  • indices – A Tensor of indices.
  • depth – A scalar defining the depth of the one hot dimension.
  • on_value – A scalar defining the value to fill in output when indices[j] = i. (default: 1)
  • off_value – A scalar defining the value to fill in output when indices[j] != i. (default: 0)
  • axis – The axis to fill (default: -1, a new inner-most axis).
  • dtype – The data type of the output tensor.
  • name – A name for the operation (optional).
Returns:

The one-hot tensor.

Return type:

output

Raises:
  • TypeError – If the dtype of either on_value or off_value doesn't match dtype
  • TypeError – If the dtypes of on_value and off_value don't match one another
tensorflow.ones(shape, dtype=tf.float32, name=None)

Creates a tensor with all elements set to one (1).

See also tf.ones_like.

This operation returns a tensor of type dtype with shape shape and all elements set to one.

>>> tf.ones([3, 4], tf.int32)
<tf.Tensor: shape=(3, 4), dtype=int32, numpy=
array([[1, 1, 1, 1],
       [1, 1, 1, 1],
       [1, 1, 1, 1]], dtype=int32)>
Parameters:
  • shape – A list of integers, a tuple of integers, or a 1-D Tensor of type int32.
  • dtype – Optional DType of an element in the resulting Tensor. Default is tf.float32.
  • name – Optional string. A name for the operation.
Returns:

A Tensor with all elements set to one (1).

tensorflow.ones_initializer

Alias of tensorflow.python.ops.init_ops_v2.Ones

tensorflow.ones_like(input, dtype=None, name=None)

Creates a tensor of all ones that has the same shape as the input.

See also tf.ones.

Given a single tensor (tensor), this operation returns a tensor of the same type and shape as tensor with all elements set to 1. Optionally, you can use dtype to specify a new type for the returned tensor.

For example:

>>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
>>> tf.ones_like(tensor)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
  array([[1, 1, 1],
         [1, 1, 1]], dtype=int32)>
Parameters:
  • input – A Tensor.
  • dtype – A type for the returned Tensor. Must be float16, float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128, bool or string.
  • name – A name for the operation (optional).
Returns:

A Tensor with all elements set to one.

tensorflow.pad(tensor, paddings, mode='CONSTANT', constant_values=0, name=None)

Pads a tensor.

This operation pads a tensor according to the paddings you specify. paddings is an integer tensor with shape [n, 2], where n is the rank of tensor. For each dimension D of input, paddings[D, 0] indicates how many values to add before the contents of tensor in that dimension, and paddings[D, 1] indicates how many values to add after the contents of tensor in that dimension. If mode is “REFLECT” then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D) - 1. If mode is “SYMMETRIC” then both paddings[D, 0] and paddings[D, 1] must be no greater than tensor.dim_size(D).

The padded size of each dimension D of the output is:

paddings[D, 0] + tensor.dim_size(D) + paddings[D, 1]

For example:

```python
t = tf.constant([[1, 2, 3], [4, 5, 6]])
paddings = tf.constant([[1, 1], [2, 2]])
# 'constant_values' is 0.
# rank of 't' is 2.
tf.pad(t, paddings, "CONSTANT")   # [[0, 0, 0, 0, 0, 0, 0],
                                  #  [0, 0, 1, 2, 3, 0, 0],
                                  #  [0, 0, 4, 5, 6, 0, 0],
                                  #  [0, 0, 0, 0, 0, 0, 0]]

tf.pad(t, paddings, "REFLECT")    # [[6, 5, 4, 5, 6, 5, 4],
                                  #  [3, 2, 1, 2, 3, 2, 1],
                                  #  [6, 5, 4, 5, 6, 5, 4],
                                  #  [3, 2, 1, 2, 3, 2, 1]]

tf.pad(t, paddings, "SYMMETRIC")  # [[2, 1, 1, 2, 3, 3, 2],
                                  #  [2, 1, 1, 2, 3, 3, 2],
                                  #  [5, 4, 4, 5, 6, 6, 5],
                                  #  [5, 4, 4, 5, 6, 6, 5]]
```

Parameters:
  • tensor – A Tensor.
  • paddings – A Tensor of type int32.
  • mode – One of “CONSTANT”, “REFLECT”, or “SYMMETRIC” (case-insensitive)
  • constant_values – In “CONSTANT” mode, the scalar pad value to use. Must be same type as tensor.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as tensor.

Raises:

ValueError – When mode is not one of “CONSTANT”, “REFLECT”, or “SYMMETRIC”.

tensorflow.parallel_stack(values, name='parallel_stack')

Stacks a list of rank-R tensors into one rank-(R+1) tensor in parallel.

Requires that the shape of inputs be known at graph construction time.

Packs the list of tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the first dimension. Given a list of length N of tensors of shape (A, B, C); the output tensor will have the shape (N, A, B, C).

For example:

```python
x = tf.constant([1, 4])
y = tf.constant([2, 5])
z = tf.constant([3, 6])
tf.parallel_stack([x, y, z])  # [[1, 4], [2, 5], [3, 6]]
```

The difference between stack and parallel_stack is that stack requires all the inputs be computed before the operation will begin but doesn’t require that the input shapes be known during graph construction.

parallel_stack will copy pieces of the input into the output as they become available; in some situations this can provide a performance benefit.

Unlike stack, parallel_stack does NOT support backpropagation.

This is the opposite of unstack. The numpy equivalent is

tf.parallel_stack([x, y, z]) = np.asarray([x, y, z])
Parameters:
  • values – A list of Tensor objects with the same shape and type.
  • name – A name for this operation (optional).
Returns:

A stacked Tensor with the same type as values.

Return type:

output

tensorflow.pow(x, y, name=None)

Computes the power of one value to another.

Given a tensor x and a tensor y, this operation computes \(x^y\) for corresponding elements in x and y. For example:

```python
x = tf.constant([[2, 2], [3, 3]])
y = tf.constant([[8, 16], [2, 3]])
tf.pow(x, y)  # [[256, 65536], [9, 27]]
```

Parameters:
  • x – A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • y – A Tensor of type float16, float32, float64, int32, int64, complex64, or complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor.

tensorflow.print(*inputs, **kwargs)

Print the specified inputs.

A TensorFlow operator that prints the specified inputs to a desired output stream or logging level. The inputs may be dense or sparse Tensors, primitive python objects, data structures that contain tensors, and printable Python objects. Printed tensors will recursively show the first and last elements of each dimension to summarize.

@compatibility(python2) In python 2.7, make sure to import the following: from __future__ import print_function @end_compatibility

Example

Single-input usage:

```python
tensor = tf.range(10)
tf.print(tensor, output_stream=sys.stderr)
```

(This prints “[0 1 2 … 7 8 9]” to sys.stderr)

Multi-input usage:

```python
tensor = tf.range(10)
tf.print("tensors:", tensor, {2: tensor * 2}, output_stream=sys.stdout)
```

(This prints “tensors: [0 1 2 … 7 8 9] {2: [0 2 4 … 14 16 18]}” to sys.stdout)

Changing the input separator:

```python
tensor_a = tf.range(2)
tensor_b = tensor_a * 2
tf.print(tensor_a, tensor_b, output_stream=sys.stderr, sep=',')
```

(This prints “[0 1],[0 2]” to sys.stderr)

Usage in a tf.function:

```python
@tf.function
def f():
    tensor = tf.range(10)
    tf.print(tensor, output_stream=sys.stderr)
    return tensor

range_tensor = f()
```

(This prints “[0 1 2 … 7 8 9]” to sys.stderr)

@compatibility(TF 1.x Graphs and Sessions) In graphs manually created outside of tf.function, this method returns the created TF operator that prints the data. To make sure the operator runs, users need to pass the produced op to tf.compat.v1.Session’s run method, or to use the op as a control dependency for executed ops by specifying with tf.compat.v1.control_dependencies([print_op]). @end_compatibility

Compatibility usage in TF 1.x graphs:

```python
sess = tf.compat.v1.Session()
with sess.as_default():
    tensor = tf.range(10)
    print_op = tf.print("tensors:", tensor, {2: tensor * 2},
                        output_stream=sys.stdout)
    with tf.control_dependencies([print_op]):
        tripled_tensor = tensor * 3
    sess.run(tripled_tensor)
```

(This prints “tensors: [0 1 2 … 7 8 9] {2: [0 2 4 … 14 16 18]}” to sys.stdout)

Note: In Jupyter notebooks and colabs, tf.print prints to the notebook cell outputs. It will not write to the notebook kernel's console logs.
Parameters:
  • *inputs – Positional arguments that are the inputs to print. Inputs in the printed output will be separated by spaces. Inputs may be python primitives, tensors, data structures such as dicts and lists that may contain tensors (with the data structures possibly nested in arbitrary ways), and printable python objects.
  • output_stream – The output stream, logging level, or file to print to. Defaults to sys.stderr, but sys.stdout, tf.compat.v1.logging.info, tf.compat.v1.logging.warning, tf.compat.v1.logging.error, absl.logging.info, absl.logging.warning and absl.logging.error are also supported. To print to a file, pass a string started with “file://” followed by the file path, e.g., “file:///tmp/foo.out”.
  • summarize – The first and last summarize elements within each dimension are recursively printed per Tensor. If None, then the first 3 and last 3 elements of each dimension are printed for each tensor. If set to -1, it will print all elements of every tensor.
  • sep – The string to use to separate the inputs. Defaults to ” “.
  • end – End character that is appended at the end the printed string. Defaults to the newline character.
  • name – A name for the operation (optional).
Returns:

None when executing eagerly. During graph tracing this returns a TF operator that prints the specified inputs in the specified output stream or logging level. This operator will be automatically executed except inside of tf.compat.v1 graphs and sessions.

Raises:

ValueError – If an unsupported output stream is specified.

tensorflow.py_function(func, inp, Tout, name=None)

Wraps a python function into a TensorFlow op that executes it eagerly.

This function allows expressing computations in a TensorFlow graph as Python functions. In particular, it wraps a Python function func in a once-differentiable TensorFlow operation that executes it with eager execution enabled. As a consequence, tf.py_function makes it possible to express control flow using Python constructs (if, while, for, etc.), instead of TensorFlow control flow constructs (tf.cond, tf.while_loop). For example, you might use tf.py_function to implement the log huber function:

```python
def log_huber(x, m):
    if tf.abs(x) <= m:
        return x**2
    else:
        return m**2 * (1 - 2 * tf.math.log(m) + tf.math.log(x**2))

x = tf.compat.v1.placeholder(tf.float32)
m = tf.compat.v1.placeholder(tf.float32)

y = tf.py_function(func=log_huber, inp=[x, m], Tout=tf.float32)
dy_dx = tf.gradients(y, x)[0]

with tf.compat.v1.Session() as sess:
    # The session executes log_huber eagerly. Given the feed values below,
    # it will take the first branch, so y evaluates to 1.0 and
    # dy_dx evaluates to 2.0.
    y, dy_dx = sess.run([y, dy_dx], feed_dict={x: 1.0, m: 2.0})
```

You can also use tf.py_function to debug your models at runtime using Python tools, i.e., you can isolate portions of your code that you want to debug, wrap them in Python functions and insert pdb tracepoints or print statements as desired, and wrap those functions in tf.py_function.

For more information on eager execution, see the [Eager guide](https://tensorflow.org/guide/eager).

tf.py_function is similar in spirit to tf.compat.v1.py_func, but unlike the latter, the former lets you use TensorFlow operations in the wrapped Python function. In particular, while tf.compat.v1.py_func only runs on CPUs and wraps functions that take NumPy arrays as inputs and return NumPy arrays as outputs, tf.py_function can be placed on GPUs and wraps functions that take Tensors as inputs, execute TensorFlow operations in their bodies, and return Tensors as outputs.

Like tf.compat.v1.py_func, tf.py_function has the following limitations with respect to serialization and distribution:

  • The body of the function (i.e. func) will not be serialized in a GraphDef. Therefore, you should not use this function if you need to serialize your model and restore it in a different environment.
  • The operation must run in the same address space as the Python program that calls tf.py_function(). If you are using distributed TensorFlow, you must run a tf.distribute.Server in the same process as the program that calls tf.py_function() and you must pin the created operation to a device in that server (e.g. using with tf.device():).
Parameters:
  • func – A Python function which accepts a list of Tensor objects having element types that match the corresponding tf.Tensor objects in inp and returns a list of Tensor objects (or a single Tensor, or None) having element types that match the corresponding values in Tout.
  • inp – A list of Tensor objects.
  • Tout – A list or tuple of tensorflow data types or a single tensorflow data type if there is only one, indicating what func returns; an empty list if no value is returned (i.e., if the return value is None).
  • name – A name for the operation (optional).
Returns:

A list of Tensor or a single Tensor which func computes; an empty list if func returns None.

tensorflow.random_normal_initializer

Alias of tensorflow.python.ops.init_ops_v2.RandomNormal

tensorflow.random_uniform_initializer

Alias of tensorflow.python.ops.init_ops_v2.RandomUniform

tensorflow.range(start, limit=None, delta=1, dtype=None, name='range')

Creates a sequence of numbers.

Creates a sequence of numbers that begins at start and extends by increments of delta up to but not including limit.

The dtype of the resulting tensor is inferred from the inputs unless it is provided explicitly.

Like the Python builtin range, start defaults to 0, so that range(n) = range(0, n).

For example:

>>> start = 3
>>> limit = 18
>>> delta = 3
>>> tf.range(start, limit, delta)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([ 3,  6,  9, 12, 15], dtype=int32)>
>>> start = 3
>>> limit = 1
>>> delta = -0.5
>>> tf.range(start, limit, delta)
<tf.Tensor: shape=(4,), dtype=float32,
numpy=array([3. , 2.5, 2. , 1.5], dtype=float32)>
>>> limit = 5
>>> tf.range(limit)
<tf.Tensor: shape=(5,), dtype=int32,
numpy=array([0, 1, 2, 3, 4], dtype=int32)>
Parameters:
  • start – A 0-D Tensor (scalar). Acts as first entry in the range if limit is not None; otherwise, acts as range limit and first entry defaults to 0.
  • limit – A 0-D Tensor (scalar). Upper limit of sequence, exclusive. If None, defaults to the value of start while the first entry of the range defaults to 0.
  • delta – A 0-D Tensor (scalar). Number that increments start. Defaults to 1.
  • dtype – The type of the elements of the resulting tensor.
  • name – A name for the operation. Defaults to “range”.
Returns:

A 1-D Tensor of type dtype.

@compatibility(numpy) Equivalent to np.arange @end_compatibility

tensorflow.rank(input, name=None)

Returns the rank of a tensor.

Returns a 0-D int32 Tensor representing the rank of input.

For example:

```python
# shape of tensor 't' is [2, 2, 3]
t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
tf.rank(t)  # 3
```

Note: The rank of a tensor is not the same as the rank of a matrix. The rank of a tensor is the number of indices required to uniquely select each element of the tensor. Rank is also known as “order”, “degree”, or “ndims.”

Parameters:
  • input – A Tensor or SparseTensor.
  • name – A name for the operation (optional).
Returns:

A Tensor of type int32.

@compatibility(numpy) Equivalent to np.ndim @end_compatibility

tensorflow.realdiv(x, y, name=None)

Returns x / y element-wise for real types.

If x and y are reals, this will return the floating-point division.

NOTE: Div supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.
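A minimal sketch:

```python
import tensorflow as tf

x = tf.constant([7., 3., 1.])
y = tf.constant([2., 2., 4.])
tf.realdiv(x, y)  # [3.5, 1.5, 0.25]
```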

tensorflow.recompute_grad(f)

An eager-compatible version of recompute_grad.

For f(*args, **kwargs), this supports gradients with respect to args or kwargs, but kwargs are currently only supported in eager-mode. Note that for keras layer and model objects, this is handled automatically.

Warning: If f was originally a tf.keras Model or Layer object, g will not be able to access the member variables of that object, because g returns through the wrapper function inner. When recomputing gradients through objects that inherit from keras, we suggest keeping a reference to the underlying object around for the purpose of accessing these variables.

Parameters: f – function f(*x) that returns a Tensor or sequence of Tensor outputs.
Returns: A function g that wraps f, but which recomputes f on the backward pass of a gradient call.
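A minimal sketch of the intended trade-off: wrapping a block so its intermediate activations are recomputed on the backward pass rather than stored (the block below is hypothetical):

```python
import tensorflow as tf

@tf.recompute_grad
def block(x):
    # Hypothetical expensive sub-network; its activations are recomputed
    # during the gradient pass instead of being kept in memory.
    return tf.nn.relu(tf.matmul(x, tf.transpose(x)))

x = tf.Variable(tf.ones([4, 8]))
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(block(x))
grads = tape.gradient(loss, x)
```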
tensorflow.reduce_all(input_tensor, axis=None, keepdims=False, name=None)

Computes the “logical and” of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
x = tf.constant([[True,  True], [False, False]])
tf.reduce_all(x)     # False
tf.reduce_all(x, 0)  # [False, False]
tf.reduce_all(x, 1)  # [True, False]
```

Parameters:
  • input_tensor – The boolean tensor to reduce.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.all @end_compatibility

tensorflow.reduce_any(input_tensor, axis=None, keepdims=False, name=None)

Computes the “logical or” of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
x = tf.constant([[True,  True], [False, False]])
tf.reduce_any(x)     # True
tf.reduce_any(x, 0)  # [True, True]
tf.reduce_any(x, 1)  # [True, False]
```

Parameters:
  • input_tensor – The boolean tensor to reduce.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.any @end_compatibility

tensorflow.reduce_logsumexp(input_tensor, axis=None, keepdims=False, name=None)

Computes log(sum(exp(elements across dimensions of a tensor))).

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis has no entries, all dimensions are reduced, and a tensor with a single element is returned.

This function is more numerically stable than log(sum(exp(input))). It avoids overflows caused by taking the exp of large inputs and underflows caused by taking the log of small inputs.

For example:

```python
x = tf.constant([[0., 0., 0.], [0., 0., 0.]])
tf.reduce_logsumexp(x)                    # log(6)
tf.reduce_logsumexp(x, 0)                 # [log(2), log(2), log(2)]
tf.reduce_logsumexp(x, 1)                 # [log(3), log(3)]
tf.reduce_logsumexp(x, 1, keepdims=True)  # [[log(3)], [log(3)]]
tf.reduce_logsumexp(x, [0, 1])            # log(6)
```

Parameters:
  • input_tensor – The tensor to reduce. Should have numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

tensorflow.reduce_max(input_tensor, axis=None, keepdims=False, name=None)

Computes the maximum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

Usage example:

>>> x = tf.constant([5, 1, 2, 4])
>>> print(tf.reduce_max(x))
tf.Tensor(5, shape=(), dtype=int32)
>>> x = tf.constant([-5, -1, -2, -4])
>>> print(tf.reduce_max(x))
tf.Tensor(-1, shape=(), dtype=int32)
>>> x = tf.constant([4, float('nan')])
>>> print(tf.reduce_max(x))
tf.Tensor(4.0, shape=(), dtype=float32)
>>> x = tf.constant([float('nan'), float('nan')])
>>> print(tf.reduce_max(x))
tf.Tensor(-inf, shape=(), dtype=float32)
>>> x = tf.constant([float('-inf'), float('inf')])
>>> print(tf.reduce_max(x))
tf.Tensor(inf, shape=(), dtype=float32)

See the numpy docs for np.amax and np.nanmax behavior.

Parameters:
  • input_tensor – The tensor to reduce. Should have real numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

tensorflow.reduce_mean(input_tensor, axis=None, keepdims=False, name=None)

Computes the mean of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis by computing the mean of elements across the dimensions in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

For example:

>>> x = tf.constant([[1., 1.], [2., 2.]])
>>> tf.reduce_mean(x)
<tf.Tensor: shape=(), dtype=float32, numpy=1.5>
>>> tf.reduce_mean(x, 0)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1.5, 1.5], dtype=float32)>
>>> tf.reduce_mean(x, 1)
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([1., 2.], dtype=float32)>
Parameters:
  • input_tensor – The tensor to reduce. Should have numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.mean

Please note that np.mean has a dtype parameter that could be used to specify the output type. By default this is dtype=float64. On the other hand, tf.reduce_mean has an aggressive type inference from input_tensor, for example:

>>> x = tf.constant([1, 0, 1, 0])
>>> tf.reduce_mean(x)
<tf.Tensor: shape=(), dtype=int32, numpy=0>
>>> y = tf.constant([1., 0., 1., 0.])
>>> tf.reduce_mean(y)
<tf.Tensor: shape=(), dtype=float32, numpy=0.5>

@end_compatibility

tensorflow.reduce_min(input_tensor, axis=None, keepdims=False, name=None)

Computes the minimum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

Parameters:
  • input_tensor – The tensor to reduce. Should have real numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

For example:
>>> a = tf.constant([[1, 2], [3, 4]])
>>> tf.reduce_min(a)
<tf.Tensor: shape=(), dtype=int32, numpy=1>

@compatibility(numpy) Equivalent to np.min @end_compatibility

tensorflow.reduce_prod(input_tensor, axis=None, keepdims=False, name=None)

Computes the product of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

Parameters:
  • input_tensor – The tensor to reduce. Should have numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor.

@compatibility(numpy) Equivalent to np.prod @end_compatibility
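For example, a quick sketch:

```python
import tensorflow as tf

x = tf.constant([[1., 2.], [3., 4.]])
tf.reduce_prod(x)     # 24.0
tf.reduce_prod(x, 0)  # [3., 8.]
tf.reduce_prod(x, 1)  # [2., 12.]
```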

tensorflow.reduce_sum(input_tensor, axis=None, keepdims=False, name=None)

Computes the sum of elements across dimensions of a tensor.

Reduces input_tensor along the dimensions given in axis. Unless keepdims is true, the rank of the tensor is reduced by 1 for each entry in axis. If keepdims is true, the reduced dimensions are retained with length 1.

If axis is None, all dimensions are reduced, and a tensor with a single element is returned.

For example:

```python
x = tf.constant([[1, 1, 1], [1, 1, 1]])
tf.reduce_sum(x)                    # 6
tf.reduce_sum(x, 0)                 # [2, 2, 2]
tf.reduce_sum(x, 1)                 # [3, 3]
tf.reduce_sum(x, 1, keepdims=True)  # [[3], [3]]
tf.reduce_sum(x, [0, 1])            # 6
```

Parameters:
  • input_tensor – The tensor to reduce. Should have numeric type.
  • axis – The dimensions to reduce. If None (the default), reduces all dimensions. Must be in the range [-rank(input_tensor), rank(input_tensor)).
  • keepdims – If true, retains reduced dimensions with length 1.
  • name – A name for the operation (optional).
Returns:

The reduced tensor, of the same dtype as the input_tensor.

@compatibility(numpy) Equivalent to np.sum, apart from the fact that numpy upcasts uint8 and int32 to int64 while tensorflow returns the same dtype as the input. @end_compatibility

tensorflow.register_tensor_conversion_function(base_type, conversion_func, priority=100)

Registers a function for converting objects of base_type to Tensor.

The conversion function must have the following signature:

```python
def conversion_func(value, dtype=None, name=None, as_ref=False):
    # ...
```

It must return a Tensor with the given dtype if specified. If the conversion function creates a new Tensor, it should use the given name if specified. All exceptions will be propagated to the caller.

The conversion function may return NotImplemented for some inputs. In this case, the conversion process will continue to try subsequent conversion functions.

If as_ref is true, the function must return a Tensor reference, such as a Variable.

NOTE: The conversion functions will execute in order of priority, followed by order of registration. To ensure that a conversion function F runs before another conversion function G, ensure that F is registered with a smaller priority than G.

Parameters:
  • base_type – The base type or tuple of base types for all objects that conversion_func accepts.
  • conversion_func – A function that converts instances of base_type to Tensor.
  • priority – Optional integer that indicates the priority for applying this conversion function. Conversion functions with smaller priority values run earlier than conversion functions with larger priority values. Defaults to 100.
Raises:

TypeError – If the arguments do not have the appropriate type.
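As an illustration, a minimal sketch that registers a converter for a custom wrapper type (the Celsius class and converter below are hypothetical):

```python
import tensorflow as tf

class Celsius:
    """Hypothetical wrapper type we want TensorFlow to accept as a Tensor."""
    def __init__(self, degrees):
        self.degrees = degrees

def celsius_to_tensor(value, dtype=None, name=None, as_ref=False):
    # Convert the wrapper to a plain constant Tensor.
    return tf.constant(value.degrees, dtype=dtype)

tf.register_tensor_conversion_function(Celsius, celsius_to_tensor)
print(tf.convert_to_tensor(Celsius(21.0)) + 1.0)  # tf.Tensor(22.0, ...)
```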

tensorflow.repeat(input, repeats, axis=None, name=None)

Repeat elements of input.

See also tf.concat, tf.stack, tf.tile.

Parameters:
  • input – An N-dimensional Tensor.
  • repeats – An 1-D int Tensor. The number of repetitions for each element. repeats is broadcasted to fit the shape of the given axis. len(repeats) must equal input.shape[axis] if axis is not None.
  • axis – An int. The axis along which to repeat values. By default (axis=None), use the flattened input array, and return a flat output array.
  • name – A name for the operation.
Returns:

A Tensor which has the same shape as input, except along the given axis.

If axis is None then the output array is flattened to match the flattened input array.

Example usage:

>>> repeat(['a', 'b', 'c'], repeats=[3, 0, 2], axis=0)
<tf.Tensor: shape=(5,), dtype=string,
numpy=array([b'a', b'a', b'a', b'c', b'c'], dtype=object)>
>>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=0)
<tf.Tensor: shape=(5, 2), dtype=int32, numpy=
array([[1, 2],
       [1, 2],
       [3, 4],
       [3, 4],
       [3, 4]], dtype=int32)>
>>> repeat([[1, 2], [3, 4]], repeats=[2, 3], axis=1)
<tf.Tensor: shape=(2, 5), dtype=int32, numpy=
array([[1, 1, 2, 2, 2],
       [3, 3, 4, 4, 4]], dtype=int32)>
>>> repeat(3, repeats=4)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([3, 3, 3, 3], dtype=int32)>
>>> repeat([[1,2], [3,4]], repeats=2)
<tf.Tensor: shape=(8,), dtype=int32,
numpy=array([1, 1, 2, 2, 3, 3, 4, 4], dtype=int32)>
tensorflow.required_space_to_batch_paddings(input_shape, block_shape, base_paddings=None, name=None)

Calculate padding required to make block_shape divide input_shape.

This function can be used to calculate a suitable paddings argument for use with space_to_batch_nd and batch_to_space_nd.

Parameters:
  • input_shape – int32 Tensor of shape [N].
  • block_shape – int32 Tensor of shape [N].
  • base_paddings – Optional int32 Tensor of shape [N, 2]. Specifies the minimum amount of padding to use. All elements must be >= 0. If not specified, defaults to 0.
  • name – string. Optional name prefix.
Returns:

paddings and crops are int32 Tensors of rank 2 and shape [N, 2] satisfying:

paddings[i, 0] = base_paddings[i, 0]
0 <= paddings[i, 1] - base_paddings[i, 1] < block_shape[i]
(input_shape[i] + paddings[i, 0] + paddings[i, 1]) % block_shape[i] == 0

crops[i, 0] = 0
crops[i, 1] = paddings[i, 1] - base_paddings[i, 1]

Return type:

(paddings, crops), where

Raises: ValueError if called with incompatible shapes.
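A small sketch; the expected values follow directly from the constraints above:

```python
import tensorflow as tf

# Pad a [5, 7] spatial shape so a [2, 3] block shape divides it evenly.
paddings, crops = tf.required_space_to_batch_paddings(
    input_shape=[5, 7], block_shape=[2, 3])
# paddings == [[0, 1], [0, 2]]  -> padded shape [6, 9]
# crops    == [[0, 1], [0, 2]]  -> lets batch_to_space_nd undo the padding
```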

tensorflow.reshape(tensor, shape, name=None)

Reshapes a tensor.

Given tensor, this operation returns a new tf.Tensor that has the same values as tensor in the same order, except with a new shape given by shape.

>>> t1 = [[1, 2, 3],
...       [4, 5, 6]]
>>> print(tf.shape(t1).numpy())
[2 3]
>>> t2 = tf.reshape(t1, [6])
>>> t2
<tf.Tensor: shape=(6,), dtype=int32,
  numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
>>> tf.reshape(t2, [3, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
  array([[1, 2],
         [3, 4],
         [5, 6]], dtype=int32)>

The tf.reshape does not change the order of or the total number of elements in the tensor, and so it can reuse the underlying data buffer. This makes it a fast operation independent of how big of a tensor it is operating on.

>>> tf.reshape([1, 2, 3], [2, 2])
Traceback (most recent call last):
...
InvalidArgumentError: Input to reshape is a tensor with 3 values, but the
requested shape has 4

To instead reorder the data to rearrange the dimensions of a tensor, see tf.transpose.

>>> t = [[1, 2, 3],
...      [4, 5, 6]]
>>> tf.reshape(t, [3, 2]).numpy()
array([[1, 2],
       [3, 4],
       [5, 6]], dtype=int32)
>>> tf.transpose(t, perm=[1, 0]).numpy()
array([[1, 4],
       [2, 5],
       [3, 6]], dtype=int32)

If one component of shape is the special value -1, the size of that dimension is computed so that the total size remains constant. In particular, a shape of [-1] flattens into 1-D. At most one component of shape can be -1.

>>> t = [[1, 2, 3],
...      [4, 5, 6]]
>>> tf.reshape(t, [-1])
<tf.Tensor: shape=(6,), dtype=int32,
  numpy=array([1, 2, 3, 4, 5, 6], dtype=int32)>
>>> tf.reshape(t, [3, -1])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
  array([[1, 2],
         [3, 4],
         [5, 6]], dtype=int32)>
>>> tf.reshape(t, [-1, 2])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
  array([[1, 2],
         [3, 4],
         [5, 6]], dtype=int32)>

tf.reshape(t, []) reshapes a tensor t with one element to a scalar.

>>> tf.reshape([7], []).numpy()
7

More examples:

>>> t = [1, 2, 3, 4, 5, 6, 7, 8, 9]
>>> print(tf.shape(t).numpy())
[9]
>>> tf.reshape(t, [3, 3])
<tf.Tensor: shape=(3, 3), dtype=int32, numpy=
  array([[1, 2, 3],
         [4, 5, 6],
         [7, 8, 9]], dtype=int32)>
>>> t = [[[1, 1], [2, 2]],
...      [[3, 3], [4, 4]]]
>>> print(tf.shape(t).numpy())
[2 2 2]
>>> tf.reshape(t, [2, 4])
<tf.Tensor: shape=(2, 4), dtype=int32, numpy=
  array([[1, 1, 2, 2],
         [3, 3, 4, 4]], dtype=int32)>
>>> t = [[[1, 1, 1],
...       [2, 2, 2]],
...      [[3, 3, 3],
...       [4, 4, 4]],
...      [[5, 5, 5],
...       [6, 6, 6]]]
>>> print(tf.shape(t).numpy())
[3 2 3]
>>> # Pass '[-1]' to flatten 't'.
>>> tf.reshape(t, [-1])
<tf.Tensor: shape=(18,), dtype=int32,
  numpy=array([1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4, 5, 5, 5, 6, 6, 6],
  dtype=int32)>
>>> # -- Using -1 to infer the shape --
>>> # Here -1 is inferred to be 9:
>>> tf.reshape(t, [2, -1])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
  array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
         [4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
>>> # -1 is inferred to be 2:
>>> tf.reshape(t, [-1, 9])
<tf.Tensor: shape=(2, 9), dtype=int32, numpy=
  array([[1, 1, 1, 2, 2, 2, 3, 3, 3],
         [4, 4, 4, 5, 5, 5, 6, 6, 6]], dtype=int32)>
>>> # -1 is inferred to be 3:
>>> tf.reshape(t, [ 2, -1, 3])
<tf.Tensor: shape=(2, 3, 3), dtype=int32, numpy=
  array([[[1, 1, 1],
          [2, 2, 2],
          [3, 3, 3]],
         [[4, 4, 4],
          [5, 5, 5],
          [6, 6, 6]]], dtype=int32)>
Parameters:
  • tensor – A Tensor.
  • shape – A Tensor. Must be one of the following types: int32, int64. Defines the shape of the output tensor.
  • name – Optional string. A name for the operation.
Returns:

A Tensor. Has the same type as tensor.

tensorflow.reverse(tensor, axis, name=None)

Reverses specific dimensions of a tensor.

NOTE tf.reverse has now changed behavior in preparation for 1.0. tf.reverse_v2 is currently an alias that will be deprecated before TF 1.0.

Given a tensor, and an int32 tensor axis representing the set of dimensions of tensor to reverse. This operation reverses each dimension i for which there exists j s.t. axis[j] == i.

tensor can have up to 8 dimensions. The number of dimensions specified in axis may be 0 or more entries. If an index is specified more than once, an InvalidArgument error is raised.

For example:

```
# tensor 't' is [[[[ 0,  1,  2,  3],
#                  [ 4,  5,  6,  7],
#                  [ 8,  9, 10, 11]],
#                 [[12, 13, 14, 15],
#                  [16, 17, 18, 19],
#                  [20, 21, 22, 23]]]]
# tensor 't' shape is [1, 2, 3, 4]

# 'dims' is [3] or 'dims' is [-1]
reverse(t, dims) ==> [[[[ 3,  2,  1,  0],
                        [ 7,  6,  5,  4],
                        [11, 10,  9,  8]],
                       [[15, 14, 13, 12],
                        [19, 18, 17, 16],
                        [23, 22, 21, 20]]]]

# 'dims' is [1] (or 'dims' is [-3])
reverse(t, dims) ==> [[[[12, 13, 14, 15],
                        [16, 17, 18, 19],
                        [20, 21, 22, 23]],
                       [[ 0,  1,  2,  3],
                        [ 4,  5,  6,  7],
                        [ 8,  9, 10, 11]]]]

# 'dims' is [2] (or 'dims' is [-2])
reverse(t, dims) ==> [[[[ 8,  9, 10, 11],
                        [ 4,  5,  6,  7],
                        [ 0,  1,  2,  3]],
                       [[20, 21, 22, 23],
                        [16, 17, 18, 19],
                        [12, 13, 14, 15]]]]
```

Parameters:
  • tensor – A Tensor. Must be one of the following types: uint8, int8, uint16, int16, int32, int64, bool, bfloat16, half, float32, float64, complex64, complex128, string. Up to 8-D.
  • axis – A Tensor. Must be one of the following types: int32, int64. 1-D. The indices of the dimensions to reverse. Must be in the range [-rank(tensor), rank(tensor)).
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as tensor.
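
As a quick eager-mode check, a minimal runnable sketch (assuming TensorFlow 2.x):

```python
import tensorflow as tf

t = tf.constant([[1, 2, 3],
                 [4, 5, 6]])
# Reverse along the column dimension (axis 1).
tf.reverse(t, axis=[1])  # [[3, 2, 1], [6, 5, 4]]
```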

tensorflow.reverse_sequence(input, seq_lengths, seq_axis=None, batch_axis=None, name=None)

Reverses variable length slices. (deprecated arguments)

Warning: SOME ARGUMENTS ARE DEPRECATED: (seq_dim). They will be removed in a future version. Instructions for updating: seq_dim is deprecated, use seq_axis instead

Warning: SOME ARGUMENTS ARE DEPRECATED: (batch_dim). They will be removed in a future version. Instructions for updating: batch_dim is deprecated, use batch_axis instead

This op first slices input along the dimension batch_axis, and for each slice i, reverses the first seq_lengths[i] elements along the dimension seq_axis.

The elements of seq_lengths must obey seq_lengths[i] <= input.dims[seq_dim], and seq_lengths must be a vector of length input.dims[batch_dim].

The output slice i along dimension batch_axis is then given by input slice i, with the first seq_lengths[i] slices along dimension seq_axis reversed.

Example usage:

>>> seq_lengths = [7, 2, 3, 5]
>>> input = [[1, 2, 3, 4, 5, 0, 0, 0], [1, 2, 0, 0, 0, 0, 0, 0],
...          [1, 2, 3, 4, 0, 0, 0, 0], [1, 2, 3, 4, 5, 6, 7, 8]]
>>> output = tf.reverse_sequence(input, seq_lengths, seq_axis=1, batch_axis=0)
>>> output
<tf.Tensor: shape=(4, 8), dtype=int32, numpy=
array([[0, 0, 5, 4, 3, 2, 1, 0],
       [2, 1, 0, 0, 0, 0, 0, 0],
       [3, 2, 1, 4, 0, 0, 0, 0],
       [5, 4, 3, 2, 1, 6, 7, 8]], dtype=int32)>
Parameters:
  • input – A Tensor. The input to reverse.
  • seq_lengths – A Tensor. Must be one of the following types: int32, int64. 1-D with length input.dims(batch_dim) and max(seq_lengths) <= input.dims(seq_dim)
  • seq_axis – An int. The dimension which is partially reversed.
  • batch_axis – An optional int. Defaults to 0. The dimension along which reversal is performed.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.

tensorflow.roll(input, shift, axis, name=None)

Rolls the elements of a tensor along an axis.

The elements are shifted positively (towards larger indices) by the offset of shift along the dimension of axis. Negative shift values will shift elements in the opposite direction. Elements that roll past the last position will wrap around to the first and vice versa. Multiple shifts along multiple axes may be specified.

For example:

```
# 't' is [0, 1, 2, 3, 4]
roll(t, shift=2, axis=0) ==> [3, 4, 0, 1, 2]

# shifting along multiple dimensions
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[1, -2], axis=[0, 1]) ==> [[7, 8, 9, 5, 6], [2, 3, 4, 0, 1]]

# shifting along the same axis multiple times
# 't' is [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9]]
roll(t, shift=[2, -3], axis=[1, 1]) ==> [[1, 2, 3, 4, 0], [6, 7, 8, 9, 5]]
```

Parameters:
  • input – A Tensor.
  • shift – A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. shift[i] specifies the number of places by which elements are shifted positively (towards larger indices) along the dimension specified by axis[i]. Negative shifts will roll the elements in the opposite direction.
  • axis – A Tensor. Must be one of the following types: int32, int64. Dimension must be 0-D or 1-D. axis[i] specifies the dimension that the shift shift[i] should occur. If the same axis is referenced more than once, the total shift for that axis will be the sum of all the shifts that belong to that axis.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.
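
A minimal runnable sketch of the wrap-around behavior (assuming TF 2.x eager execution):

```python
import tensorflow as tf

t = tf.constant([0, 1, 2, 3, 4])
# The last two elements wrap around to the front.
tf.roll(t, shift=2, axis=0)  # [3, 4, 0, 1, 2]
```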

tensorflow.round(x, name=None)

Rounds the values of a tensor to the nearest integer, element-wise.

Rounds half to even, also known as banker's rounding. If you want to round according to the current system rounding mode use tf::cint. For example:

```python
x = tf.constant([0.9, 2.5, 2.3, 1.5, -4.5])
tf.round(x)  # [ 1.0, 2.0, 2.0, 2.0, -4.0 ]
```

Parameters:
  • x – A Tensor of type float16, float32, float64, int32, or int64.
  • name – A name for the operation (optional).
Returns:

A Tensor of same shape and type as x.

tensorflow.saturate_cast(value, dtype, name=None)

Performs a safe saturating cast of value to dtype.

This function casts the input to dtype without applying any scaling. If there is a danger that values would over or underflow in the cast, this op applies the appropriate clamping before the cast.

Parameters:
  • value – A Tensor.
  • dtype – The desired output DType.
  • name – A name for the operation (optional).
Returns:

value safely cast to dtype.
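
A minimal sketch of the clamping behavior (assuming TF 2.x):

```python
import tensorflow as tf

x = tf.constant([-1.0, 42.0, 300.0, 1e9])
# Values outside the int8 range [-128, 127] are clamped before the cast,
# instead of overflowing as a plain tf.cast would.
tf.saturate_cast(x, tf.int8)  # [-1, 42, 127, 127]
```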

tensorflow.scalar_mul(scalar, x, name=None)

Multiplies a scalar times a Tensor or IndexedSlices object.

Intended for use in gradient code which might deal with IndexedSlices objects, which are easy to multiply by a scalar but more expensive to multiply with arbitrary tensors.

Parameters:
  • scalar – A 0-D scalar Tensor. Must have known shape.
  • x – A Tensor or IndexedSlices to be scaled.
  • name – A name for the operation (optional).
Returns:

scalar * x of the same type (Tensor or IndexedSlices) as x.

Raises:

ValueError – if scalar is not a 0-D scalar.
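
A minimal sketch (assuming TF 2.x); for ordinary tensors this is equivalent to scalar * x:

```python
import tensorflow as tf

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])
tf.scalar_mul(10.0, x)  # [[10., 20.], [30., 40.]]
```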

tensorflow.scan(fn, elems, initializer=None, parallel_iterations=10, back_prop=True, swap_memory=False, infer_shape=True, reverse=False, name=None)

scan on the list of tensors unpacked from elems on dimension 0. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.scan(fn, elems, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.scan(fn, elems))

The simplest version of scan repeatedly applies the callable fn to a sequence of elements from first to last. The elements are made of the tensors unpacked from elems on dimension 0. The callable fn takes two tensors as arguments. The first argument is the accumulated value computed from the preceding invocation of fn, and the second is the value at the current position of elems. If initializer is None, elems must contain at least one element, and its first element is used as the initializer.

Suppose that elems is unpacked into values, a list of tensors. The shape of the result tensor is [len(values)] + fn(initializer, values[0]).shape. If reverse=True, it’s fn(initializer, values[-1]).shape.

This method also allows multi-arity elems and accumulator. If elems is a (possibly nested) list or tuple of tensors, then each of these tensors must have a matching first (unpack) dimension. The second argument of fn must match the structure of elems.

If no initializer is provided, the output structure and dtypes of fn are assumed to be the same as its input; and in this case, the first argument of fn must match the structure of elems.

If an initializer is provided, then the output of fn must have the same structure as initializer; and the first argument of fn must match this structure.

For example, if elems is (t1, [t2, t3]) and initializer is [i1, i2], then an appropriate signature for fn in Python 2 is fn = lambda (acc_p1, acc_p2), (t1, [t2, t3]):, and fn must return a list, [acc_n1, acc_n2]. An alternative correct signature for fn, and the one that works in Python 3, is fn = lambda a, t:, where a and t correspond to the input tuples.

Parameters:
  • fn – The callable to be performed. It accepts two arguments. The first will have the same structure as initializer if one is provided, otherwise it will have the same structure as elems. The second will have the same (possibly nested) structure as elems. Its output must have the same structure as initializer if one is provided, otherwise it must have the same structure as elems.
  • elems – A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be the first argument to fn.
  • initializer – (optional) A tensor or (possibly nested) sequence of tensors, initial value for the accumulator, and the expected output type of fn.
  • parallel_iterations – (optional) The number of iterations allowed to run in parallel.
  • back_prop – (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
  • swap_memory – (optional) True enables GPU-CPU memory swapping.
  • infer_shape – (optional) False disables tests for consistent output shapes.
  • reverse – (optional) True scans the tensor last to first (instead of first to last).
  • name – (optional) Name prefix for the returned tensors.
Returns:

A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, and the previous accumulator value(s), from first to last (or last to first, if reverse=True).

Raises:
  • TypeError – if fn is not callable or the structure of the output of fn and initializer do not match.
  • ValueError – if the lengths of the output of fn and initializer do not match.

Examples

```python
elems = np.array([1, 2, 3, 4, 5, 6])
sum = scan(lambda a, x: a + x, elems)
# sum == [1, 3, 6, 10, 15, 21]
sum = scan(lambda a, x: a + x, elems, reverse=True)
# sum == [21, 20, 18, 15, 11, 6]
```

```python
elems = np.array([1, 2, 3, 4, 5, 6])
initializer = np.array(0)
sum_one = scan(
    lambda a, x: x[0] - x[1] + a, (elems + 1, elems), initializer)
# sum_one == [1, 2, 3, 4, 5, 6]
```

```python
elems = np.array([1, 0, 0, 0, 0, 0])
initializer = (np.array(0), np.array(1))
fibonaccis = scan(lambda a, _: (a[1], a[0] + a[1]), elems, initializer)
# fibonaccis == ([1, 1, 2, 3, 5, 8], [1, 2, 3, 5, 8, 13])
```

tensorflow.scatter_nd(indices, updates, shape, name=None)

Scatter updates into a new tensor according to indices.

Creates a new tensor by applying sparse updates to individual values or slices within a tensor (initially zero for numeric, empty for string) of the given shape according to indices. This operator is the inverse of the tf.gather_nd operator which extracts values or slices from a given tensor.

This operation is similar to tensor_scatter_add, except that the tensor is zero-initialized. Calling tf.scatter_nd(indices, values, shape) is identical to tensor_scatter_add(tf.zeros(shape, values.dtype), indices, values)

If indices contains duplicates, then their updates are accumulated (summed).

WARNING: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates – because of some numerical approximation issues, numbers summed in different order may yield different results.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]

The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.


In Python, this scatter operation would look like this:

```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
shape = tf.constant([8])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
```

The resulting tensor would look like this:

[0, 11, 0, 10, 9, 0, 0, 12]

We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.


In Python, this scatter operation would look like this:

```python
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
shape = tf.constant([4, 4, 4])
scatter = tf.scatter_nd(indices, updates, shape)
print(scatter)
```

The resulting tensor would look like this:

[[[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]],
 [[5, 5, 5, 5], [6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8]],
 [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Parameters:
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • updates – A Tensor. Updates to scatter into output.
  • shape – A Tensor. Must have the same type as indices. 1-D. The shape of the resulting tensor.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as updates.

tensorflow.searchsorted(sorted_sequence, values, side='left', out_type=tf.int32, name=None)

Searches input tensor for values on the innermost dimension.

A 2-D example:

```
sorted_sequence = [[0, 3, 9, 9, 10],
                   [1, 2, 3, 4, 5]]
values = [[2, 4, 9],
          [0, 2, 6]]

result = searchsorted(sorted_sequence, values, side="left")

result == [[1, 2, 2],
           [0, 1, 5]]

result = searchsorted(sorted_sequence, values, side="right")

result == [[1, 2, 4],
           [0, 2, 5]]
```

Parameters:
  • sorted_sequence – N-D Tensor containing a sorted sequence.
  • values – N-D Tensor containing the search values.
  • side – ‘left’ or ‘right’; ‘left’ corresponds to lower_bound and ‘right’ to upper_bound.
  • out_type – The output type (int32 or int64). Default is tf.int32.
  • name – Optional name for the operation.
Returns:

An N-D Tensor the size of values containing the result of applying either lower_bound or upper_bound (depending on side) to each value. The result is not a global index to the entire Tensor, but the index in the last dimension.

Raises:

ValueError – If the last dimension of sorted_sequence >= 2^31-1 elements. If the total size of values exceeds 2^31 - 1 elements. If the first N-1 dimensions of the two tensors don’t match.
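
A minimal runnable version of the example above (assuming TF 2.x):

```python
import tensorflow as tf

seq = tf.constant([[0, 3, 9, 9, 10]])
vals = tf.constant([[2, 4, 9]])
tf.searchsorted(seq, vals, side='left')   # [[1, 2, 2]]  (lower_bound)
tf.searchsorted(seq, vals, side='right')  # [[1, 2, 4]]  (upper_bound)
```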

tensorflow.sequence_mask(lengths, maxlen=None, dtype=tf.bool, name=None)

Returns a mask tensor representing the first N positions of each cell.

If lengths has shape [d_1, d_2, …, d_n] the resulting tensor mask has dtype dtype and shape [d_1, d_2, …, d_n, maxlen], with

` mask[i_1, i_2, ..., i_n, j] = (j < lengths[i_1, i_2, ..., i_n]) `

Examples:

```python
tf.sequence_mask([1, 3, 2], 5)  # [[True, False, False, False, False],
                                #  [True, True, True, False, False],
                                #  [True, True, False, False, False]]

tf.sequence_mask([[1, 3], [2, 0]])  # [[[True, False, False],
                                    #   [True, True, True]],
                                    #  [[True, True, False],
                                    #   [False, False, False]]]
```

Parameters:
  • lengths – integer tensor, all its values <= maxlen.
  • maxlen – scalar integer tensor, size of last dimension of returned tensor. Default is the maximum value in lengths.
  • dtype – output type of the resulting tensor.
  • name – name of the op.
Returns:

A mask tensor of shape lengths.shape + (maxlen,), cast to specified dtype.

Raises:

ValueError – if maxlen is not a scalar.

tensorflow.shape(input, out_type=tf.int32, name=None)

Returns the shape of a tensor.

See also tf.size.

This operation returns a 1-D integer tensor representing the shape of input. This represents the minimal set of known information at definition time.

For example:

>>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
>>> tf.shape(t)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([2, 2, 3], dtype=int32)>
>>> tf.shape(t).numpy()
array([2, 2, 3], dtype=int32)

Note: When using symbolic tensors, such as when using the Keras functional API, tf.shape() will return the shape of the symbolic tensor.

>>> a = tf.keras.layers.Input((None, 10))
>>> tf.shape(a)
<tf.Tensor ... shape=(3,) dtype=int32>

In these cases, using tf.Tensor.shape will return more informative results.

>>> a.shape
TensorShape([None, None, 10])

tf.shape and Tensor.shape should be identical in eager mode. Within tf.function or within a compat.v1 context, not all dimensions may be known until execution time.

Parameters:
  • input – A Tensor or SparseTensor.
  • out_type – (Optional) The specified output type of the operation (int32 or int64). Defaults to tf.int32.
  • name – A name for the operation (optional).
Returns:

A Tensor of type out_type.

tensorflow.shape_n(input, out_type=tf.int32, name=None)

Returns shape of tensors.

Parameters:
  • input – A list of at least 1 Tensor object with the same type.
  • out_type – The specified output type of the operation (int32 or int64). Defaults to tf.int32 (optional).
  • name – A name for the operation (optional).
Returns:

A list with the same length as input of Tensor objects with type out_type.
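
A minimal sketch (assuming TF 2.x): the op returns one shape tensor per input.

```python
import tensorflow as tf

a = tf.zeros([2, 3])
b = tf.zeros([4, 5, 6])
s_a, s_b = tf.shape_n([a, b])
print(s_a.numpy(), s_b.numpy())  # [2 3] [4 5 6]
```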

tensorflow.sigmoid(x, name=None)

Computes sigmoid of x element-wise.

Formula for calculating sigmoid(x): y = 1 / (1 + exp(-x)).

For x in (-inf, inf) => sigmoid(x) in (0, 1)

Example Usage:

If a positive number is large, then its sigmoid will approach 1, since the formula will be y = <large_num> / (1 + <large_num>)

>>> x = tf.constant([0.0, 1.0, 50.0, 100.0])
>>> tf.math.sigmoid(x)
<tf.Tensor: shape=(4,), dtype=float32,
numpy=array([0.5      , 0.7310586, 1.       , 1.       ], dtype=float32)>

If a negative number is large in magnitude, its sigmoid will approach 0, since the formula will be y = 1 / (1 + <large_num>)

>>> x = tf.constant([-100.0, -50.0, -1.0, 0.0])
>>> tf.math.sigmoid(x)
<tf.Tensor: shape=(4,), dtype=float32, numpy=
array([0.0000000e+00, 1.9287499e-22, 2.6894143e-01, 0.5],
      dtype=float32)>
Parameters:
  • x – A Tensor with type float16, float32, float64, complex64, or complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor with the same type as x.

Usage Example:

>>> x = tf.constant([-128.0, 0.0, 128.0], dtype=tf.float32)
>>> tf.sigmoid(x)
<tf.Tensor: shape=(3,), dtype=float32,
numpy=array([0. , 0.5, 1. ], dtype=float32)>

@compatibility(scipy) Equivalent to scipy.special.expit @end_compatibility

tensorflow.sign(x, name=None)

Returns an element-wise indication of the sign of a number.

y = sign(x) = -1 if x < 0; 0 if x == 0; 1 if x > 0.

For complex numbers, y = sign(x) = x / |x| if x != 0, otherwise y = 0.

Example usage:

>>> tf.math.sign([0., 2., -3.])
<tf.Tensor: ... numpy=array([ 0.,  1., -1.], dtype=float32)>
Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.sign(x.values, …), x.dense_shape).

tensorflow.sin(x, name=None)

Computes sine of x element-wise.

Given an input tensor, this function computes sine of every element in the tensor. Input range is (-inf, inf) and output range is [-1,1].

```python
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10, float("inf")])
tf.math.sin(x)  # [nan -0.4121185 -0.47942555 0.84147096 0.9320391 -0.87329733 -0.54402107 nan]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.sinh(x, name=None)

Computes hyperbolic sine of x element-wise.

Given an input tensor, this function computes hyperbolic sine of every element in the tensor. Input range is [-inf,inf] and output range is [-inf,inf].

```python
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 2, 10, float("inf")])
tf.math.sinh(x)  # [-inf -4.0515420e+03 -5.2109528e-01 1.1752012e+00 1.5094614e+00 3.6268604e+00 1.1013232e+04 inf]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.size(input, out_type=tf.int32, name=None)

Returns the size of a tensor.

See also tf.shape.

Returns a 0-D Tensor representing the number of elements in input of type out_type. Defaults to tf.int32.

For example:

>>> t = tf.constant([[[1, 1, 1], [2, 2, 2]], [[3, 3, 3], [4, 4, 4]]])
>>> tf.size(t)
<tf.Tensor: shape=(), dtype=int32, numpy=12>
Parameters:
  • input – A Tensor or SparseTensor.
  • name – A name for the operation (optional).
  • out_type – (Optional) The specified non-quantized numeric output type of the operation. Defaults to tf.int32.
Returns:

A Tensor of type out_type. Defaults to tf.int32.

@compatibility(numpy) Equivalent to np.size() @end_compatibility

tensorflow.slice(input_, begin, size, name=None)

Extracts a slice from a tensor.

This operation extracts a slice of size size from a tensor input_ starting at the location specified by begin. The slice size is represented as a tensor shape, where size[i] is the number of elements of the ‘i’th dimension of input_ that you want to slice. The starting location (begin) for the slice is represented as an offset in each dimension of input_. In other words, begin[i] is the offset into the i’th dimension of input_ that you want to slice from.

Note that tf.Tensor.__getitem__ is typically a more pythonic way to perform slices, as it allows you to write foo[3:7, :-2] instead of tf.slice(foo, [3, 0], [4, foo.get_shape()[1]-2]).

begin is zero-based; size is one-based. If size[i] is -1, all remaining elements in dimension i are included in the slice. In other words, this is equivalent to setting:

size[i] = input_.dim_size(i) - begin[i]

This operation requires that:

0 <= begin[i] <= begin[i] + size[i] <= Di for i in [0, n]

For example:

```python
t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])
tf.slice(t, [1, 0, 0], [1, 1, 3])  # [[[3, 3, 3]]]
tf.slice(t, [1, 0, 0], [1, 2, 3])  # [[[3, 3, 3],
                                   #   [4, 4, 4]]]
tf.slice(t, [1, 0, 0], [2, 1, 3])  # [[[3, 3, 3]],
                                   #  [[5, 5, 5]]]
```

Parameters:
  • input – A Tensor.
  • begin – An int32 or int64 Tensor.
  • size – An int32 or int64 Tensor.
  • name – A name for the operation (optional).
Returns:

A Tensor the same type as input_.

tensorflow.sort(values, axis=-1, direction='ASCENDING', name=None)

Sorts a tensor.

Usage:

```python
import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.sort(a, axis=-1, direction='ASCENDING', name=None)
c = tf.keras.backend.eval(b)
# Here, c = [  1.     2.8   10.    26.9   62.3  166.32]
```

Parameters:
  • values – 1-D or higher numeric Tensor.
  • axis – The axis along which to sort. The default is -1, which sorts the last axis.
  • direction – The direction in which to sort the values (‘ASCENDING’ or ‘DESCENDING’).
  • name – Optional name for the operation.
Returns:

A Tensor with the same dtype and shape as values, with the elements sorted along the given axis.

Raises:

ValueError – If axis is not a constant scalar, or the direction is invalid.

tensorflow.space_to_batch(input, block_shape, paddings, name=None)

SpaceToBatch for N-D tensors of type T.

This operation divides “spatial” dimensions [1, …, M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the “batch” dimension (0) such that in the output, the spatial dimensions [1, …, M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.

Parameters:
  • input – A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape – A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • paddings

    A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0.

    paddings[i] = [pad_start, pad_end] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that block_shape[i] divides input_shape[i + 1] + pad_start + pad_end.

    This operation is equivalent to the following steps:

    1. Zero-pad the start and end of dimensions [1, …, M] of the input according to paddings to produce padded of shape padded_shape.
    2. Reshape padded to reshaped_padded of shape:
       [batch] + [padded_shape[1] / block_shape[0], block_shape[0], …, padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape
    3. Permute dimensions of reshaped_padded to produce permuted_reshaped_padded of shape:
       block_shape + [batch] + [padded_shape[1] / block_shape[0], …, padded_shape[M] / block_shape[M-1]] + remaining_shape
    4. Reshape permuted_reshaped_padded to flatten block_shape into the batch dimension, producing an output tensor of shape:
       [batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], …, padded_shape[M] / block_shape[M-1]] + remaining_shape

    Some examples:

    1. For the following input of shape [1, 2, 2, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1], [2]], [[3], [4]]]]
    ```

    The output tensor has shape [4, 1, 1, 1] and value:

    ```
    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
    ```

    2. For the following input of shape [1, 2, 2, 3], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]]
    ```

    The output tensor has shape [4, 1, 1, 3] and value:

    ```
    [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
    ```

    3. For the following input of shape [1, 4, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]],
          [[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```

    The output tensor has shape [4, 2, 2, 1] and value:

    ```
    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]
    ```

    4. For the following input of shape [2, 2, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [2, 0]]:

    ```
    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]]],
         [[[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```

    The output tensor has shape [8, 1, 3, 1] and value:

    ```
    x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
         [[[0], [2], [4]]], [[[0], [10], [12]]],
         [[[0], [5], [7]]], [[[0], [13], [15]]],
         [[[0], [6], [8]]], [[[0], [14], [16]]]]
    ```

    Among others, this operation is useful for reducing atrous convolution into regular convolution.

  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.
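
A minimal runnable sketch of the shape bookkeeping (assuming TF 2.x):

```python
import tensorflow as tf

# Batch 1, a 4x4 spatial grid, 1 channel.
x = tf.reshape(tf.range(16), [1, 4, 4, 1])
y = tf.space_to_batch(x, block_shape=[2, 2], paddings=[[0, 0], [0, 0]])
print(y.shape)  # (4, 2, 2, 1): batch grows by prod(block_shape)
```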

tensorflow.space_to_batch_nd(input, block_shape, paddings, name=None)

SpaceToBatch for N-D tensors of type T.

This operation divides “spatial” dimensions [1, …, M] of the input into a grid of blocks of shape block_shape, and interleaves these blocks with the “batch” dimension (0) such that in the output, the spatial dimensions [1, …, M] correspond to the position within the grid, and the batch dimension combines both the position within a spatial block and the original batch position. Prior to division into blocks, the spatial dimensions of the input are optionally zero padded according to paddings. See below for a precise description.

Parameters:
  • input – A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
  • block_shape – A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
  • paddings

    A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0.

    paddings[i] = [pad_start, pad_end] specifies the padding for input dimension i + 1, which corresponds to spatial dimension i. It is required that block_shape[i] divides input_shape[i + 1] + pad_start + pad_end.

    This operation is equivalent to the following steps:

    1. Zero-pad the start and end of dimensions [1, …, M] of the input according to paddings to produce padded of shape padded_shape.
    2. Reshape padded to reshaped_padded of shape:
       [batch] + [padded_shape[1] / block_shape[0], block_shape[0], …, padded_shape[M] / block_shape[M-1], block_shape[M-1]] + remaining_shape
    3. Permute dimensions of reshaped_padded to produce permuted_reshaped_padded of shape:
       block_shape + [batch] + [padded_shape[1] / block_shape[0], …, padded_shape[M] / block_shape[M-1]] + remaining_shape
    4. Reshape permuted_reshaped_padded to flatten block_shape into the batch dimension, producing an output tensor of shape:
       [batch * prod(block_shape)] + [padded_shape[1] / block_shape[0], …, padded_shape[M] / block_shape[M-1]] + remaining_shape

    Some examples:

    1. For the following input of shape [1, 2, 2, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1], [2]], [[3], [4]]]]
    ```

    The output tensor has shape [4, 1, 1, 1] and value:

    ```
    [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
    ```

    2. For the following input of shape [1, 2, 2, 3], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1, 2, 3], [4, 5, 6]],
          [[7, 8, 9], [10, 11, 12]]]]
    ```

    The output tensor has shape [4, 1, 1, 3] and value:

    ```
    [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
    ```

    3. For the following input of shape [1, 4, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [0, 0]]:

    ```
    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]],
          [[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```

    The output tensor has shape [4, 2, 2, 1] and value:

    ```
    x = [[[[1], [3]], [[9], [11]]],
         [[[2], [4]], [[10], [12]]],
         [[[5], [7]], [[13], [15]]],
         [[[6], [8]], [[14], [16]]]]
    ```

    4. For the following input of shape [2, 2, 4, 1], block_shape = [2, 2], and paddings = [[0, 0], [2, 0]]:

    ```
    x = [[[[1],  [2],  [3],  [4]],
          [[5],  [6],  [7],  [8]]],
         [[[9],  [10], [11], [12]],
          [[13], [14], [15], [16]]]]
    ```

    The output tensor has shape [8, 1, 3, 1] and value:

    ```
    x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
         [[[0], [2], [4]]], [[[0], [10], [12]]],
         [[[0], [5], [7]]], [[[0], [13], [15]]],
         [[[0], [6], [8]]], [[[0], [14], [16]]]]
    ```

    Among others, this operation is useful for reducing atrous convolution into regular convolution.

  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.

tensorflow.split(value, num_or_size_splits, axis=0, num=None, name='split')

Splits a tensor value into a list of sub tensors.

See also tf.unstack.

If num_or_size_splits is an integer, then value is split along the dimension axis into num_split smaller tensors. This requires that value.shape[axis] is divisible by num_split.

If num_or_size_splits is a 1-D Tensor (or list), we call it size_splits and value is split into len(size_splits) elements. The shape of the i-th element has the same size as the value except along dimension axis where the size is size_splits[i].

For example:

>>> x = tf.Variable(tf.random.uniform([5, 30], -1, 1))

Split x into 3 tensors along dimension 1:

>>> s0, s1, s2 = tf.split(x, num_or_size_splits=3, axis=1)
>>> tf.shape(s0).numpy()
array([ 5, 10], dtype=int32)

Split x into 3 tensors with sizes [4, 15, 11] along dimension 1:

>>> split0, split1, split2 = tf.split(x, [4, 15, 11], 1)
>>> tf.shape(split0).numpy()
array([5, 4], dtype=int32)
>>> tf.shape(split1).numpy()
array([ 5, 15], dtype=int32)
>>> tf.shape(split2).numpy()
array([ 5, 11], dtype=int32)

Parameters:
  • value – The Tensor to split.
  • num_or_size_splits – Either an integer indicating the number of splits along axis or a 1-D integer Tensor or Python list containing the sizes of each output tensor along axis. If a scalar, then it must evenly divide value.shape[axis]; otherwise the sum of sizes along the split axis must match that of the value.
  • axis – An integer or scalar int32 Tensor. The dimension along which to split. Must be in the range [-rank(value), rank(value)). Defaults to 0.
  • num – Optional, used to specify the number of outputs when it cannot be inferred from the shape of size_splits.
  • name – A name for the operation (optional).
Returns:

if num_or_size_splits is a scalar returns a list of num_or_size_splits Tensor objects; if num_or_size_splits is a 1-D Tensor returns num_or_size_splits.get_shape[0] Tensor objects resulting from splitting value.

Raises:

ValueError – If num is unspecified and cannot be inferred.

tensorflow.sqrt(x, name=None)

Computes element-wise square root of the input tensor.

Note: This operation does not support integer types.

>>> x = tf.constant([[4.0], [16.0]])
>>> tf.sqrt(x)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
  array([[2.],
         [4.]], dtype=float32)>
>>> y = tf.constant([[-4.0], [16.0]])
>>> tf.sqrt(y)
<tf.Tensor: shape=(2, 1), dtype=float32, numpy=
  array([[nan],
         [ 4.]], dtype=float32)>
>>> z = tf.constant([[-1.0], [16.0]], dtype=tf.complex128)
>>> tf.sqrt(z)
<tf.Tensor: shape=(2, 1), dtype=complex128, numpy=
  array([[0.0+1.j],
         [4.0+0.j]])>

Note: In order to support complex numbers, please provide an input tensor of complex64 or complex128.

Parameters:
  • x – A tf.Tensor of type bfloat16, half, float32, float64, complex64, complex128
  • name – A name for the operation (optional).
Returns:

A tf.Tensor of same size, type and sparsity as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.sqrt(x.values, …), x.dense_shape)

tensorflow.square(x, name=None)

Computes square of x element-wise.

I.e., \(y = x * x = x^2\).

>>> tf.math.square([-2., 0., 3.])
<tf.Tensor: shape=(3,), dtype=float32, numpy=array([4., 0., 9.], dtype=float32)>
Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.square(x.values, …), x.dense_shape)

tensorflow.squeeze(input, axis=None, name=None)

Removes dimensions of size 1 from the shape of a tensor.

Given a tensor input, this operation returns a tensor of the same type with all dimensions of size 1 removed. If you don’t want to remove all size 1 dimensions, you can remove specific size 1 dimensions by specifying axis.

For example:

```python
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t))  # [2, 3]
```

Or, to remove specific size 1 dimensions:

```python
# 't' is a tensor of shape [1, 2, 1, 3, 1, 1]
tf.shape(tf.squeeze(t, [2, 4]))  # [1, 2, 3, 1]
```

Unlike the older op tf.compat.v1.squeeze, this op does not accept a deprecated squeeze_dims argument.

Note: if input is a tf.RaggedTensor, then this operation takes O(N) time, where N is the number of elements in the squeezed dimensions.

Parameters:
  • input – A Tensor. The input to squeeze.
  • axis – An optional list of ints. Defaults to []. If specified, only squeezes the dimensions listed. The dimension index starts at 0. It is an error to squeeze a dimension that is not 1. Must be in the range [-rank(input), rank(input)). Must be specified if input is a RaggedTensor.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input. Contains the same data as input, but has one or more dimensions of size 1 removed.

Raises:

ValueError – The input cannot be converted to a tensor, or the specified axis cannot be squeezed.
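
A runnable version of the examples above (assuming TF 2.x; the all-zeros t is a stand-in for any tensor of that shape):

```python
import tensorflow as tf

t = tf.zeros([1, 2, 1, 3, 1, 1])
print(tf.squeeze(t).shape)          # (2, 3)
print(tf.squeeze(t, [2, 4]).shape)  # (1, 2, 3, 1)
```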

tensorflow.stack(values, axis=0, name='stack')

Stacks a list of rank-R tensors into one rank-(R+1) tensor.

See also tf.concat, tf.tile, tf.repeat.

Packs the list of tensors in values into a tensor with rank one higher than each tensor in values, by packing them along the axis dimension. Given a list of length N of tensors of shape (A, B, C);

if axis == 0 then the output tensor will have the shape (N, A, B, C). if axis == 1 then the output tensor will have the shape (A, N, B, C). Etc.

For example:

>>> x = tf.constant([1, 4])
>>> y = tf.constant([2, 5])
>>> z = tf.constant([3, 6])
>>> tf.stack([x, y, z])
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 4],
       [2, 5],
       [3, 6]], dtype=int32)>
>>> tf.stack([x, y, z], axis=1)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6]], dtype=int32)>

This is the opposite of unstack. The numpy equivalent is np.stack.

>>> np.array_equal(np.stack([x, y, z]), tf.stack([x, y, z]))
True
Parameters:
  • values – A list of Tensor objects with the same shape and type.
  • axis – An int. The axis to stack along. Defaults to the first dimension. Negative values wrap around, so the valid range is [-(R+1), R+1).
  • name – A name for this operation (optional).
Returns:

A stacked Tensor with the same type as values.

Return type:

output

Raises:

ValueError – If axis is out of the range [-(R+1), R+1).

tensorflow.stop_gradient(input, name=None)

Stops gradient computation.

When executed in a graph, this op outputs its input tensor as-is.

When building ops to compute gradients, this op prevents the contribution of its inputs to be taken into account. Normally, the gradient generator adds ops to a graph to compute the derivatives of a specified 'loss' by recursively finding out inputs that contributed to its computation. If you insert this op in the graph, its inputs are masked from the gradient generator. They are not taken into account for computing gradients.

This is useful any time you want to compute a value with TensorFlow but need to pretend that the value was a constant. Some examples include:

  • The EM algorithm where the M-step should not involve backpropagation through the output of the E-step.
  • Contrastive divergence training of Boltzmann machines where, when differentiating the energy function, the training must not backpropagate through the graph that generated the samples from the model.
  • Adversarial training, where no backprop should happen through the adversarial example generation process.
Parameters:
  • input – A Tensor.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.
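
A minimal sketch with tf.GradientTape (assuming TF 2.x eager execution):

```python
import tensorflow as tf

x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
    # Treat y as a constant: only the `+ x` term contributes to the gradient.
    z = tf.stop_gradient(y) + x
print(tape.gradient(z, x))  # 1.0, instead of 2*x + 1 = 7.0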

tensorflow.strided_slice(input_, begin, end, strides=None, begin_mask=0, end_mask=0, ellipsis_mask=0, new_axis_mask=0, shrink_axis_mask=0, var=None, name=None)

Extracts a strided slice of a tensor (generalized python array indexing).

Instead of calling this op directly most users will want to use the NumPy-style slicing syntax (e.g. `tensor[..., 3:4:-1, tf.newaxis, 3]`), which is supported via `tf.Tensor.__getitem__` and `tf.Variable.__getitem__`. The interface of this op is a low-level encoding of the slicing syntax.

Roughly speaking, this op extracts a slice of size (end-begin)/stride from the given input_ tensor. Starting at the location specified by begin the slice continues by adding stride to the index until all dimensions are not less than end. Note that a stride can be negative, which causes a reverse slice.

Given a Python slice input[spec0, spec1, …, specn], this function will be called as follows.

begin, end, and strides will be vectors of length n. n in general is not equal to the rank of the input_ tensor.

In each mask field (begin_mask, end_mask, ellipsis_mask, new_axis_mask, shrink_axis_mask) the ith bit will correspond to the ith spec.

If the ith bit of begin_mask is set, begin[i] is ignored and the fullest possible range in that dimension is used instead. end_mask works analogously, except with the end range.

foo[5:,:,:3] on a 7x8x9 tensor is equivalent to foo[5:7,0:8,0:3]. foo[::-1] reverses a tensor with shape 8.

If the ith bit of ellipsis_mask is set, as many unspecified dimensions as needed will be inserted between other dimensions. Only one non-zero bit is allowed in ellipsis_mask.

For example foo[3:5,…,4:5] on a shape 10x3x3x10 tensor is equivalent to foo[3:5,:,:,4:5] and foo[3:5,…] is equivalent to foo[3:5,:,:,:].

If the ith bit of new_axis_mask is set, then begin, end, and stride are ignored and a new length 1 dimension is added at this point in the output tensor.

For example, foo[:4, tf.newaxis, :2] would produce a shape (4, 1, 2) tensor.

If the ith bit of shrink_axis_mask is set, it implies that the ith specification shrinks the dimensionality by 1, taking on the value at index begin[i]. end[i] and strides[i] are ignored in this case. For example in Python one might do foo[:, 3, :] which would result in shrink_axis_mask equal to 2.

NOTE: begin and end are zero-indexed. strides entries must be non-zero.

```python
t = tf.constant([[[1, 1, 1], [2, 2, 2]],
                 [[3, 3, 3], [4, 4, 4]],
                 [[5, 5, 5], [6, 6, 6]]])
tf.strided_slice(t, [1, 0, 0], [2, 1, 3], [1, 1, 1])     # [[[3, 3, 3]]]
tf.strided_slice(t, [1, 0, 0], [2, 2, 3], [1, 1, 1])     # [[[3, 3, 3],
                                                          #   [4, 4, 4]]]
tf.strided_slice(t, [1, -1, 0], [2, -3, 3], [1, -1, 1])  # [[[4, 4, 4],
                                                          #   [3, 3, 3]]]
```

Parameters:
  • input – A Tensor.
  • begin – An int32 or int64 Tensor.
  • end – An int32 or int64 Tensor.
  • strides – An int32 or int64 Tensor.
  • begin_mask – An int32 mask.
  • end_mask – An int32 mask.
  • ellipsis_mask – An int32 mask.
  • new_axis_mask – An int32 mask.
  • shrink_axis_mask – An int32 mask.
  • var – The variable corresponding to input_ or None
  • name – A name for the operation (optional).
Returns:

A Tensor the same type as input.

tensorflow.subtract(x, y, name=None)

Returns x - y element-wise.

NOTE: Subtract supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.
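
A minimal sketch of the broadcasting behavior (assuming TF 2.x):

```python
import tensorflow as tf

x = tf.constant([[1, 2],
                 [3, 4]])
y = tf.constant([10])  # broadcast against every row of x
tf.subtract(x, y)      # [[-9, -8], [-7, -6]]
```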

tensorflow.switch_case(branch_index, branch_fns, default=None, name='switch_case')

Create a switch/case operation, i.e. an integer-indexed conditional.

See also tf.case.

This op can be substantially more efficient than tf.case when exactly one branch will be selected. tf.switch_case is more like a C++ switch/case statement than tf.case, which is more like an if/elif/elif/else chain.

The branch_fns parameter is either a dict from int to callables, or list of (int, callable) pairs, or simply a list of callables (in which case the index is implicitly the key). The branch_index Tensor is used to select an element in branch_fns with matching int key, falling back to default if none match, or max(keys) if no default is provided. The keys must form a contiguous set from 0 to len(branch_fns) - 1.

tf.switch_case supports nested structures as implemented in tf.nest. All callables must return the same (possibly nested) value structure of lists, tuples, and/or named tuples.

Example:

Pseudocode:

```c++
switch (branch_index) {  // c-style switch
  case 0: return 17;
  case 1: return 31;
  default: return -1;
}
```

or

```python
branches = {0: lambda: 17, 1: lambda: 31}
branches.get(branch_index, lambda: -1)()
```

Expressions:

```python
def f1(): return tf.constant(17)
def f2(): return tf.constant(31)
def f3(): return tf.constant(-1)
r = tf.switch_case(branch_index, branch_fns={0: f1, 1: f2}, default=f3)
# Equivalent: tf.switch_case(branch_index, branch_fns={0: f1, 1: f2, 2: f3})
```

Parameters:
  • branch_index – An int Tensor specifying which of branch_fns should be executed.
  • branch_fns – A dict mapping ints to callables, or a list of (int, callable) pairs, or simply a list of callables (in which case the index serves as the key). Each callable must return a matching structure of tensors.
  • default – Optional callable that returns a structure of tensors.
  • name – A name for this operation (optional).
Returns:

The tensors returned by the callable identified by branch_index, or those returned by default if no key matches and default was provided, or those returned by the max-keyed branch_fn if no default is provided.

Raises:
  • TypeError – If branch_fns is not a list/dictionary.
  • TypeError – If branch_fns is a list but does not contain 2-tuples or callables.
  • TypeError – If fns[i] is not callable for any i, or default is not callable.
tensorflow.tan(x, name=None)

Computes tan of x element-wise.

Given an input tensor, this function computes tangent of every element in the tensor. Input range is (-inf, inf) and output range is (-inf, inf). If input lies outside the boundary, nan is returned.

```python
x = tf.constant([-float("inf"), -9, -0.5, 1, 1.2, 200, 10000, float("inf")])
tf.math.tan(x)  # [nan 0.45231566 -0.5463025 1.5574077 2.572152 -1.7925274 0.32097113 nan]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int32, int64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.tanh(x, name=None)

Computes hyperbolic tangent of x element-wise.

Given an input tensor, this function computes hyperbolic tangent of every element in the tensor. Input range is [-inf, inf] and output range is [-1,1].

```python
x = tf.constant([-float("inf"), -5, -0.5, 1, 1.2, 2, 3, float("inf")])
tf.math.tanh(x)  # [-1. -0.99990916 -0.46211717 0.7615942 0.8336547 0.9640276 0.9950547 1.]
```

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, complex64, complex128.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

If x is a SparseTensor, returns SparseTensor(x.indices, tf.math.tanh(x.values, …), x.dense_shape)

tensorflow.tensor_scatter_nd_add(tensor, indices, updates, name=None)

Adds sparse updates to an existing tensor according to indices.

This operation creates a new tensor by adding sparse updates to the passed in tensor. This operation is very similar to tf.scatter_nd_add, except that the updates are added onto an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]

The simplest form of tensor_scatter_add is to add individual elements to a tensor by index. For example, say we want to add 4 elements in a rank-1 tensor with 8 elements.

In Python, this scatter add operation would look like this:

```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
print(updated)
```

The resulting tensor would look like this:

[1, 12, 1, 11, 10, 1, 1, 13]

We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter add operation would look like this:

```python
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_nd_add(tensor, indices, updates)
print(updated)
```

The resulting tensor would look like this:

[[[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
 [[6, 6, 6, 6], [7, 7, 7, 7], [8, 8, 8, 8], [9, 9, 9, 9]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Parameters:
  • tensor – A Tensor. Tensor to copy/update.
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • updates – A Tensor. Must have the same type as tensor. Updates to scatter into output.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as tensor.

tensorflow.tensor_scatter_nd_sub(tensor, indices, updates, name=None)

Subtracts sparse updates from an existing tensor according to indices.

This operation creates a new tensor by subtracting sparse updates from the passed in tensor. This operation is very similar to tf.scatter_nd_sub, except that the updates are subtracted from an existing tensor (as opposed to a variable). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]

The simplest form of tensor_scatter_sub is to subtract individual elements from a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.

In Python, this scatter subtract operation would look like this:

```python
indices = tf.constant([[4], [3], [1], [7]])
updates = tf.constant([9, 10, 11, 12])
tensor = tf.ones([8], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
```

The resulting tensor would look like this:

[1, -10, 1, -9, -8, 1, 1, -11]

We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter subtract operation would look like this:

```python
indices = tf.constant([[0], [2]])
updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]],
                       [[5, 5, 5, 5], [6, 6, 6, 6],
                        [7, 7, 7, 7], [8, 8, 8, 8]]])
tensor = tf.ones([4, 4, 4], dtype=tf.int32)
updated = tf.tensor_scatter_nd_sub(tensor, indices, updates)
print(updated)
```

The resulting tensor would look like this:

[[[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]],
 [[-4, -4, -4, -4], [-5, -5, -5, -5], [-6, -6, -6, -6], [-7, -7, -7, -7]],
 [[1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1], [1, 1, 1, 1]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Parameters:
  • tensor – A Tensor. Tensor to copy/update.
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • updates – A Tensor. Must have the same type as tensor. Updates to scatter into output.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as tensor.

tensorflow.tensor_scatter_nd_update(tensor, indices, updates, name=None)

Scatter updates into an existing tensor according to indices.

This operation creates a new tensor by applying sparse updates to the passed in tensor. This operation is very similar to tf.scatter_nd, except that the updates are scattered onto an existing tensor (as opposed to a zero-tensor). If the memory for the existing tensor cannot be re-used, a copy is made and updated.

If indices contains duplicates, then their updates are accumulated (summed).

WARNING: The order in which updates are applied is nondeterministic, so the output will be nondeterministic if indices contains duplicates – because of some numerical approximation issues, numbers summed in different order may yield different results.

indices is an integer tensor containing indices into a new tensor of shape shape. The last dimension of indices can be at most the rank of shape:

indices.shape[-1] <= shape.rank

The last dimension of indices corresponds to indices into elements (if indices.shape[-1] = shape.rank) or slices (if indices.shape[-1] < shape.rank) along dimension indices.shape[-1] of shape. updates is a tensor with shape

indices.shape[:-1] + shape[indices.shape[-1]:]

The simplest form of scatter is to insert individual elements in a tensor by index. For example, say we want to insert 4 scattered elements in a rank-1 tensor with 8 elements.


In Python, this scatter operation would look like this:

>>> indices = tf.constant([[4], [3], [1], [7]])
>>> updates = tf.constant([9, 10, 11, 12])
>>> tensor = tf.ones([8], dtype=tf.int32)
>>> print(tf.tensor_scatter_nd_update(tensor, indices, updates))
tf.Tensor([ 1 11  1 10  9  1  1 12], shape=(8,), dtype=int32)

We can also insert entire slices of a higher-rank tensor all at once. For example, we can insert two slices in the first dimension of a rank-3 tensor with two matrices of new values.

In Python, this scatter operation would look like this:

>>> indices = tf.constant([[0], [2]])
>>> updates = tf.constant([[[5, 5, 5, 5], [6, 6, 6, 6],
...                         [7, 7, 7, 7], [8, 8, 8, 8]],
...                        [[5, 5, 5, 5], [6, 6, 6, 6],
...                         [7, 7, 7, 7], [8, 8, 8, 8]]])
>>> tensor = tf.ones([4, 4, 4], dtype=tf.int32)
>>> print(tf.tensor_scatter_nd_update(tensor, indices, updates).numpy())
[[[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]
 [[5 5 5 5]
  [6 6 6 6]
  [7 7 7 7]
  [8 8 8 8]]
 [[1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]
  [1 1 1 1]]]

Note that on CPU, if an out of bound index is found, an error is returned. On GPU, if an out of bound index is found, the index is ignored.

Parameters:
  • tensor – A Tensor. Tensor to copy/update.
  • indices – A Tensor. Must be one of the following types: int32, int64. Index tensor.
  • updates – A Tensor. Must have the same type as tensor. Updates to scatter into output.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as tensor.

tensorflow.tensordot(a, b, axes, name=None)

Tensor contraction of a and b along specified axes and outer product.

Tensordot (also known as tensor contraction) sums the product of elements from a and b over the indices specified by a_axes and b_axes. The lists a_axes and b_axes specify those pairs of axes along which to contract the tensors. The axis a_axes[i] of a must have the same dimension as axis b_axes[i] of b for all i in range(0, len(a_axes)). The lists a_axes and b_axes must have identical length and consist of unique integers that specify valid axes for each of the tensors. Additionally outer product is supported by passing axes=0.

This operation corresponds to numpy.tensordot(a, b, axes).

Example 1: When a and b are matrices (order 2), the case axes = 1 is equivalent to matrix multiplication.

Example 2: When a and b are matrices (order 2), the case axes = [[1], [0]] is equivalent to matrix multiplication.

Example 3: When a and b are matrices (order 2), the case axes=0 gives the outer product, a tensor of order 4.

Example 4: Suppose that \(a_{ijk}\) and \(b_{lmn}\) represent two tensors of order 3. Then, contract(a, b, [[0], [2]]) is the order-4 tensor \(c_{jklm}\) whose entry corresponding to the indices \((j,k,l,m)\) is given by:

\( c_{jklm} = \sum_i a_{ijk} b_{lmi} \).

In general, order(c) = order(a) + order(b) - 2*len(axes[0]).
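To make Examples 1–3 concrete, here is a minimal sketch (the shapes are made up for illustration):

```python
import tensorflow as tf

a = tf.ones([2, 3])
b = tf.ones([3, 4])

# axes=1: contract the last axis of a with the first axis of b,
# i.e. ordinary matrix multiplication -> shape (2, 4).
print(tf.tensordot(a, b, axes=1).shape)

# axes=[[1], [0]]: the same contraction, written out explicitly.
print(tf.tensordot(a, b, axes=[[1], [0]]).shape)

# axes=0: no contraction at all, the outer product -> shape (2, 3, 3, 4).
print(tf.tensordot(a, b, axes=0).shape)
```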

Parameters:
  • a – Tensor of type float32 or float64.
  • b – Tensor with the same type as a.
  • axes – Either a scalar N, or a list or an int32 Tensor of shape [2, k]. If axes is a scalar, sum over the last N axes of a and the first N axes of b in order. If axes is a list or Tensor, the first and second row contain the set of unique integers specifying axes along which the contraction is computed, for a and b, respectively. The number of axes for a and b must be equal. If axes=0, computes the outer product between a and b.
  • name – A name for the operation (optional).
Returns:

A Tensor with the same type as a.

Raises:
  • ValueError – If the shapes of a, b, and axes are incompatible.
  • IndexError – If the values in axes exceed the rank of the corresponding tensor.
tensorflow.tile(input, multiples, name=None)

Constructs a tensor by tiling a given tensor.

This operation creates a new tensor by replicating input multiples times. The output tensor’s i’th dimension has input.dims(i) * multiples[i] elements, and the values of input are replicated multiples[i] times along the ‘i’th dimension. For example, tiling [a b c d] by [2] produces [a b c d a b c d].

>>> a = tf.constant([[1,2,3],[4,5,6]], tf.int32)
>>> b = tf.constant([1,2], tf.int32)
>>> tf.tile(a, b)
<tf.Tensor: shape=(2, 6), dtype=int32, numpy=
array([[1, 2, 3, 1, 2, 3],
       [4, 5, 6, 4, 5, 6]], dtype=int32)>
>>> c = tf.constant([2,1], tf.int32)
>>> tf.tile(a, c)
<tf.Tensor: shape=(4, 3), dtype=int32, numpy=
array([[1, 2, 3],
       [4, 5, 6],
       [1, 2, 3],
       [4, 5, 6]], dtype=int32)>
>>> d = tf.constant([2,2], tf.int32)
>>> tf.tile(a, d)
<tf.Tensor: shape=(4, 6), dtype=int32, numpy=
array([[1, 2, 3, 1, 2, 3],
       [4, 5, 6, 4, 5, 6],
       [1, 2, 3, 1, 2, 3],
       [4, 5, 6, 4, 5, 6]], dtype=int32)>
Parameters:
  • input – A Tensor. 1-D or higher.
  • multiples – A Tensor. Must be one of the following types: int32, int64. 1-D. Length must be the same as the number of dimensions in input
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as input.

tensorflow.timestamp(name=None)

Provides the time since epoch in seconds.

Returns the timestamp as a float64 for seconds since the Unix epoch.

Note: the timestamp is computed when the op is executed, not when it is added to the graph.
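For illustration, a rough sketch of timing an eager computation with this op (the workload is arbitrary):

```python
import tensorflow as tf

t0 = tf.timestamp()                                 # float64 seconds since epoch
x = tf.reduce_sum(tf.random.uniform([1000, 1000]))  # arbitrary workload
elapsed = tf.timestamp() - t0                       # elapsed wall-clock seconds
print(float(elapsed))
```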

Parameters: name – A name for the operation (optional).
Returns: A Tensor of type float64.
tensorflow.transpose(a, perm=None, conjugate=False, name='transpose')

Transposes a, where a is a Tensor.

Permutes the dimensions according to the value of perm.

The returned tensor’s dimension i will correspond to the input dimension perm[i]. If perm is not given, it is set to (n-1…0), where n is the rank of the input tensor. Hence by default, this operation performs a regular matrix transpose on 2-D input Tensors.

If conjugate is True and a.dtype is either complex64 or complex128 then the values of a are conjugated and transposed.

@compatibility(numpy) In numpy transposes are memory-efficient constant time operations as they simply return a new view of the same data with adjusted strides.

TensorFlow does not support strides, so transpose returns a new tensor with the items permuted. @end_compatibility

For example:

>>> x = tf.constant([[1, 2, 3], [4, 5, 6]])
>>> tf.transpose(x)
<tf.Tensor: shape=(3, 2), dtype=int32, numpy=
array([[1, 4],
       [2, 5],
       [3, 6]], dtype=int32)>

Equivalently, you could call tf.transpose(x, perm=[1, 0]).

If x is complex, setting conjugate=True gives the conjugate transpose:

>>> x = tf.constant([[1 + 1j, 2 + 2j, 3 + 3j],
...                  [4 + 4j, 5 + 5j, 6 + 6j]])
>>> tf.transpose(x, conjugate=True)
<tf.Tensor: shape=(3, 2), dtype=complex128, numpy=
array([[1.-1.j, 4.-4.j],
       [2.-2.j, 5.-5.j],
       [3.-3.j, 6.-6.j]])>

‘perm’ is more useful for n-dimensional tensors where n > 2:

>>> x = tf.constant([[[ 1,  2,  3],
...                   [ 4,  5,  6]],
...                  [[ 7,  8,  9],
...                   [10, 11, 12]]])

As above, simply calling tf.transpose will default to perm=[2,1,0].

To take the transpose of the matrices in dimension-0 (such as when you are transposing matrices where 0 is the batch dimension), you would set perm=[0,2,1].

>>> tf.transpose(x, perm=[0, 2, 1])
<tf.Tensor: shape=(2, 3, 2), dtype=int32, numpy=
array([[[ 1,  4],
        [ 2,  5],
        [ 3,  6]],
        [[ 7, 10],
        [ 8, 11],
        [ 9, 12]]], dtype=int32)>

Note: this common batched-matrix case has a shorthand, tf.linalg.matrix_transpose.

Parameters:
  • a – A Tensor.
  • perm – A permutation of the dimensions of a. This should be a vector.
  • conjugate – Optional bool. Setting it to True is mathematically equivalent to tf.math.conj(tf.transpose(input)).
  • name – A name for the operation (optional).
Returns:

A transposed Tensor.

tensorflow.truediv(x, y, name=None)

Divides x / y elementwise (using Python 3 division operator semantics).

NOTE: Prefer using the Tensor operator or tf.divide which obey Python division operator semantics.

This function forces Python 3 division operator semantics where all integer arguments are cast to floating types first. This op is generated by normal x / y division in Python 3 and in Python 2.7 with from __future__ import division. If you want integer division that rounds down, use x // y or tf.math.floordiv.

x and y must have the same numeric type. If the inputs are floating point, the output will have the same type. If the inputs are integral, the inputs are cast to float32 for int8 and int16 and float64 for int32 and int64 (matching the behavior of Numpy).
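A small sketch of the casting rules described above (outputs shown as comments):

```python
import tensorflow as tf

# int32 inputs are cast to float64 (matching NumPy).
print(tf.truediv(tf.constant([1, 2, 3]), tf.constant(2)))
# tf.Tensor([0.5 1.  1.5], shape=(3,), dtype=float64)

# Floating-point inputs keep their dtype.
print(tf.truediv(tf.constant([1.0, 3.0]), tf.constant(2.0)))
# tf.Tensor([0.5 1.5], shape=(2,), dtype=float32)
```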

Parameters:
  • xTensor numerator of numeric type.
  • yTensor denominator of numeric type.
  • name – A name for the operation (optional).
Returns:

x / y evaluated in floating point.

Raises:

TypeError – If x and y have different dtypes.

tensorflow.truncatediv(x, y, name=None)

Returns x / y element-wise for integer types.

Truncation designates that negative numbers will round fractional quantities toward zero. I.e. -7 / 5 = -1. This matches C semantics but it is different than Python semantics. See FloorDiv for a division function that matches Python Semantics.

NOTE: truncatediv supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
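To make the C-versus-Python rounding concrete, a minimal comparison (values chosen for illustration):

```python
import tensorflow as tf

x = tf.constant([-7, 7])
y = tf.constant([5, 5])

print(tf.truncatediv(x, y))    # [-1  1]  rounds toward zero (C semantics)
print(tf.math.floordiv(x, y))  # [-2  1]  rounds down (Python semantics)
```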

Parameters:
  • x – A Tensor. Must be one of the following types: bfloat16, half, float32, float64, uint8, int8, uint16, int16, int32, int64, complex64, complex128.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.truncatemod(x, y, name=None)

Returns element-wise remainder of division. This emulates C semantics in that the result here is consistent with a truncating divide. E.g. truncate(x / y) * y + truncate_mod(x, y) = x.

NOTE: truncatemod supports broadcasting. More about broadcasting [here](http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html)
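A short sketch contrasting the two remainder conventions; the identity above holds for the truncating pair:

```python
import tensorflow as tf

x = tf.constant([-7, 7])
y = tf.constant([5, 5])

print(tf.truncatemod(x, y))    # [-2  2]  sign follows x (C semantics)
print(tf.math.floormod(x, y))  # [ 3  2]  sign follows y (Python % semantics)

# Identity: truncatediv(x, y) * y + truncatemod(x, y) == x
```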

Parameters:
  • x – A Tensor. Must be one of the following types: int32, int64, bfloat16, half, float32, float64.
  • y – A Tensor. Must have the same type as x.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as x.

tensorflow.tuple(tensors, control_inputs=None, name=None)

Group tensors together.

This creates a tuple of tensors with the same values as the tensors argument, except that the value of each tensor is only returned after the values of all tensors have been computed.

control_inputs contains additional ops that have to finish before this op finishes, but whose outputs are not returned.

This can be used as a “join” mechanism for parallel computations: all the argument tensors can be computed in parallel, but the values of any tensor returned by tuple are only available after all the parallel computations are done.

See also tf.group and tf.control_dependencies.
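As an illustration, a minimal sketch of the "join" pattern (the values are made up; in eager mode the grouping is mostly a no-op, so a tf.function is used here):

```python
import tensorflow as tf

@tf.function
def joined(a, b):
  # Neither output is returned until both inputs have been computed,
  # so tf.tuple acts as a join point for the two parallel branches.
  doubled, tripled = tf.tuple([a * 2, b * 3])
  return doubled, tripled

print(joined(tf.constant(1), tf.constant(2)))  # (2, 6)
```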

Parameters:
  • tensors – A list of Tensor or IndexedSlices objects; some entries can be None.
  • control_inputs – List of additional ops to finish before returning.
  • name – (optional) A name to use as a name_scope for the operation.
Returns:

Same as tensors.

Raises:
  • ValueError – If tensors does not contain any Tensor or IndexedSlices.
  • TypeError – If control_inputs is not a list of Operation or Tensor objects.
tensorflow.unique(x, out_idx=tf.int32, name=None)

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x; x does not need to be sorted. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. In other words:

y[idx[i]] = x[i] for i in [0, 1,…,rank(x) - 1]

Examples:

```
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx = unique(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
```

```
# tensor 'x' is [4, 5, 1, 2, 3, 3, 4, 5]
y, idx = unique(x)
y ==> [4, 5, 1, 2, 3]
idx ==> [0, 1, 2, 3, 4, 4, 0, 1]
```

Parameters:
  • x – A Tensor. 1-D.
  • out_idx – An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
  • name – A name for the operation (optional).
Returns:

A tuple of Tensor objects (y, idx).

y: A Tensor. Has the same type as x.
idx: A Tensor of type out_idx.

tensorflow.unique_with_counts(x, out_idx=tf.int32, name=None)

Finds unique elements in a 1-D tensor.

This operation returns a tensor y containing all of the unique elements of x sorted in the same order that they occur in x. This operation also returns a tensor idx the same size as x that contains the index of each value of x in the unique output y. Finally, it returns a third tensor count that contains the count of each element of y in x. In other words:

y[idx[i]] = x[i] for i in [0, 1,…,rank(x) - 1]

For example:

```
# tensor 'x' is [1, 1, 2, 4, 4, 4, 7, 8, 8]
y, idx, count = unique_with_counts(x)
y ==> [1, 2, 4, 7, 8]
idx ==> [0, 0, 1, 2, 2, 2, 3, 4, 4]
count ==> [2, 1, 3, 1, 2]
```

Parameters:
  • x – A Tensor. 1-D.
  • out_idx – An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int32.
  • name – A name for the operation (optional).
Returns:

A tuple of Tensor objects (y, idx, count).

y: A Tensor. Has the same type as x.
idx: A Tensor of type out_idx.
count: A Tensor of type out_idx.

tensorflow.unravel_index(indices, dims, name=None)

Converts an array of flat indices into a tuple of coordinate arrays.

Example:

```
y = tf.unravel_index(indices=[2, 5, 7], dims=[3, 3])
# 'dims' represent a hypothetical (3, 3) tensor of indices:
# [[0, 1, *2*],
#  [3, 4, *5*],
#  [6, *7*, 8]]
# For each entry from 'indices', this operation returns
# its coordinates (marked with '*'), such as
# 2 ==> (0, 2)
# 5 ==> (1, 2)
# 7 ==> (2, 1)
y ==> [[0, 1, 2], [2, 2, 1]]
```

@compatibility(numpy) Equivalent to np.unravel_index @end_compatibility

Parameters:
  • indices – A Tensor. Must be one of the following types: int32, int64. A 0-D or 1-D int Tensor whose elements are indices into the flattened version of an array of dimensions dims.
  • dims – A Tensor. Must have the same type as indices. A 1-D int Tensor. The shape of the array to use for unraveling indices.
  • name – A name for the operation (optional).
Returns:

A Tensor. Has the same type as indices.

tensorflow.unstack(value, num=None, axis=0, name='unstack')

Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors.

Unpacks num tensors from value by chipping it along the axis dimension. If num is not specified (the default), it is inferred from value’s shape. If value.shape[axis] is not known, ValueError is raised.

For example, given a tensor of shape (A, B, C, D):

  • If axis == 0 then the i’th tensor in output is the slice value[i, :, :, :] and each tensor in output will have shape (B, C, D). (Note that the dimension unpacked along is gone, unlike split.)
  • If axis == 1 then the i’th tensor in output is the slice value[:, i, :, :] and each tensor in output will have shape (A, C, D).
  • Etc.

This is the opposite of stack.
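A minimal sketch with a rank-2 tensor (values are made up):

```python
import tensorflow as tf

x = tf.constant([[1, 2, 3],
                 [4, 5, 6]])      # shape (2, 3)

rows = tf.unstack(x, axis=0)      # 2 tensors, each of shape (3,)
cols = tf.unstack(x, axis=1)      # 3 tensors, each of shape (2,)

assert len(rows) == 2 and rows[0].shape == (3,)
assert len(cols) == 3 and cols[1].shape == (2,)
```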

Parameters:
  • value – A rank R > 0 Tensor to be unstacked.
  • num – An int. The length of the dimension axis. Automatically inferred if None (the default).
  • axis – An int. The axis to unstack along. Defaults to the first dimension. Negative values wrap around, so the valid range is [-R, R).
  • name – A name for the operation (optional).
Returns:

The list of Tensor objects unstacked from value.

Raises:
  • ValueError – If num is unspecified and cannot be inferred.
  • ValueError – If axis is out of the range [-R, R).
tensorflow.variable_creator_scope(variable_creator)

Scope which defines a variable creation function to be used by variable().

variable_creator is expected to be a function with the following signature:

```
def variable_creator(next_creator, **kwargs)

```

The creator is supposed to eventually call the next_creator to create a variable if it does want to create a variable and not call Variable or ResourceVariable directly. This helps make creators composable. A creator may choose to create multiple variables, return already existing variables, or simply register that a variable was created and defer to the next creators in line. Creators can also modify the keyword arguments seen by the next creators.

Custom getters in the variable scope will eventually resolve down to these custom creators when they do create variables.

The valid keyword arguments in kwargs are:

  • initial_value: A Tensor, or Python object convertible to a Tensor, which is the initial value for the Variable. The initial value must have a shape specified unless validate_shape is set to False. Can also be a callable with no argument that returns the initial value when called. In that case, dtype must be specified. (Note that initializer functions from init_ops.py must first be bound to a shape before being used here.)
  • trainable: If True, the default, GradientTapes automatically watch uses of this Variable.
  • validate_shape: If False, allows the variable to be initialized with a value of unknown shape. If True, the default, the shape of initial_value must be known.
  • caching_device: Optional device string describing where the Variable should be cached for reading. Defaults to the Variable’s device. If not None, caches on another device. Typical use is to cache on the device where the Ops using the Variable reside, to deduplicate copying through Switch and other conditional statements.
  • name: Optional name for the variable. Defaults to ‘Variable’ and gets uniquified automatically.
  • dtype: If set, initial_value will be converted to the given type. If None, either the datatype will be kept (if initial_value is a Tensor), or convert_to_tensor will decide.
  • constraint: A constraint function to be applied to the variable after updates by some algorithms.
  • synchronization: Indicates when a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableSynchronization. By default the synchronization is set to AUTO and the current DistributionStrategy chooses when to synchronize.
  • aggregation: Indicates how a distributed variable will be aggregated. Accepted values are constants defined in the class tf.VariableAggregation.

This set may grow over time, so it’s important the signature of creators is as mentioned above.
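As an illustration, a minimal sketch of a composable creator — here a made-up logging_creator that records the requested name and defers to the next creator in line:

```python
import tensorflow as tf

def logging_creator(next_creator, **kwargs):
  # Inspect (or rewrite) the kwargs, then defer to the next creator.
  print("creating variable:", kwargs.get("name"))
  return next_creator(**kwargs)

with tf.variable_creator_scope(logging_creator):
  v = tf.Variable(1.0, name="my_var")  # prints: creating variable: my_var
```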

Parameters: variable_creator – the passed creator.
Yields: A scope in which the creator is active.
tensorflow.vectorized_map(fn, elems)

Parallel map on the list of tensors unpacked from elems on dimension 0.

This method works similarly to tf.map_fn but is optimized to run much faster, possibly with a much larger memory footprint. The speedups are obtained by vectorization (see https://arxiv.org/pdf/1903.04243.pdf). The idea behind vectorization is to semantically launch all the invocations of fn in parallel and fuse corresponding operations across all these invocations. This fusion is done statically at graph generation time and the generated code is often similar in performance to a manually fused version.

Because tf.vectorized_map fully parallelizes the batch, this method will generally be significantly faster than using tf.map_fn, especially in eager mode. However this is an experimental feature and currently has a lot of limitations:

  • There should be no data dependency between the different semantic invocations of fn, i.e. it should be safe to map the elements of the inputs in any order.
  • Stateful kernels may mostly not be supported since these often imply a data dependency. We do support a limited set of such stateful kernels though (like RandomFoo, Variable operations like reads, etc).
  • fn has limited support for control flow operations. tf.cond in particular is not supported.
  • fn should return nested structure of Tensors or Operations. However if an Operation is returned, it should have zero outputs.
  • The shape and dtype of any intermediate or output tensors in the computation of fn should not depend on the input to fn.

Examples:

```python
def outer_product(a):
  return tf.tensordot(a, a, 0)

batch_size = 100
a = tf.ones((batch_size, 32, 32))
c = tf.vectorized_map(outer_product, a)
assert c.shape == (batch_size, 32, 32, 32, 32)
```

```python
# Computing per-example gradients
batch_size = 10
num_features = 32
layer = tf.keras.layers.Dense(1)

def model_fn(arg):
  with tf.GradientTape() as g:
    inp, label = arg
    inp = tf.expand_dims(inp, 0)
    label = tf.expand_dims(label, 0)
    prediction = layer(inp)
    loss = tf.nn.l2_loss(label - prediction)
  return g.gradient(loss, (layer.kernel, layer.bias))

inputs = tf.random.uniform([batch_size, num_features])
labels = tf.random.uniform([batch_size, 1])
per_example_gradients = tf.vectorized_map(model_fn, (inputs, labels))
assert per_example_gradients[0].shape == (batch_size, num_features, 1)
assert per_example_gradients[1].shape == (batch_size, 1)
```

Parameters:
  • fn – The callable to be performed. It accepts one argument, which will have the same (possibly nested) structure as elems, and returns a possibly nested structure of Tensors and Operations, which may be different than the structure of elems.
  • elems – A tensor or (possibly nested) sequence of tensors, each of which will be unpacked along their first dimension. The nested sequence of the resulting slices will be mapped over by fn.
Returns:

A tensor or (possibly nested) sequence of tensors. Each tensor packs the results of applying fn to tensors unpacked from elems along the first dimension, from first to last.

tensorflow.where(condition, x=None, y=None, name=None)

Return the elements where condition is True (multiplexing x and y).

This operator has two modes: in one mode both x and y are provided, in another mode neither are provided. condition is always expected to be a tf.Tensor of type bool.

#### Retrieving indices of True elements

If x and y are not provided (both are None):

tf.where will return the indices of condition that are True, in the form of a 2-D tensor with shape (n, d). (Where n is the number of matching indices in condition, and d is the number of dimensions in condition).

Indices are output in row-major order.

>>> tf.where([True, False, False, True])
<tf.Tensor: shape=(2, 1), dtype=int64, numpy=
array([[0],
       [3]])>
>>> tf.where([[True, False], [False, True]])
<tf.Tensor: shape=(2, 2), dtype=int64, numpy=
array([[0, 0],
       [1, 1]])>
>>> tf.where([[[True, False], [False, True], [True, True]]])
<tf.Tensor: shape=(4, 3), dtype=int64, numpy=
array([[0, 0, 0],
       [0, 1, 1],
       [0, 2, 0],
       [0, 2, 1]])>

#### Multiplexing between x and y

If x and y are provided (both have non-None values):

tf.where will choose an output shape from the shapes of condition, x, and y that all three shapes are [broadcastable](https://docs.scipy.org/doc/numpy/reference/ufuncs.html) to.

The condition tensor acts as a mask that chooses whether the corresponding element / row in the output should be taken from x (if the element in condition is True) or y (if it is False).

>>> tf.where([True, False, False, True], [1,2,3,4], [100,200,300,400])
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([  1, 200, 300,   4],
dtype=int32)>
>>> tf.where([True, False, False, True], [1,2,3,4], [100])
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([  1, 100, 100,   4],
dtype=int32)>
>>> tf.where([True, False, False, True], [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([  1, 100, 100,   4],
dtype=int32)>
>>> tf.where([True, False, False, True], 1, 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([  1, 100, 100,   1],
dtype=int32)>
>>> tf.where(True, [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([1, 2, 3, 4],
dtype=int32)>
>>> tf.where(False, [1,2,3,4], 100)
<tf.Tensor: shape=(4,), dtype=int32, numpy=array([100, 100, 100, 100],
dtype=int32)>
Parameters:
  • condition – A tf.Tensor of type bool.
  • x – If provided, a Tensor which is of the same type as y, and has a shape broadcastable with condition and y.
  • y – If provided, a Tensor which is of the same type as x, and has a shape broadcastable with condition and x.
  • name – A name for the operation (optional).
Returns:

If x and y are provided: a Tensor with the same type as x and y, and a shape that is broadcast from condition, x, and y.

Otherwise: a Tensor of type int64 with shape (num_true, dim_size(condition)).

Raises:

ValueError – When exactly one of x or y is non-None, or the shapes are not all broadcastable.

tensorflow.while_loop(cond, body, loop_vars, shape_invariants=None, parallel_iterations=10, back_prop=True, swap_memory=False, maximum_iterations=None, name=None)

Repeat body while the condition cond is true. (deprecated argument values)

Warning: SOME ARGUMENT VALUES ARE DEPRECATED: (back_prop=False). They will be removed in a future version. Instructions for updating: back_prop=False is deprecated. Consider using tf.stop_gradient instead. Instead of: results = tf.while_loop(c, b, vars, back_prop=False) Use: results = tf.nest.map_structure(tf.stop_gradient, tf.while_loop(c, b, vars))

cond is a callable returning a boolean scalar tensor. body is a callable returning a (possibly nested) tuple, namedtuple or list of tensors of the same arity (length and structure) and types as loop_vars. loop_vars is a (possibly nested) tuple, namedtuple or list of tensors that is passed to both cond and body. cond and body both take as many arguments as there are loop_vars.

In addition to regular Tensors or IndexedSlices, the body may accept and return TensorArray objects. The flows of the TensorArray objects will be appropriately forwarded between loops and during gradient calculations.

Note that while_loop calls cond and body exactly once (inside the call to while_loop, and not at all during Session.run()). while_loop stitches together the graph fragments created during the cond and body calls with some additional graph nodes to create the graph flow that repeats body until cond returns false.

For correctness, tf.while_loop() strictly enforces shape invariants for the loop variables. A shape invariant is a (possibly partial) shape that is unchanged across the iterations of the loop. An error will be raised if the shape of a loop variable after an iteration is determined to be more general than or incompatible with its shape invariant. For example, a shape of [11, None] is more general than a shape of [11, 17], and [11, 21] is not compatible with [11, 17]. By default (if the argument shape_invariants is not specified), it is assumed that the initial shape of each tensor in loop_vars is the same in every iteration. The shape_invariants argument allows the caller to specify a less specific shape invariant for each loop variable, which is needed if the shape varies between iterations. The tf.Tensor.set_shape function may also be used in the body function to indicate that the output loop variable has a particular shape. The shape invariant for SparseTensor and IndexedSlices are treated specially as follows:

a) If a loop variable is a SparseTensor, the shape invariant must be TensorShape([r]) where r is the rank of the dense tensor represented by the sparse tensor. It means the shapes of the three tensors of the SparseTensor are ([None], [None, r], [r]). NOTE: The shape invariant here is the shape of the SparseTensor.dense_shape property. It must be the shape of a vector.

b) If a loop variable is an IndexedSlices, the shape invariant must be a shape invariant of the values tensor of the IndexedSlices. It means the shapes of the three tensors of the IndexedSlices are (shape, [shape[0]], [shape.ndims]).

while_loop implements non-strict semantics, enabling multiple iterations to run in parallel. The maximum number of parallel iterations can be controlled by parallel_iterations, which gives users some control over memory consumption and execution order. For correct programs, while_loop should return the same result for any parallel_iterations > 0.

For training, TensorFlow stores the tensors that are produced in the forward inference and are needed in back propagation. These tensors are a main source of memory consumption and often cause OOM errors when training on GPUs. When the flag swap_memory is true, we swap out these tensors from GPU to CPU. This for example allows us to train RNN models with very long sequences and large batches.

Parameters:
  • cond – A callable that represents the termination condition of the loop.
  • body – A callable that represents the loop body.
  • loop_vars – A (possibly nested) tuple, namedtuple or list of numpy array, Tensor, and TensorArray objects.
  • shape_invariants – The shape invariants for the loop variables.
  • parallel_iterations – The number of iterations allowed to run in parallel. It must be a positive integer.
  • back_prop – (optional) Deprecated. False disables support for back propagation. Prefer using tf.stop_gradient instead.
  • swap_memory – Whether GPU-CPU memory swap is enabled for this loop.
  • maximum_iterations – Optional maximum number of iterations of the while loop to run. If provided, the cond output is AND-ed with an additional condition ensuring the number of iterations executed is no greater than maximum_iterations.
  • name – Optional name prefix for the returned tensors.
Returns:

The output tensors for the loop variables after the loop. The return value has the same structure as loop_vars.

Raises:
  • TypeError – if cond or body is not callable.
  • ValueError – if loop_vars is empty.

Example:

```python
i = tf.constant(0)
c = lambda i: tf.less(i, 10)
b = lambda i: (tf.add(i, 1), )
r = tf.while_loop(c, b, [i])
```

Example with nesting and a namedtuple:

```python
import collections
Pair = collections.namedtuple('Pair', 'j, k')
ijk_0 = (tf.constant(0), Pair(tf.constant(1), tf.constant(2)))
c = lambda i, p: i < 10
b = lambda i, p: (i + 1, Pair((p.j + p.k), (p.j - p.k)))
ijk_final = tf.while_loop(c, b, ijk_0)
```

Example using shape_invariants:

```python
i0 = tf.constant(0)
m0 = tf.ones([2, 2])
c = lambda i, m: i < 10
b = lambda i, m: [i + 1, tf.concat([m, m], axis=0)]
tf.while_loop(
    c, b, loop_vars=[i0, m0],
    shape_invariants=[i0.get_shape(), tf.TensorShape([None, 2])])
```

Example which demonstrates non-strict semantics: In the following example, the final value of the counter i does not depend on x. So the while_loop can increment the counter parallel to updates of x. However, because the loop counter at one loop iteration depends on the value at the previous iteration, the loop counter itself cannot be incremented in parallel. Hence if we just want the final value of the counter (which we print on the line print(sess.run(i))), then x will never be incremented, but the counter will be updated on a single thread. Conversely, if we want the value of the output (which we print on the line print(sess.run(out).shape)), then the counter may be incremented on its own thread, while x can be incremented in parallel on a separate thread. In the extreme case, it is conceivable that the thread incrementing the counter runs until completion before x is incremented even a single time. The only thing that can never happen is that the thread updating x can never get ahead of the counter thread because the thread incrementing x depends on the value of the counter.

```python
import tensorflow as tf

n = 10000
x = tf.constant(list(range(n)))
c = lambda i, x: i < n
b = lambda i, x: (tf.compat.v1.Print(i + 1, [i]),
                  tf.compat.v1.Print(x + 1, [i], "x:"))
i, out = tf.while_loop(c, b, (0, x))
with tf.compat.v1.Session() as sess:
    print(sess.run(i))  # prints [0] ... [9999]

    # The following line may increment the counter and x in parallel.
    # The counter thread may get ahead of the other thread, but not the
    # other way around. So you may see things like
    # [9996] x:[9987]
    # meaning that the counter thread is on iteration 9996,
    # while the other thread is on iteration 9987
    print(sess.run(out).shape)
```

tensorflow.zeros(shape, dtype=tf.float32, name=None)

Creates a tensor with all elements set to zero.

This operation returns a tensor of type dtype with shape shape and all elements set to zero.

>>> tf.zeros([3, 4], tf.int32)
<tf.Tensor: shape=(3, 4), dtype=int32, numpy=
array([[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 0, 0]], dtype=int32)>
Parameters:
  • shape – A list of integers, a tuple of integers, or a 1-D Tensor of type int32.
  • dtype – The DType of an element in the resulting Tensor.
  • name – Optional string. A name for the operation.
Returns:

A Tensor with all elements set to zero.

tensorflow.zeros_initializer

Alias of tensorflow.python.ops.init_ops_v2.Zeros.

tensorflow.zeros_like(input, dtype=None, name=None)

Creates a tensor with all elements set to zero.

See also tf.zeros.

Given a single tensor or array-like object (input), this operation returns a tensor of the same type and shape as input with all elements set to zero. Optionally, you can use dtype to specify a new type for the returned tensor.

Examples:

>>> tensor = tf.constant([[1, 2, 3], [4, 5, 6]])
>>> tf.zeros_like(tensor)
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[0, 0, 0],
       [0, 0, 0]], dtype=int32)>
>>> tf.zeros_like(tensor, dtype=tf.float32)
<tf.Tensor: shape=(2, 3), dtype=float32, numpy=
array([[0., 0., 0.],
       [0., 0., 0.]], dtype=float32)>
>>> tf.zeros_like([[1, 2, 3], [4, 5, 6]])
<tf.Tensor: shape=(2, 3), dtype=int32, numpy=
array([[0, 0, 0],
       [0, 0, 0]], dtype=int32)>
Parameters:
  • input – A Tensor or array-like object.
  • dtype – A type for the returned Tensor. Must be float16, float32, float64, int8, uint8, int16, uint16, int32, int64, complex64, complex128, bool or string (optional).
  • name – A name for the operation (optional).
Returns:

A Tensor with all elements set to zero.