Struct tensorflow::Graph

pub struct Graph { /* fields omitted */ }

Represents a computation graph. Graphs may be shared between sessions. Graphs are thread-safe when used as directed.

Methods

impl Graph

Creates a new graph.

The operation will only be added to the graph when finish_operation() is called (assuming finish_operation() does not return an error). The graph must not be deleted until after finish_operation() has been called.
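The two-step pattern above can be sketched as follows. This is a hedged sketch built on the tensorflow crate's builder API; the method names (`new_operation`, `set_attr_tensor`, `set_attr_type`, `finish`) and exact signatures are assumptions, since the signatures are not shown in this page.

```rust
// Hypothetical sketch: adding a Const node in two steps.
use tensorflow::{DataType, Graph, Operation, Status, Tensor};

fn add_const(graph: &mut Graph) -> Result<Operation, Status> {
    // A scalar f32 tensor holding the constant's value.
    let value = Tensor::new(&[]).with_values(&[42.0f32])?;
    let mut desc = graph.new_operation("Const", "answer")?;
    desc.set_attr_tensor("value", value)?;
    desc.set_attr_type("dtype", DataType::Float)?;
    // Nothing has been added to the graph until this call:
    desc.finish()
}
```

Note that dropping the description without calling `finish()` leaves the graph unchanged, which matches the contract described above.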

Returns the operation in the graph with the given name, if it exists. If the operation does not exist, returns Ok(None).

Like operation_by_name, except that failure to find the operation is considered an error.
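The two lookup flavors differ only in how absence is reported. A hedged sketch (the `_required` suffix is an assumption about the crate's naming):

```rust
// Hypothetical sketch: looking up an operation by name.
use tensorflow::{Graph, Status};

fn main() -> Result<(), Status> {
    let graph = Graph::new();
    // Absent operation: Ok(None), not an error.
    assert!(graph.operation_by_name("missing")?.is_none());
    // The `_required` variant reports absence as an Err instead.
    assert!(graph.operation_by_name_required("missing").is_err());
    Ok(())
}
```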


Iterates over the operations in the graph.
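A minimal sketch of iterating over a graph's operations, assuming the iterator method is named `operation_iter` and that `Operation::name()` and `Operation::op_type()` return `Result<String>`:

```rust
// Hypothetical sketch: listing every operation in the graph.
use tensorflow::Graph;

fn list_ops(graph: &Graph) {
    for op in graph.operation_iter() {
        println!("{} ({})", op.name().unwrap(), op.op_type().unwrap());
    }
}
```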

Returns the graph definition as a protobuf.

Returns the number of dimensions of the Tensor referenced by output.

If the number of dimensions in the shape is unknown, returns -1.

Returns an error if:

  • output is not in graph.

Returns the shape of the Tensor referenced by output.

Returns an error if:

  • output is not in graph.
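The rank and shape queries above can be combined as in the following hedged sketch; the `Output` construction, the `Clone` bound, and the by-value signatures are assumptions:

```rust
// Hypothetical sketch: querying rank and shape of a tensor in the graph.
use tensorflow::{Graph, Output, Status};

fn describe(graph: &Graph, out: Output) -> Result<(), Status> {
    let rank = graph.num_dims(out.clone())?; // -1 if the rank is unknown
    if rank >= 0 {
        // Individual dimension sizes may still be unknown.
        let shape = graph.tensor_shape(out)?;
        println!("rank {}: {:?}", rank, shape);
    }
    Ok(())
}
```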

Imports the graph serialized in graph_def.

Adds a copy of function func and optionally its gradient function grad to the graph. Once func/grad is added to the graph, it can be called by creating an operation using the function's name. Any changes to func/grad (including deleting it) made after this method returns will not affect the copy of func/grad in the graph. If func or grad is already in the graph, copy_function has no effect on it, but it can establish the function->gradient relationship between the two if func does not already have a gradient. If func already has a gradient different from grad, an error is returned.

If grad is None and func is not in the graph, func is added without a gradient. If grad is None and func is in the graph, copy_function is a no-op. grad must have an appropriate signature as described in the doc for GradientDef in tensorflow/core/framework/function.proto.

If successful, returns () and func and grad are added to the graph. Otherwise, an error is returned and the graph is unmodified.
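A hedged sketch of the call described above; the signature `copy_function(&Function, Option<&Function>)` is an assumption, since the signature is not shown on this page:

```rust
// Hypothetical sketch: registering a function and its gradient function.
use tensorflow::{Function, Graph, Status};

fn register(graph: &mut Graph, func: &Function, grad: &Function) -> Result<(), Status> {
    // Establishes the func -> grad relationship if func has no gradient yet.
    graph.copy_function(func, Some(grad))
}
```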

Creates a Function from a Graph.

Arguments

  • fn_name - the name of the new Function. Should match the operation name (OpDef.name) regexp [A-Z][A-Za-z0-9_.\-/]*. If append_hash_to_fn_name is false, fn_name must be distinct from other function and operation names (at least those registered in graphs where this function will be used).
  • append_hash_to_fn_name - If true, the actual name of the function will be fn_name appended with '_<hash_of_this_function's_definition>'. If false, the function's name will be fn_name.
  • opers - Array of operations to become the body of the function, or None.
    • If None, all the operations in the graph will become part of the function except operations referenced in inputs. These operations must have a single output (these operations are typically placeholders created for the sole purpose of representing an input. We can relax this constraint if there are compelling use cases).
    • If Some, all operations in it will become part of the function. In particular, no automatic skipping of dummy input operations is performed.
  • inputs - array of Outputs that specify the inputs to the function. The names used for function inputs are normalized names of the operations (usually placeholders) pointed to by inputs. These operation names should start with a letter. Normalization will convert all letters to lowercase and non-alphanumeric characters to '_' to make resulting names match the "[a-z][a-z0-9_]*" pattern for operation argument names. inputs cannot contain the same tensor twice.
  • outputs - array of Outputs that specify the outputs of the function. outputs can contain the same tensor more than once.
  • output_names - The names of the function's outputs. output_names array must either have the same length as outputs or be None. In the former case, the names should match the regular expression for ArgDef names - "[a-z][a-z0-9_]*". In the latter case, names for outputs will be generated automatically.
  • opts - various options for the function, e.g. XLA's inlining control.
  • description - optional human-readable description of this function.

Note that when the same Output is listed as both an input and an output, the corresponding function output will be equal to that input, instead of the original node's output.

Callers must also satisfy the following constraints:

  • inputs cannot refer to Outputs within a control flow context. For example, one cannot use the output of "switch" node as input.
  • inputs and outputs cannot have reference types. Reference types are not exposed through the C API and are being replaced with Resources. Reference types are supported inside a function's body only to support legacy code; do not use them in new code.
  • Every node in the function's body must have all of its inputs (including control inputs). In other words, for every node in the body, each input must be either listed in inputs or must come from another node in the body. In particular, it is an error to have a control edge going from a node outside of the body into a node in the body. This applies to control edges going from nodes referenced in inputs to nodes in the body when the former nodes are not in the body (automatically skipped or not included in explicitly specified body).

Returns

A newly created Function instance.
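The input-name normalization described for the inputs argument (lowercase letters, non-alphanumerics mapped to '_', so that results match "[a-z][a-z0-9_]*") can be sketched as the following pure-Rust helper. This is an illustration of the stated rule, not TensorFlow's actual implementation, which may differ in detail:

```rust
/// Sketch of the input-name normalization described above: letters are
/// lowercased and every non-alphanumeric character becomes '_'. Assumes the
/// name starts with a letter, as the doc requires.
fn normalize_arg_name(name: &str) -> String {
    name.chars()
        .map(|c| {
            if c.is_ascii_alphanumeric() {
                c.to_ascii_lowercase()
            } else {
                '_'
            }
        })
        .collect()
}
```

For example, an input placeholder named "My-Input/0" would yield the function argument name "my_input_0".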

Returns the number of functions registered in the graph.

Returns functions registered in the graph.

Returns the serialized OpDef proto with name op_name, or a bad status if no such op exists. This can return OpDefs of functions copied into the graph.

Returns the serialized VersionDef proto for this graph.

Attempts to evaluate output. This will only succeed if output does not depend on any graph inputs (though this function is safe to call even if it does).

If the evaluation is successful, this function returns the tensor; otherwise it returns None. An error status is returned if something is wrong with the graph or input, or if the requested type does not match the type of the tensor.
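A hedged sketch of constant folding a graph output; the method name `try_evaluate_constant` and its generic `Result<Option<Tensor<T>>>` return are assumptions about the crate's API:

```rust
// Hypothetical sketch: trying to fold an output to a constant tensor.
use tensorflow::{Graph, Output, Status, Tensor};

fn fold(graph: &Graph, out: &Output) -> Result<(), Status> {
    // None means the output depends on graph inputs and cannot be folded;
    // a type mismatch or a broken graph surfaces as an Err instead.
    let folded: Option<Tensor<f32>> = graph.try_evaluate_constant(out)?;
    match folded {
        Some(t) => println!("evaluates to {:?}", t),
        None => println!("depends on graph inputs; cannot fold"),
    }
    Ok(())
}
```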

Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2, ...

dx are used as initial gradients (they represent the symbolic partial derivatives of some loss function L w.r.t. y). dx must be None or have the same length as y. If dx is None, the implementation will use dx of OnesLike for all shapes in y. prefix names the scope into which all gradient operations are added. prefix must be unique within the provided graph; otherwise this operation will fail. If prefix is None, gradient nodes are automatically named under the "gradients/" prefix. To guarantee name uniqueness, subsequent calls on the same graph append an incremental tag to the prefix: "gradients_1/", "gradients_2/", ...

WARNING: This function does not yet support all the gradients that Python supports. See https://www.tensorflow.org/code/tensorflow/cc/gradients/README.md for instructions on how to add more C++ gradients.
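The default gradient-scope naming described above (first call uses "gradients/", later calls append an incrementing tag) can be sketched as a small pure-Rust helper. This illustrates the stated naming scheme only; it is not part of the crate's API:

```rust
/// Sketch of the automatic gradient-prefix naming described above.
/// call_index is the number of prior add_gradients calls with prefix = None
/// on the same graph.
fn gradient_prefix(call_index: usize) -> String {
    if call_index == 0 {
        "gradients/".to_string()
    } else {
        format!("gradients_{}/", call_index)
    }
}
```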

Trait Implementations

impl Debug for Graph

Auto Trait Implementations

impl Send for Graph

impl Sync for Graph

Blanket Implementations

impl<T, U> Into for T where
    U: From<T>, 

impl<T> From for T

impl<T, U> TryFrom for T where
    T: From<U>, 

🔬 This is a nightly-only experimental API. (try_from)

The type returned in the event of a conversion error.

impl<T> Borrow for T where
    T: ?Sized

impl<T> Any for T where
    T: 'static + ?Sized

impl<T, U> TryInto for T where
    U: TryFrom<T>, 

🔬 This is a nightly-only experimental API. (try_from)

The type returned in the event of a conversion error.

impl<T> BorrowMut for T where
    T: ?Sized