Struct tensorflow::Graph

pub struct Graph { /* private fields */ }

Represents a computation graph. Graphs may be shared between sessions. Graphs are thread-safe when used as directed.

Implementations

impl Graph
pub fn new_operation(
    &mut self,
    op_type: &str,
    operation_name: &str
) -> Result<OperationDescription<'_>, NulError>

The operation will only be added to the graph when finish_operation() is called (assuming finish_operation() does not return an error). The graph must not be deleted until after finish_operation() is called.
pub fn operation_by_name(
    &self,
    operation_name: &str
) -> Result<Option<Operation>, NulError>

Returns the operation in the graph with the given name, if it exists. If the operation does not exist, returns Ok(None).
pub fn operation_by_name_required(
    &self,
    operation_name: &str
) -> Result<Operation, Status>

Like operation_by_name, except that failure to find the operation is considered an error.
pub fn operation_iter(&self) -> OperationIter<'_>

Iterates over the operations in the graph.
pub fn num_dims<I: Into<Output>>(&self, output: I) -> Result<c_int>

Returns the number of dimensions of the Tensor referenced by output. If the number of dimensions in the shape is unknown, returns -1.

Returns an error if output is not in graph.
pub fn tensor_shape<I: Into<Output>>(&self, output: I) -> Result<Shape>

Returns the shape of the Tensor referenced by output.

Returns an error if output is not in graph.
pub fn import_graph_def(
    &mut self,
    graph_def: &[u8],
    options: &ImportGraphDefOptions
) -> Result<()>

Import the graph serialized in graph_def.
pub fn import_graph_def_with_results(
    &mut self,
    graph_def: &[u8],
    options: &ImportGraphDefOptions
) -> Result<ImportGraphDefResults>

Import the graph serialized in graph_def, returning the results of the import.
pub fn import_graph_def_with_return_outputs(
    &mut self,
    graph_def: &[u8],
    options: &ImportGraphDefOptions
) -> Result<Vec<Output>>

Import the graph serialized in graph_def, returning the return outputs specified in options.
pub fn copy_function(
    &mut self,
    func: &Function,
    grad: Option<&Function>
) -> Result<()>

Adds a copy of function func, and optionally its gradient function grad, to the graph. Once func/grad is added to the graph, it can be called by creating an operation using the function's name. Any changes to func/grad (including deleting it) done after this method returns won't affect the copy of func/grad in the graph. If func or grad are already in the graph, copy_function has no effect on them, but can establish the function->gradient relationship between them if func does not already have a gradient. If func already has a gradient different from grad, an error is returned.

If grad is None and func is not in the graph, func is added without a gradient. If grad is None and func is in the graph, copy_function is a noop. grad must have an appropriate signature as described in the doc of GradientDef in tensorflow/core/framework/function.proto.

If successful, returns () and func and grad are added to the graph. Otherwise, an error is returned and the graph is unmodified.
pub fn to_function<S: AsRef<str>>(
    &self,
    fn_name: &str,
    append_hash_to_fn_name: bool,
    opers: Option<&[&Operation]>,
    inputs: &[Output],
    outputs: &[Output],
    output_names: Option<&[S]>,
    opts: &FunctionOptions,
    description: Option<&str>
) -> Result<Function>

Create a Function from a Graph.
Arguments

- fn_name - the name of the new Function. Should match the operation name (OpDef.name) regexp [A-Z][A-Za-z0-9_.\-/]*. If append_hash_to_fn_name is false, fn_name must be distinct from other function and operation names (at least those registered in graphs where this function will be used).
- append_hash_to_fn_name - If true, the actual name of the function will be fn_name appended with '_<hash_of_this_function's_definition>'. If false, the function's name will be fn_name.
- opers - Array of operations to become the body of the function, or None.
  - If None, all the operations in the graph will become part of the function except operations referenced in inputs. These operations must have a single output (these operations are typically placeholders created for the sole purpose of representing an input; we can relax this constraint if there are compelling use cases).
  - If Some, all operations in it will become part of the function. In particular, no automatic skipping of dummy input operations is performed.
- inputs - array of Outputs that specify the inputs to the function. The names used for function inputs are normalized names of the operations (usually placeholders) pointed to by inputs. These operation names should start with a letter. Normalization will convert all letters to lowercase and non-alphanumeric characters to '_' to make resulting names match the "[a-z][a-z0-9_]*" pattern for operation argument names. inputs cannot contain the same tensor twice.
- outputs - array of Outputs that specify the outputs of the function. outputs can contain the same tensor more than once.
- output_names - The names of the function's outputs. The output_names array must either have the same length as outputs or be None. In the former case, the names should match the regular expression for ArgDef names - "[a-z][a-z0-9_]*". In the latter case, names for outputs will be generated automatically.
- opts - various options for the function, e.g. XLA's inlining control.
- description - optional human-readable description of this function.
Note that when the same Output is listed as both an input and an output, the corresponding function's output will be equal to this input, instead of the original node's output.

Callers must also satisfy the following constraints:

- inputs cannot refer to Outputs within a control flow context. For example, one cannot use the output of a "switch" node as input.
- inputs and outputs cannot have reference types. Reference types are not exposed through the C API and are being replaced with Resources. We support reference types inside a function's body to support legacy code. Do not use them in new code.
- Every node in the function's body must have all of its inputs (including control inputs). In other words, for every node in the body, each input must either be listed in inputs or come from another node in the body. In particular, it is an error to have a control edge going from a node outside of the body into a node in the body. This applies to control edges going from nodes referenced in inputs to nodes in the body when the former nodes are not in the body (automatically skipped or not included in the explicitly specified body).
Returns

A newly created Function instance.
pub fn num_functions(&self) -> c_int
Returns the number of functions registered in the graph.
pub fn get_functions(&self) -> Result<Vec<Function>>
Returns functions registered in the graph.
pub fn get_op_def(&self, op_name: &str) -> Result<Vec<u8>>

Returns the serialized OpDef proto with name op_name, or a bad status if no such op exists. This can return OpDefs of functions copied into the graph.
pub fn versions(&self) -> Result<Vec<u8>>
Returns the serialized VersionDef proto for this graph.
pub fn try_evaluate_constant<T: TensorType>(
    &self,
    output: &Output
) -> Result<Option<Tensor<T>>>

Attempts to evaluate output. This will only be possible if output doesn't depend on any graph inputs (this function is safe to call even if that isn't the case, though).

If the evaluation is successful, this function returns Some(tensor); otherwise it returns None. An error status is returned if something is wrong with the graph or input, or if the requested type doesn't match the type of the tensor.
pub fn add_gradients(
    &mut self,
    prefix: Option<&str>,
    y: &[Output],
    x: &[Output],
    dx: Option<&[Output]>
) -> Result<Vec<Option<Output>>>

Adds operations to compute the partial derivatives of the sum of ys w.r.t. xs, i.e., d(y_1 + y_2 + ...)/dx_1, d(y_1 + y_2 + ...)/dx_2, ...

dx are used as initial gradients (which represent the symbolic partial derivatives of some loss function L w.r.t. y). dx must be None or have the same length as y.

If dx is None, the implementation will use dx of OnesLike for all shapes in y.
prefix names the scope into which all gradient operations are added. prefix must be unique within the provided graph, otherwise this operation will fail. If prefix is None, gradient nodes are automatically named under the "gradients/" prefix. To guarantee name uniqueness, subsequent calls to the same graph will append an incremental tag to the prefix: "gradients_1/", "gradients_2/", ...

WARNING: This function does not yet support all the gradients that Python supports. See https://www.tensorflow.org/code/tensorflow/cc/gradients/README.md for instructions on how to add more C++ gradients.