API Reference
Complete listing of all documented public functions and types in ITensorNetworks.jl.
ITensorNetworks.ITensorNetwork — Type
ITensorNetwork{V}
A tensor network where each vertex holds an ITensor. Storage is split across three fields:
- graph::NamedGraph{V} — the network's graph (V is the vertex type)
- vertex_data::Dictionary{V, ITensor} — the tensor at each vertex
- ind_to_vertices::Dict{Index, Set{V}} — reverse map from each Index to the vertices it appears in
The reverse map keeps vertex lookup by shared Index O(1) and enforces the graph-edge ↔ shared-Index invariant: every Index appears at either one vertex (an external / site index) or two (a bond), and every graph edge corresponds to exactly the pair of vertices sharing at least one Index. Hyperedges (an Index shared by three or more vertices) are rejected.
Construction
ITensorNetwork(tensors)
ITensorNetwork{V}(tensors)
tensors is any collection where keys(tensors) are vertex labels and values(tensors) are the ITensors at those vertices (e.g. a Dict, a Dictionary, or a Vector{ITensor} with linear-index vertex labels). Edges are inferred from shared Indexes.
Example
julia> using ITensors: Index, ITensor
julia> i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k");
julia> tn = ITensorNetwork([ITensor(i, j), ITensor(j, k)]);
See also: IndsNetwork, TreeTensorNetwork.
ITensorNetworks.TreeTensorNetwork — Type
TreeTensorNetwork{V} <: AbstractTreeTensorNetwork{V}
A tensor network whose underlying graph is a tree. Storage mirrors ITensorNetwork — graph, vertex_data, and ind_to_vertices — plus an ortho_region tracking the orthogonality center.
TTN is an alias for TreeTensorNetwork.
Use the TreeTensorNetwork constructors to build instances, and orthogonalize to bring the network into a canonical gauge.
See also: ITensorNetwork.
ITensorNetworks.TreeTensorNetwork — Method
TreeTensorNetwork(tensors) -> TreeTensorNetwork
Construct a TreeTensorNetwork from any collection of tensors accepted by ITensorNetwork (e.g. a Dict, a Dictionary, a Vector{ITensor}, or another AbstractITensorNetwork). Edges are inferred from shared Indexes; the underlying graph must be a tree.
The result starts with ortho_region == vertices(tn) — i.e. no particular gauge is assumed. Use orthogonalize to bring the network into a canonical gauge centered at a chosen vertex or region.
Example
julia> using ITensors: Index, ITensor
julia> i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k");
julia> ttn = TreeTensorNetwork([ITensor(i, j), ITensor(j, k)]);
See also: ITensorNetwork, orthogonalize.
ITensorNetworks.TreeTensorNetwork — Method
TreeTensorNetwork(os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
TreeTensorNetwork(eltype::Type{<:Number}, os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
Convert an OpSum object os to a TreeTensorNetwork, with indices given by sites.
Base.:+ — Method
+(tn1::AbstractTreeTensorNetwork, tn2::AbstractTreeTensorNetwork; alg="directsum", kwargs...) -> TreeTensorNetwork
Add two tree tensor networks by growing the bond dimension, returning a network that represents the state tn1 + tn2. The bond dimension of the result is the sum of the bond dimensions of the two inputs.
Use truncate afterward to compress the resulting network.
Keyword Arguments
- alg="directsum": Algorithm for combining the networks. Currently only "directsum" is supported for trees.
Both networks must share the same graph structure and site indices.
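A minimal sketch of addition followed by compression (assumes ITensors and ITensorNetworks are loaded; the tensor entries and exact import lists are illustrative assumptions):

```julia
using ITensorNetworks: TreeTensorNetwork
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
a = TreeTensorNetwork([ITensor([1.0 0.0; 0.0 1.0], i, j), ITensor([1.0 0.0; 0.0 1.0], j, k)])
b = TreeTensorNetwork([ITensor([0.0 1.0; 1.0 0.0], i, j), ITensor([0.0 1.0; 1.0 0.0], j, k)])

c = a + b                        # bond dimension on the bond is now 2 + 2 = 4
c = truncate(c; cutoff = 1e-12)  # compress the result back down
```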
Base.truncate — Method
truncate(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork
Truncate the bond across edge in tn by performing an SVD and discarding small singular values. edge may be an AbstractEdge or a Pair of vertices.
This is the user-facing verb for bond compression; it delegates to left_orth, which performs the underlying left-orthogonal gauge transformation. Truncation parameters are passed as keyword arguments and forwarded to ITensors.svd:
- cutoff: Drop singular values smaller than this threshold.
- maxdim: Maximum number of singular values to keep.
- mindim: Minimum number of singular values to keep.
This operates on a single bond. For TreeTensorNetwork, the no-argument form truncate(tn; kwargs...) sweeps all bonds and is generally preferred for full recompression after addition or subspace expansion.
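A sketch of single-bond truncation (the vertex labels 1 and 2 come from the Vector constructor; the tensor values are illustrative):

```julia
using ITensorNetworks: ITensorNetwork
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(3, "j"), Index(2, "k")
tn = ITensorNetwork([ITensor(rand(2, 3), i, j), ITensor(rand(3, 2), j, k)])

# Truncate the bond between vertices 1 and 2, specified as a Pair:
tn = truncate(tn, 1 => 2; cutoff = 1e-10, maxdim = 2)
```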
See also: left_orth, Base.truncate(::AbstractTreeTensorNetwork).
Base.truncate — Method
truncate(tn::AbstractTreeTensorNetwork; root_vertex=..., kwargs...) -> TreeTensorNetwork
Truncate the bond dimensions of tn by sweeping from the leaves toward root_vertex and performing an SVD-based truncation on each bond.
Before truncating each bond the relevant subtree is first orthogonalized (controlled truncation), ensuring that discarded singular values correspond to actual truncation error. Truncation parameters are passed through kwargs.
Keyword Arguments
- root_vertex: Root of the DFS traversal. Defaults to default_root_vertex(tn).
- cutoff: Drop singular values smaller than this threshold (relative or absolute).
- maxdim: Maximum number of singular values to retain on each bond.
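A sketch of a full sweep over a small tree (tensor values and imports are illustrative assumptions):

```julia
using ITensorNetworks: TreeTensorNetwork
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(4, "j"), Index(2, "k")
psi = TreeTensorNetwork([ITensor(rand(2, 4), i, j), ITensor(rand(4, 2), j, k)])

# Sweep over every bond, keeping at most maxdim singular values per bond:
psi = truncate(psi; cutoff = 1e-12, maxdim = 2)
```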
See also: orthogonalize.
ITensorNetworks.add — Method
add(tn1::AbstractITensorNetwork, tn2::AbstractITensorNetwork) -> ITensorNetwork
Add two ITensorNetworks together by taking their direct sum (growing the bond dimension). The result represents the state tn1 + tn2, with bond dimension on each edge equal to the sum of the bond dimensions of tn1 and tn2.
Both networks must have the same vertex set and matching site indices at each vertex.
Use truncate on the result to compress back to a lower bond dimension.
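A minimal sketch on two networks with matching vertices and site indices (tensor entries are illustrative):

```julia
using ITensorNetworks: ITensorNetwork, add
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
tn1 = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])
tn2 = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

tn3 = add(tn1, tn2)  # direct sum: each bond dimension is 2 + 2 = 4
```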
See also: Base.:+ for TreeTensorNetwork, truncate.
ITensorNetworks.add — Method
add(tns::AbstractTreeTensorNetwork...; kwargs...) -> TreeTensorNetwork
Add tree tensor networks together by growing the bond dimension. Equivalent to +(tns...).
ITensorNetworks.dmrg — Method
dmrg(operator, init_state; kwargs...) -> (eigenvalue, state)
Find the lowest eigenvalue and eigenvector of operator using the Density Matrix Renormalization Group (DMRG) algorithm. This is an alias for eigsolve.
See eigsolve for the full description of arguments and keyword arguments.
Example
energy, psi = dmrg(H, psi0;
nsweeps = 10,
nsites = 2,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50)
)
ITensorNetworks.eigsolve — Method
eigsolve(operator, init_state; nsweeps, nsites=1, factorize_kwargs, sweep_kwargs...) -> (eigenvalue, state)
Find the lowest eigenvalue and corresponding eigenvector of operator using a DMRG-like sweep algorithm on a TreeTensorNetwork.
Arguments
- operator: The operator to diagonalize, typically a TreeTensorNetwork representing a Hamiltonian constructed from an OpSum (e.g. via TreeTensorNetwork(opsum, sites)).
- init_state: Initial guess for the eigenvector as a TreeTensorNetwork.
- nsweeps: Number of sweeps over the network.
- nsites=1: Number of sites optimized simultaneously per local update step (1 or 2).
- factorize_kwargs: Keyword arguments controlling bond truncation after each local solve, e.g. (; cutoff=1e-10, maxdim=50).
- outputlevel=0: Level of output to print (0 = no output, 1 = sweep-level information, 2 = step details).
Returns
A tuple (eigenvalue, state) where eigenvalue is the converged lowest eigenvalue and state is the optimized TreeTensorNetwork eigenvector.
Example
energy, psi = eigsolve(H, psi0;
nsweeps = 10,
nsites = 2,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
outputlevel = 1
)
See also: dmrg, time_evolve.
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::Op; alg="bp", kwargs...) -> Number
Compute the expectation value ⟨ψ|op|ψ⟩ / ⟨ψ|ψ⟩ for a single ITensors.Op object.
The default algorithm is belief propagation ("bp"); use alg="exact" for exact contraction.
See also: expect(ψ, op::String).
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::String, vertices; alg="bp", kwargs...) -> Dictionary
Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at each vertex in vertices.
See expect(ψ, op::String) for full documentation.
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::String; alg="bp", kwargs...) -> Dictionary
Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at every vertex of ψ.
Arguments
- ψ: The tensor network state.
- op: Name of the local operator (e.g. "Sz", "N", "Sx"), passed to ITensors.op.
- alg="bp": Contraction algorithm. "bp" uses belief propagation (efficient for loopy or large networks); "exact" performs full contraction.
Keyword Arguments (alg="bp" only)
- cache!: Optional Ref to a pre-built belief propagation cache. If provided, the cache is reused across multiple expect calls for efficiency.
- update_cache=true: Whether to update the cache before computing expectation values.
Returns
A Dictionary mapping each vertex of ψ to its expectation value.
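A sketch on a two-site network. The use of siteind to give each vertex an S=1/2 site index is an assumption for illustration; the random state is unnormalized, which is fine because expect divides by ⟨ψ|ψ⟩:

```julia
using ITensorNetworks: ITensorNetwork, expect
using ITensors: Index, ITensor, siteind

s1, s2 = siteind("S=1/2"), siteind("S=1/2")
b = Index(2, "bond")
psi = ITensorNetwork([ITensor(rand(2, 2), s1, b), ITensor(rand(2, 2), b, s2)])

sz = expect(psi, "Sz")                       # belief propagation (default)
sz_exact = expect(psi, "Sz"; alg = "exact")  # full contraction, for small networks
```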
See also: expect(ψ, op::String, vertices), expect(operator, state::AbstractTreeTensorNetwork).
ITensorNetworks.expect — Method
expect(operator::String, state::AbstractTreeTensorNetwork; vertices=vertices(state), root_vertex=...) -> Dictionary
Compute local expectation values ⟨state|op_v|state⟩ / ⟨state|state⟩ for each vertex v in vertices using exact contraction via successive orthogonalization.
The state is normalized before computing expectation values. The operator name is passed to ITensors.op; each vertex must carry exactly one site index.
Arguments
- operator: Name of the local operator, e.g. "Sz", "N", "Sx".
- state: The tree tensor network state.
- vertices: Subset of vertices at which to evaluate the operator. Defaults to all vertices.
- root_vertex: Root used for the DFS traversal order.
Returns
A Dictionary mapping each vertex to its (real-typed) expectation value.
Example
sz = expect("Sz", psi)
sz_sub = expect("Sz", psi; vertices = [1, 3, 5])
See also: expect(ψ, op::String) for general ITensorNetwork states with belief propagation support.
ITensorNetworks.left_orth! — Method
left_orth!(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork
Mutate tn so the bond at edge is left-orthogonal: SVD-factorize tn[src(edge)] into U * S * V, replace tn[src(edge)] with the left-isometric U, and absorb S * V into tn[dst(edge)]. edge may be an AbstractEdge or a Pair of vertices.
kwargs... are forwarded to ITensors.svd, so passing cutoff / maxdim / mindim truncates the bond in addition to gauging it. For a non-truncating left-orthogonal gauge, prefer LinearAlgebra.qr!(tn, edge), which uses QR instead of SVD.
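A sketch showing the gauge-only and the gauge-plus-truncate uses (tensor values are illustrative):

```julia
using ITensorNetworks: ITensorNetwork, left_orth!
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
tn = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

left_orth!(tn, 1 => 2)              # gauge only: tn[1] becomes left-isometric
left_orth!(tn, 1 => 2; maxdim = 1)  # gauge and truncate the bond to dimension 1
```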
See also: left_orth, Base.truncate(::AbstractITensorNetwork, ::AbstractEdge).
ITensorNetworks.left_orth — Method
left_orth(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork
Non-mutating version of left_orth!: return a copy of tn with the bond at edge made left-orthogonal.
ITensorNetworks.loginner — Method
loginner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute log(⟨ϕ|ψ⟩) in a numerically stable way by accumulating logarithms during contraction rather than computing the inner product directly.
Useful when the inner product would overflow or underflow in floating-point arithmetic.
Keyword Arguments
- alg="bp": Contraction algorithm, "bp" (default) or "exact".
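A minimal sketch (tensor values illustrative; with a real ψ, loginner(ψ, ψ) equals twice the log-norm):

```julia
using ITensorNetworks: ITensorNetwork, loginner
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
psi = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

logz = loginner(psi, psi; alg = "exact")  # log(⟨psi|psi⟩) without forming ⟨psi|psi⟩ itself
```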
See also: inner, lognorm.
ITensorNetworks.ortho_region — Method
ortho_region(tn::TreeTensorNetwork) -> Indices
Return the set of vertices that currently form the orthogonality center of tn.
See also: orthogonalize.
ITensorNetworks.orthogonalize — Method
orthogonalize(ttn::AbstractTreeTensorNetwork, region) -> TreeTensorNetwork
Bring ttn into an orthogonal gauge with orthogonality center at region. region may be a single vertex or a vector of vertices.
QR decompositions are applied along the unique tree path from the current ortho_region to region, so that all tensors outside region are left- or right-orthogonal with respect to that path.
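A sketch of centering the gauge on a single vertex (tensor values illustrative):

```julia
using ITensorNetworks: TreeTensorNetwork, orthogonalize, ortho_region
using ITensors: Index, ITensor

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
ttn = TreeTensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

ttn = orthogonalize(ttn, 1)  # QR sweeps move the orthogonality center to vertex 1
region = ortho_region(ttn)   # now contains just vertex 1
```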
See also: ortho_region, truncate.
ITensorNetworks.time_evolve — Method
time_evolve(operator, time_points, init_state; sweep_kwargs...) -> state
Time-evolve init_state under operator using the Time-Dependent Variational Principle (TDVP) algorithm.
The state is evolved from t=0 through the successive time points in time_points. The operator should represent the Hamiltonian H; internally the evolution exp(-i H t) is applied via a "local solver".
Arguments
- operator: The Hamiltonian as a tensor network operator (e.g. built from an OpSum).
- time_points: A vector (or range) of time values. Can be real or complex.
- init_state: The initial tensor network state.
Keyword Arguments
- nsites=2: Number of sites optimized per local update (1 or 2).
- order=4: Order of the TDVP sweep pattern and time step increments.
- factorize_kwargs: Keyword arguments for bond truncation, e.g. (; cutoff=1e-10, maxdim=50).
- outputlevel=0: Verbosity level (0 = silent, 1 = print after each time step).
- solver_kwargs: Additional keyword arguments forwarded to the local solver (time stepping algorithm).
Returns
The evolved state at last(time_points).
Example
times = 0.1:0.1:1.0
psi_t = time_evolve(H, times, psi0;
nsites = 2,
order = 4,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
outputlevel = 1
)
ITensorNetworks.update — Method
Generic interface for update with default parameters.
ITensorNetworks.update_factors — Method
Update the tensor network inside the cache out-of-place.
ITensorNetworks.update_iteration — Method
Do parallel updates between groups of edges of all message tensors. Currently the full message tensor data structure is passed to update for each edge group, although only the message tensors relevant to that group are needed.
ITensorNetworks.update_iteration — Method
Do a sequential update of the message tensors on edges.
ITensors.inner — Method
inner(ϕ::AbstractITensorNetwork, A::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute the matrix element ⟨ϕ|A|ψ⟩ where A is a tensor network operator.
Keyword Arguments
- alg="bp": Contraction algorithm. "bp" (default) or "exact".
See also: inner(ϕ, ψ).
ITensors.inner — Method
inner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute the inner product ⟨ϕ|ψ⟩ by contracting the combined bra-ket network.
Keyword Arguments
- alg="bp": Contraction algorithm. "bp" uses belief propagation (default, efficient for large or loopy networks); "exact" uses full contraction with an optimized sequence.
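A sketch comparing the two contraction algorithms on a small network (on trees and other small graphs the two should agree; tensor values are illustrative):

```julia
using ITensorNetworks: ITensorNetwork
using ITensors: Index, ITensor, inner

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
phi = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])
psi = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

z = inner(phi, psi)                       # belief propagation (default)
z_exact = inner(phi, psi; alg = "exact")  # full contraction
```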
See also: loginner, norm, inner(ϕ, A, ψ).
ITensors.inner — Method
inner(x::AbstractTreeTensorNetwork, y::AbstractTreeTensorNetwork) -> Number
Compute the inner product ⟨x|y⟩ by contracting the bra-ket network using a post-order DFS traversal rooted at root_vertex.
Both networks must have the same graph structure and compatible site indices.
See also: loginner, norm, inner(y, A, x).
LinearAlgebra.normalize — Method
normalize(tn::AbstractITensorNetwork; alg="exact", kwargs...) -> AbstractITensorNetwork
Return a copy of tn rescaled so that norm(tn) ≈ 1.
The rescaling is distributed evenly across all tensors in the network (each tensor is multiplied by the same scalar factor).
Keyword Arguments
- alg="exact": Normalization algorithm. "exact" contracts ⟨ψ|ψ⟩ exactly; "bp" uses belief propagation for large networks.
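A minimal sketch (tensor values illustrative; per the contract above, the returned copy has unit norm with the rescaling spread evenly over the tensors):

```julia
using ITensorNetworks: ITensorNetwork
using ITensors: Index, ITensor
using LinearAlgebra: norm, normalize

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
psi = ITensorNetwork([ITensor(rand(2, 2), i, j), ITensor(rand(2, 2), j, k)])

psi_n = normalize(psi)  # norm(psi_n) ≈ 1
```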
See also: norm, inner.