API Reference
Complete listing of all documented public functions and types in ITensorNetworks.jl, ITensorNetworks.ModelNetworks, and ITensorNetworks.ModelHamiltonians.
ITensorNetworks.ITensorNetwork — Type
ITensorNetwork{V}
A tensor network where each vertex holds an ITensor. The network graph is a NamedGraph{V} and edges represent shared indices between neighboring tensors.
Constructors
From an IndsNetwork (most common):
ITensorNetwork(is::IndsNetwork; link_space = 1)
ITensorNetwork(f, is::IndsNetwork; link_space = 1)
ITensorNetwork(eltype, undef, is::IndsNetwork; link_space = 1)
- With no function argument f, tensors are initialized to zero.
- With a function f(v) that returns a state label (e.g. "Up", "Dn") or an ITensor constructor, tensors are initialized accordingly.
link_space controls the bond-index dimension (default 1).
From a graph (site indices inferred as trivial):
ITensorNetwork(graph::AbstractNamedGraph; link_space = 1)
ITensorNetwork(f, graph::AbstractNamedGraph; link_space = 1)
From a collection of ITensors:
ITensorNetwork(ts::AbstractVector{ITensor})
ITensorNetwork(vs, ts::AbstractVector{ITensor})
ITensorNetwork(ts::AbstractVector{<:Pair{<:Any, ITensor}})
ITensorNetwork(ts::AbstractDict{<:Any, ITensor})
Edges are inferred from shared indices between tensors.
From a single ITensor:
ITensorNetwork(t::ITensor)
Wraps the tensor in a single-vertex network.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_grid
julia> g = named_grid((4,));
julia> s = siteinds("S=1/2", g);
julia> tn = ITensorNetwork(s; link_space = 2);
julia> tn = ITensorNetwork("Up", s);
See also: IndsNetwork, ttn, TreeTensorNetwork.
ITensorNetworks.ITensorNetwork — Method
ITensorNetwork(tn::TreeTensorNetwork) -> ITensorNetwork
Convert a TreeTensorNetwork to a plain ITensorNetwork, discarding orthogonality metadata. The returned network shares the same underlying tensor data.
See also: TreeTensorNetwork, ttn.
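A minimal round-trip sketch of the conversion (the graph shape and link_space here are illustrative choices, not requirements of the API):

```julia
using ITensorNetworks
using NamedGraphs.NamedGraphGenerators: named_comb_tree

g = named_comb_tree((2, 2))
s = siteinds("S=1/2", g)
psi = random_ttn(s; link_space = 2)  # a TreeTensorNetwork with ortho_region metadata
itn = ITensorNetwork(psi)            # plain ITensorNetwork sharing the same tensor data
```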
ITensorNetworks.TreeTensorNetwork — Type
TreeTensorNetwork{V} <: AbstractTreeTensorNetwork{V}
A tensor network whose underlying graph is a tree. In addition to the tensor data, it tracks an ortho_region: the set of vertices that currently form the orthogonality center of the network.
TTN is an alias for TreeTensorNetwork.
Use ttn or mps to construct instances, and orthogonalize to bring the network into a canonical gauge.
See also: ITensorNetwork, ttn, mps, random_ttn.
ITensorNetworks.TreeTensorNetwork — Method
TreeTensorNetwork(tn::ITensorNetwork; ortho_region=vertices(tn)) -> TreeTensorNetwork
Construct a TreeTensorNetwork from an ITensorNetwork with tree graph structure.
The ortho_region keyword specifies which vertices currently form the orthogonality center. By default all vertices are included, meaning no particular gauge is assumed. To enforce an actual orthogonal gauge, call orthogonalize afterward.
Throws an error if the underlying graph of tn is not a tree.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> using Graphs: vertices
julia> g = named_comb_tree((2, 2));
julia> s = siteinds("S=1/2", g);
julia> itn = ITensorNetwork(s; link_space = 2);
julia> root_vertex = first(vertices(itn));
julia> ttn_state = TreeTensorNetwork(itn; ortho_region = [root_vertex]);
See also: ttn, ITensorNetwork, orthogonalize.
Base.:+ — Method
+(tn1::AbstractTreeTensorNetwork, tn2::AbstractTreeTensorNetwork; alg="directsum", kwargs...) -> TreeTensorNetwork
Add two tree tensor networks by growing the bond dimension, returning a network that represents the state tn1 + tn2. The bond dimension of the result is the sum of the bond dimensions of the two inputs.
Use truncate afterward to compress the resulting network.
Keyword Arguments
alg="directsum": Algorithm for combining the networks. Currently only "directsum" is supported for trees.
Both networks must share the same graph structure and site indices.
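A short sketch of addition followed by recompression (graph shape and bond dimensions are illustrative):

```julia
using ITensorNetworks
using NamedGraphs.NamedGraphGenerators: named_comb_tree

g = named_comb_tree((2, 2))
s = siteinds("S=1/2", g)
psi1 = random_ttn(s; link_space = 2)
psi2 = random_ttn(s; link_space = 3)
phi = psi1 + psi2                    # bond dimension 2 + 3 = 5 on each edge
phi = truncate(phi; cutoff = 1e-12)  # compress the direct sum back down
```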
Base.truncate — Method
truncate(tn::AbstractITensorNetwork, edge; kwargs...) -> ITensorNetwork
Truncate the bond across edge in tn by performing an SVD and discarding small singular values. edge may be an AbstractEdge or a Pair of vertices.
Truncation parameters are passed as keyword arguments and forwarded to ITensors.svd:
cutoff: Drop singular values smaller than this threshold.
maxdim: Maximum number of singular values to keep.
mindim: Minimum number of singular values to keep.
This operates on a single bond. For TreeTensorNetwork, the no-argument form truncate(ttn; kwargs...) sweeps all bonds and is generally preferred for full recompression after addition or subspace expansion.
See also: Base.truncate(::AbstractTreeTensorNetwork).
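A minimal single-bond truncation sketch (the edge choice is arbitrary; the random network and its parameters are illustrative):

```julia
using ITensorNetworks
using Graphs: edges
using NamedGraphs.NamedGraphGenerators: named_grid

g = named_grid((4,))
s = siteinds("S=1/2", g)
tn = random_tensornetwork(s; link_space = 4)
e = first(edges(tn))                               # pick one bond of the network
tn2 = truncate(tn, e; cutoff = 1e-12, maxdim = 2)  # SVD-truncate that bond only
```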
Base.truncate — Method
truncate(tn::AbstractTreeTensorNetwork; root_vertex=..., kwargs...) -> TreeTensorNetwork
Truncate the bond dimensions of tn by sweeping from the leaves toward root_vertex and performing an SVD-based truncation on each bond.
Before truncating each bond the relevant subtree is first orthogonalized (controlled truncation), ensuring that discarded singular values correspond to actual truncation error. Truncation parameters are passed through kwargs.
Keyword Arguments
root_vertex: Root of the DFS traversal. Defaults to default_root_vertex(tn).
cutoff: Drop singular values smaller than this threshold (relative or absolute).
maxdim: Maximum number of singular values to retain on each bond.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> g = named_comb_tree((2, 2));
julia> s = siteinds("S=1/2", g);
julia> psi = random_ttn(s; link_space = 4);
julia> psi_trunc = truncate(psi; cutoff = 1e-10, maxdim = 2);
See also: orthogonalize.
ITensorNetworks.add — Method
add(tn1::AbstractITensorNetwork, tn2::AbstractITensorNetwork) -> ITensorNetwork
Add two ITensorNetworks together by taking their direct sum (growing the bond dimension). The result represents the state tn1 + tn2, with bond dimension on each edge equal to the sum of the bond dimensions of tn1 and tn2.
Both networks must have the same vertex set and matching site indices at each vertex.
Use truncate on the result to compress back to a lower bond dimension.
See also: Base.:+ for TreeTensorNetwork, truncate.
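A sketch on a loopy grid, where this generic add applies (graph shape and bond dimensions are illustrative):

```julia
using ITensorNetworks
using NamedGraphs.NamedGraphGenerators: named_grid

g = named_grid((2, 2))  # a 2x2 grid, which contains a loop
s = siteinds("S=1/2", g)
psi1 = random_tensornetwork(s; link_space = 2)
psi2 = random_tensornetwork(s; link_space = 2)
psi12 = add(psi1, psi2)  # direct sum: bond dimension 4 on each edge
```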
ITensorNetworks.add — Method
add(tns::AbstractTreeTensorNetwork...; kwargs...) -> TreeTensorNetwork
Add tree tensor networks together by growing the bond dimension. Equivalent to +(tns...).
ITensorNetworks.dmrg — Method
dmrg(operator, init_state; kwargs...) -> (eigenvalue, state)
Find the lowest eigenvalue and eigenvector of operator using the Density Matrix Renormalization Group (DMRG) algorithm. This is an alias for eigsolve.
See eigsolve for the full description of arguments and keyword arguments.
Example
energy, psi = dmrg(H, psi0;
nsweeps = 10,
nsites = 2,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50)
)
ITensorNetworks.eigsolve — Method
eigsolve(operator, init_state; nsweeps, nsites=1, factorize_kwargs, sweep_kwargs...) -> (eigenvalue, state)
Find the lowest eigenvalue and corresponding eigenvector of operator using a DMRG-like sweep algorithm on a TreeTensorNetwork.
Arguments
operator: The operator to diagonalize, typically a TreeTensorNetwork representing a Hamiltonian constructed from an OpSum (e.g. via ttn(opsum, sites)).
init_state: Initial guess for the eigenvector as a TreeTensorNetwork.
nsweeps: Number of sweeps over the network.
nsites=1: Number of sites optimized simultaneously per local update step (1 or 2).
factorize_kwargs: Keyword arguments controlling bond truncation after each local solve, e.g. (; cutoff=1e-10, maxdim=50).
outputlevel=0: Level of output to print (0 = no output, 1 = sweep-level information, 2 = step details).
Returns
A tuple (eigenvalue, state) where eigenvalue is the converged lowest eigenvalue and state is the optimized TreeTensorNetwork eigenvector.
Example
energy, psi = eigsolve(H, psi0;
nsweeps = 10,
nsites = 2,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
outputlevel = 1
)
See also: dmrg, time_evolve.
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::Op; alg="bp", kwargs...) -> Number
Compute the expectation value ⟨ψ|op|ψ⟩ / ⟨ψ|ψ⟩ for a single ITensors.Op object.
The default algorithm is belief propagation ("bp"); use alg="exact" for exact contraction.
See also: expect(ψ, op::String).
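A sketch, assuming Op is taken from ITensors.Ops and is tagged with one of the network's vertices (the graph and parameters are illustrative):

```julia
using ITensorNetworks
using ITensors.Ops: Op
using Graphs: vertices
using NamedGraphs.NamedGraphGenerators: named_grid

g = named_grid((4,))
s = siteinds("S=1/2", g)
psi = random_tensornetwork(s; link_space = 2)
v = first(vertices(psi))
sz_v = expect(psi, Op("Sz", v))  # ⟨ψ|Sz_v|ψ⟩ / ⟨ψ|ψ⟩ via belief propagation
```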
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::String, vertices; alg="bp", kwargs...) -> Dictionary
Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at each vertex in vertices.
See expect(ψ, op::String) for full documentation.
ITensorNetworks.expect — Method
expect(ψ::AbstractITensorNetwork, op::String; alg="bp", kwargs...) -> Dictionary
Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at every vertex of ψ.
Arguments
ψ: The tensor network state.
op: Name of the local operator (e.g. "Sz", "N", "Sx"), passed to ITensors.op.
alg="bp": Contraction algorithm. "bp" uses belief propagation (efficient for loopy or large networks); "exact" performs full contraction.
Keyword Arguments (alg="bp" only)
cache!: Optional Ref to a pre-built belief propagation cache. If provided, the cache is reused across multiple expect calls for efficiency.
update_cache=true: Whether to update the cache before computing expectation values.
Returns
A Dictionary mapping each vertex of ψ to its expectation value.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_grid
julia> g = named_grid((4,));
julia> s = siteinds("S=1/2", g);
julia> psi = random_ttn(s; link_space = 2);
julia> sz = expect(psi, "Sz");
julia> sz_exact = expect(psi, "Sz"; alg = "exact");
See also: expect(ψ, op::String, vertices), expect(operator, state::AbstractTreeTensorNetwork).
ITensorNetworks.expect — Method
expect(operator::String, state::AbstractTreeTensorNetwork; vertices=vertices(state), root_vertex=...) -> Dictionary
Compute local expectation values ⟨state|op_v|state⟩ / ⟨state|state⟩ for each vertex v in vertices using exact contraction via successive orthogonalization.
The state is normalized before computing expectation values. The operator name is passed to ITensors.op; each vertex must carry exactly one site index.
Arguments
operator: Name of the local operator, e.g. "Sz", "N", "Sx".
state: The tree tensor network state.
vertices: Subset of vertices at which to evaluate the operator. Defaults to all vertices.
root_vertex: Root used for the DFS traversal order.
Returns
A Dictionary mapping each vertex to its (real-typed) expectation value.
Example
sz = expect("Sz", psi)
sz_sub = expect("Sz", psi; vertices = [1, 3, 5])
See also: expect(ψ, op::String) for general ITensorNetwork states with belief propagation support.
ITensorNetworks.loginner — Method
loginner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute log(⟨ϕ|ψ⟩) in a numerically stable way by accumulating logarithms during contraction rather than computing the inner product directly.
Useful when the inner product would overflow or underflow in floating-point arithmetic.
Keyword Arguments
alg="bp": Contraction algorithm, "bp" (default) or "exact".
See also: inner, lognorm.
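A sketch of recovering an overlap through its logarithm (the graph and parameters are illustrative; for many-site states the direct overlap can underflow while its logarithm stays representable):

```julia
using ITensorNetworks
using NamedGraphs.NamedGraphGenerators: named_grid

g = named_grid((6,))
s = siteinds("S=1/2", g)
phi = random_tensornetwork(s; link_space = 2)
psi = random_tensornetwork(s; link_space = 2)
logov = loginner(phi, psi)  # log⟨ϕ|ψ⟩, complex in general
ov = exp(logov)             # matches inner(phi, psi) when no under/overflow occurs
```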
ITensorNetworks.mps — Method
mps(args...; ortho_region=nothing) -> TreeTensorNetwork
Construct a matrix product state (MPS) as a TreeTensorNetwork on a 1D path graph. The interface is identical to ttn but is intended for 1D (chain) topologies.
See also: ttn, random_mps.
ITensorNetworks.mps — Method
mps(f, is::Vector{<:Index}; kwargs...) -> TreeTensorNetwork
Construct a matrix product state (MPS) from a function f and a flat vector of site indices is. The indices are arranged on a 1D path graph automatically.
Example
julia> s = siteinds("S=1/2", 6);
julia> psi = mps(v -> "Up", s);
ITensorNetworks.ortho_region — Method
ortho_region(tn::TreeTensorNetwork) -> Indices
Return the set of vertices that currently form the orthogonality center of tn.
See also: orthogonalize.
ITensorNetworks.orthogonalize — Method
orthogonalize(ttn::AbstractTreeTensorNetwork, region; kwargs...) -> TreeTensorNetwork
Bring ttn into an orthogonal gauge with orthogonality center at region. region may be a single vertex or a vector of vertices.
QR decompositions are applied along the unique tree path from the current ortho_region to region, so that all tensors outside region are left- or right-orthogonal with respect to that path.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> using Graphs: vertices
julia> g = named_comb_tree((2, 2));
julia> s = siteinds("S=1/2", g);
julia> psi = random_ttn(s; link_space = 2);
julia> vs = collect(vertices(psi));
julia> psi = orthogonalize(psi, vs[1]);
julia> psi = orthogonalize(psi, [vs[1], vs[2]]);
See also: ortho_region, truncate.
ITensorNetworks.random_mps — Method
random_mps(args...; kwargs...) -> TreeTensorNetwork
Construct a random, unit-norm matrix product state (MPS) as a TreeTensorNetwork. Arguments are forwarded to random_ttn.
Example
julia> s = siteinds("S=1/2", 6);
julia> psi = random_mps(s; link_space = 2);
See also: mps, random_ttn.
ITensorNetworks.random_mps — Method
random_mps(f, is::Vector{<:Index}; kwargs...) -> TreeTensorNetwork
Construct a random MPS from a function f and a flat vector of site indices is.
ITensorNetworks.random_mps — Method
random_mps(s::Vector{<:Index}; kwargs...) -> TreeTensorNetwork
Construct a random MPS from a flat vector of site indices s.
ITensorNetworks.random_ttn — Method
random_ttn(args...; kwargs...) -> TreeTensorNetwork
Construct a random, unit-norm TreeTensorNetwork. Arguments are forwarded to random_tensornetwork, which accepts the same interface as ITensorNetwork.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> g = named_comb_tree((2, 2));
julia> s = siteinds("S=1/2", g);
julia> psi = random_ttn(s; link_space = 2);
See also: ttn, random_mps.
ITensorNetworks.time_evolve — Method
time_evolve(operator, time_points, init_state; sweep_kwargs...) -> state
Time-evolve init_state under operator using the Time-Dependent Variational Principle (TDVP) algorithm.
The state is evolved from t=0 through the successive time points in time_points. The operator should represent the Hamiltonian H; internally the evolution exp(-i H t) is applied via a "local solver".
Arguments
operator: The Hamiltonian as a tensor network operator (e.g. built from an OpSum).
time_points: A vector (or range) of time values. Can be real or complex.
init_state: The initial tensor network state.
Keyword Arguments
nsites=2: Number of sites optimized per local update (1 or 2).
order=4: Order of the TDVP sweep pattern and time step increments.
factorize_kwargs: Keyword arguments for bond truncation, e.g. (; cutoff=1e-10, maxdim=50).
outputlevel=0: Verbosity level (0 = silent, 1 = print after each time step).
solver_kwargs: Additional keyword arguments forwarded to the local solver (time stepping algorithm).
Returns
The evolved state at last(time_points).
Example
times = 0.1:0.1:1.0
psi_t = time_evolve(H, times, psi0;
nsites = 2,
order = 4,
factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
outputlevel = 1
)
ITensorNetworks.ttn — Method
ttn(args...; ortho_region=nothing) -> TreeTensorNetwork
Construct a TreeTensorNetwork (TTN) using the same interface as ITensorNetwork. All positional and keyword arguments are forwarded to the ITensorNetwork constructor.
If ortho_region is not specified, no particular gauge is assumed. Call orthogonalize to impose a gauge.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> g = named_comb_tree((2, 2));
julia> s = siteinds("S=1/2", g);
julia> psi = ttn(v -> "Up", s);
See also: mps, random_ttn, TreeTensorNetwork.
ITensorNetworks.ttn — Method
ttn(a::ITensor, is::IndsNetwork; ortho_region=..., kwargs...) -> TreeTensorNetwork
Decompose a dense ITensor a into a TreeTensorNetwork with the tree structure described by the IndsNetwork is.
Successive QR/SVD factorizations are applied following a post-order DFS traversal from the root vertex, then the network is orthogonalized to ortho_region (defaults to the root). Extra kwargs (e.g. cutoff, maxdim) are forwarded to the factorization.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree
julia> using ITensors: ITensors
julia> g = named_comb_tree((3, 1));
julia> s = siteinds("S=1/2", g);
julia> A = ITensors.random_itensor(only(s[(1, 1)]), only(s[(2, 1)]), only(s[(3, 1)]));
julia> ttn_A = ttn(A, s);
ITensorNetworks.ttn — Method
ttn(os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
ttn(eltype::Type{<:Number}, os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
Convert an OpSum object os to a TreeTensorNetwork, with indices given by sites.
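For example, a Heisenberg Hamiltonian on a comb tree can be built as follows (a sketch; the graph shape and coupling terms are illustrative):

```julia
using ITensorNetworks
using ITensors: OpSum
using Graphs: edges, src, dst
using NamedGraphs.NamedGraphGenerators: named_comb_tree

g = named_comb_tree((2, 2))
s = siteinds("S=1/2", g)
os = OpSum()
for e in edges(g)
  os += 1 / 2, "S+", src(e), "S-", dst(e)
  os += 1 / 2, "S-", src(e), "S+", dst(e)
  os += "Sz", src(e), "Sz", dst(e)
end
H = ttn(os, s)  # Hamiltonian as a TreeTensorNetwork operator
```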
ITensorNetworks.update — Method
A more generic interface for update, with default parameters.
ITensorNetworks.update_factors — Method
Update the tensor network inside the cache out-of-place.
ITensorNetworks.update_iteration — Method
Perform parallel updates between groups of edges of all message tensors. Currently the full message-tensor data structure is passed to update for each edge group, although only the message tensors relevant to that group are needed.
ITensorNetworks.update_iteration — Method
Perform a sequential update of the message tensors on the edges.
ITensors.inner — Method
inner(ϕ::AbstractITensorNetwork, A::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute the matrix element ⟨ϕ|A|ψ⟩ where A is a tensor network operator.
Keyword Arguments
alg="bp": Contraction algorithm. "bp" (default) or "exact".
See also: inner(ϕ, ψ).
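A sketch of evaluating an energy expectation as a matrix element, assuming a Hamiltonian built via the OpSum method of ttn (graph shape and coupling terms are illustrative):

```julia
using ITensorNetworks
using ITensors: OpSum
using Graphs: edges, src, dst
using NamedGraphs.NamedGraphGenerators: named_comb_tree

g = named_comb_tree((2, 2))
s = siteinds("S=1/2", g)
os = OpSum()
for e in edges(g)
  os += "Sz", src(e), "Sz", dst(e)
end
H = ttn(os, s)
psi = random_ttn(s; link_space = 2)
energy = inner(psi, H, psi) / inner(psi, psi)  # ⟨ψ|H|ψ⟩ / ⟨ψ|ψ⟩
```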
ITensors.inner — Method
inner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number
Compute the inner product ⟨ϕ|ψ⟩ by contracting the combined bra-ket network.
Keyword Arguments
alg="bp": Contraction algorithm. "bp" uses belief propagation (default, efficient for large or loopy networks); "exact" uses full contraction with an optimized sequence.
See also: loginner, norm, inner(ϕ, A, ψ).
ITensors.inner — Method
inner(x::AbstractTreeTensorNetwork, y::AbstractTreeTensorNetwork) -> Number
Compute the inner product ⟨x|y⟩ by contracting the bra-ket network using a post-order DFS traversal rooted at a default root vertex.
Both networks must have the same graph structure and compatible site indices.
See also: loginner, norm, inner(y, A, x).
LinearAlgebra.normalize — Method
normalize(tn::AbstractITensorNetwork; alg="exact", kwargs...) -> AbstractITensorNetwork
Return a copy of tn rescaled so that norm(tn) ≈ 1.
The rescaling is distributed evenly across all tensors in the network (each tensor is multiplied by the same scalar factor).
Keyword Arguments
alg="exact": Normalization algorithm. "exact" contracts ⟨ψ|ψ⟩ exactly; "bp" uses belief propagation for large networks.
Example
julia> using NamedGraphs.NamedGraphGenerators: named_grid
julia> using LinearAlgebra: norm
julia> g = named_grid((4,));
julia> s = siteinds("S=1/2", g);
julia> psi = random_ttn(s; link_space = 2);
julia> psi = normalize(psi);
julia> norm(psi) ≈ 1
true
See also: norm, inner.