API Reference

Complete listing of all documented public functions and types in ITensorNetworks.jl, ITensorNetworks.ModelNetworks, and ITensorNetworks.ModelHamiltonians.

ITensorNetworks.ITensorNetworkType
ITensorNetwork{V}

A tensor network where each vertex holds an ITensor. The network graph is a NamedGraph{V} and edges represent shared indices between neighboring tensors.

Constructors

From an IndsNetwork (most common):

ITensorNetwork(is::IndsNetwork; link_space = 1)
ITensorNetwork(f, is::IndsNetwork; link_space = 1)
ITensorNetwork(eltype, undef, is::IndsNetwork; link_space = 1)
  • With no function argument f, tensors are initialized to zero.
  • With a function f(v) that returns a state label (e.g. "Up", "Dn") or an ITensor constructor, tensors are initialized accordingly.
  • link_space controls the bond-index dimension (default 1).

From a graph (site indices inferred as trivial):

ITensorNetwork(graph::AbstractNamedGraph; link_space = 1)
ITensorNetwork(f, graph::AbstractNamedGraph; link_space = 1)

From a collection of ITensors:

ITensorNetwork(ts::AbstractVector{ITensor})
ITensorNetwork(vs, ts::AbstractVector{ITensor})
ITensorNetwork(ts::AbstractVector{<:Pair{<:Any, ITensor}})
ITensorNetwork(ts::AbstractDict{<:Any, ITensor})

Edges are inferred from shared indices between tensors.
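For example, a two-tensor network can be assembled from tensors that already share an index (a minimal sketch; the index names are illustrative):

```julia
julia> using ITensors: Index, random_itensor

julia> i = Index(2, "i"); j = Index(2, "j");

julia> a = random_itensor(i);

julia> b = random_itensor(i, j);

julia> tn = ITensorNetwork([a, b]);
```

Here the edge between vertices 1 and 2 is inferred from the shared index i.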

From a single ITensor:

ITensorNetwork(t::ITensor)

Wraps the tensor in a single-vertex network.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((4,));

julia> s = siteinds("S=1/2", g);

julia> tn = ITensorNetwork(s; link_space = 2);

julia> tn = ITensorNetwork("Up", s);

See also: IndsNetwork, ttn, TreeTensorNetwork.

source
ITensorNetworks.ITensorNetworkMethod
ITensorNetwork(tn::TreeTensorNetwork) -> ITensorNetwork

Convert a TreeTensorNetwork to a plain ITensorNetwork, discarding orthogonality metadata. The returned network shares the same underlying tensor data.

See also: TreeTensorNetwork, ttn.
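A sketch of the conversion, using random_ttn to build a tree state first:

```julia
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

julia> tn = ITensorNetwork(psi);
```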

source
ITensorNetworks.TreeTensorNetworkType
TreeTensorNetwork{V} <: AbstractTreeTensorNetwork{V}

A tensor network whose underlying graph is a tree. In addition to the tensor data, it tracks an ortho_region: the set of vertices that currently form the orthogonality center of the network.

TTN is an alias for TreeTensorNetwork.

Use ttn or mps to construct instances, and orthogonalize to bring the network into a canonical gauge.

See also: ITensorNetwork, ttn, mps, random_ttn.

source
ITensorNetworks.TreeTensorNetworkMethod
TreeTensorNetwork(tn::ITensorNetwork; ortho_region=vertices(tn)) -> TreeTensorNetwork

Construct a TreeTensorNetwork from an ITensorNetwork with tree graph structure.

The ortho_region keyword specifies which vertices currently form the orthogonality center. By default all vertices are included, meaning no particular gauge is assumed. To enforce an actual orthogonal gauge, call orthogonalize afterward.

Throws an error if the underlying graph of tn is not a tree.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> using Graphs: vertices

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> itn = ITensorNetwork(s; link_space = 2);

julia> root_vertex = first(vertices(itn));

julia> ttn_state = TreeTensorNetwork(itn; ortho_region = [root_vertex]);

See also: ttn, ITensorNetwork, orthogonalize.

source
Base.:+Method
+(tn1::AbstractTreeTensorNetwork, tn2::AbstractTreeTensorNetwork; alg="directsum", kwargs...) -> TreeTensorNetwork

Add two tree tensor networks by growing the bond dimension, returning a network that represents the state tn1 + tn2. The bond dimension of the result is the sum of the bond dimensions of the two inputs.

Use truncate afterward to compress the resulting network.

Keyword Arguments

  • alg="directsum": Algorithm for combining the networks. Currently only "directsum" is supported for trees.

Both networks must share the same graph structure and site indices.
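A sketch of addition followed by recompression (the bond dimensions are illustrative; the result of the sum has bond dimension 5 on each internal edge):

```julia
julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi1 = random_ttn(s; link_space = 2);

julia> psi2 = random_ttn(s; link_space = 3);

julia> psi = psi1 + psi2;

julia> psi = truncate(psi; cutoff = 1e-10);
```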

See also: add, truncate.

source
Base.truncateMethod
truncate(tn::AbstractITensorNetwork, edge; kwargs...) -> ITensorNetwork

Truncate the bond across edge in tn by performing an SVD and discarding small singular values. edge may be an AbstractEdge or a Pair of vertices.

Truncation parameters are passed as keyword arguments and forwarded to ITensors.svd:

  • cutoff: Drop singular values smaller than this threshold.
  • maxdim: Maximum number of singular values to keep.
  • mindim: Minimum number of singular values to keep.

This operates on a single bond. For TreeTensorNetwork, the no-argument form truncate(ttn; kwargs...) sweeps all bonds and is generally preferred for full recompression after addition or subspace expansion.
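A sketch of truncating a single bond, passing the edge as a Pair of vertices (named_grid labels vertices by tuples such as (1,) and (2,)):

```julia
julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((3,));

julia> s = siteinds("S=1/2", g);

julia> tn = random_tensornetwork(s; link_space = 4);

julia> tn = truncate(tn, (1,) => (2,); cutoff = 1e-8, maxdim = 2);
```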

See also: Base.truncate(::AbstractTreeTensorNetwork).

source
Base.truncateMethod
truncate(tn::AbstractTreeTensorNetwork; root_vertex=..., kwargs...) -> TreeTensorNetwork

Truncate the bond dimensions of tn by sweeping from the leaves toward root_vertex and performing an SVD-based truncation on each bond.

Before truncating each bond the relevant subtree is first orthogonalized (controlled truncation), ensuring that discarded singular values correspond to actual truncation error. Truncation parameters are passed through kwargs.

Keyword Arguments

  • root_vertex: Root of the DFS traversal. Defaults to default_root_vertex(tn).
  • cutoff: Drop singular values smaller than this threshold (relative or absolute).
  • maxdim: Maximum number of singular values to retain on each bond.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 4);

julia> psi_trunc = truncate(psi; cutoff = 1e-10, maxdim = 2);

See also: orthogonalize.

source
ITensorNetworks.addMethod
add(tn1::AbstractITensorNetwork, tn2::AbstractITensorNetwork) -> ITensorNetwork

Add two ITensorNetworks together by taking their direct sum (growing the bond dimension). The result represents the state tn1 + tn2, with bond dimension on each edge equal to the sum of the bond dimensions of tn1 and tn2.

Both networks must have the same vertex set and matching site indices at each vertex.

Use truncate on the result to compress back to a lower bond dimension.
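A sketch of the direct sum of two random networks on a small grid (the result has bond dimension 4 on each edge):

```julia
julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((2, 2));

julia> s = siteinds("S=1/2", g);

julia> tn1 = random_tensornetwork(s; link_space = 2);

julia> tn2 = random_tensornetwork(s; link_space = 2);

julia> tn = add(tn1, tn2);
```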

See also: Base.:+ for TreeTensorNetwork, truncate.

source
ITensorNetworks.addMethod
add(tns::AbstractTreeTensorNetwork...; kwargs...) -> TreeTensorNetwork

Add tree tensor networks together by growing the bond dimension. Equivalent to +(tns...).

See also: +(tns...), truncate.

source
ITensorNetworks.dmrgMethod
dmrg(operator, init_state; kwargs...) -> (eigenvalue, state)

Find the lowest eigenvalue and eigenvector of operator using the Density Matrix Renormalization Group (DMRG) algorithm. This is an alias for eigsolve.

See eigsolve for the full description of arguments and keyword arguments.

Example

energy, psi = dmrg(H, psi0;
    nsweeps = 10,
    nsites = 2,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50)
)
source
ITensorNetworks.eigsolveMethod
eigsolve(operator, init_state; nsweeps, nsites=1, factorize_kwargs, sweep_kwargs...) -> (eigenvalue, state)

Find the lowest eigenvalue and corresponding eigenvector of operator using a DMRG-like sweep algorithm on a TreeTensorNetwork.

Arguments

  • operator: The operator to diagonalize, typically a TreeTensorNetwork representing a Hamiltonian constructed from an OpSum (e.g. via ttn(opsum, sites)).
  • init_state: Initial guess for the eigenvector as a TreeTensorNetwork.
  • nsweeps: Number of sweeps over the network.
  • nsites=1: Number of sites optimized simultaneously per local update step (1 or 2).
  • factorize_kwargs: Keyword arguments controlling bond truncation after each local solve, e.g. (; cutoff=1e-10, maxdim=50).
  • outputlevel=0: Level of output to print (0 = no output, 1 = sweep-level information, 2 = step details).

Returns

A tuple (eigenvalue, state) where eigenvalue is the converged lowest eigenvalue and state is the optimized TreeTensorNetwork eigenvector.

Example

energy, psi = eigsolve(H, psi0;
    nsweeps = 10,
    nsites = 2,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
    outputlevel = 1
)

See also: dmrg, time_evolve.

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::Op; alg="bp", kwargs...) -> Number

Compute the expectation value ⟨ψ|op|ψ⟩ / ⟨ψ|ψ⟩ for a single ITensors.Op object.

The default algorithm is belief propagation ("bp"); use alg="exact" for exact contraction.
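A sketch of evaluating a single Op, assuming grid vertices are labeled by tuples such as (1,):

```julia
julia> using ITensors: Op

julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((4,));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

julia> sz1 = expect(psi, Op("Sz", (1,)));
```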

See also: expect(ψ, op::String).

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::String, vertices; alg="bp", kwargs...) -> Dictionary

Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at each vertex in vertices.

See expect(ψ, op::String) for full documentation.

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::String; alg="bp", kwargs...) -> Dictionary

Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at every vertex of ψ.

Arguments

  • ψ: The tensor network state.
  • op: Name of the local operator (e.g. "Sz", "N", "Sx"), passed to ITensors.op.
  • alg="bp": Contraction algorithm. "bp" uses belief propagation (efficient for loopy or large networks); "exact" performs full contraction.

Keyword Arguments (alg="bp" only)

  • cache!: Optional Ref to a pre-built belief propagation cache. If provided, the cache is reused across multiple expect calls for efficiency.
  • update_cache=true: Whether to update the cache before computing expectation values.

Returns

A Dictionary mapping each vertex of ψ to its expectation value.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((4,));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

julia> sz = expect(psi, "Sz");

julia> sz_exact = expect(psi, "Sz"; alg = "exact");

See also: expect(ψ, op::String, vertices), expect(operator, state::AbstractTreeTensorNetwork).

source
ITensorNetworks.expectMethod
expect(operator::String, state::AbstractTreeTensorNetwork; vertices=vertices(state), root_vertex=...) -> Dictionary

Compute local expectation values ⟨state|op_v|state⟩ / ⟨state|state⟩ for each vertex v in vertices using exact contraction via successive orthogonalization.

The state is normalized before computing expectation values. The operator name is passed to ITensors.op; each vertex must carry exactly one site index.

Arguments

  • operator: Name of the local operator, e.g. "Sz", "N", "Sx".
  • state: The tree tensor network state.
  • vertices: Subset of vertices at which to evaluate the operator. Defaults to all vertices.
  • root_vertex: Root used for the DFS traversal order.

Returns

A Dictionary mapping each vertex to its (real-typed) expectation value.

Example

sz = expect("Sz", psi)
sz_sub = expect("Sz", psi; vertices = [1, 3, 5])

See also: expect(ψ, op::String) for general ITensorNetwork states with belief propagation support.

source
ITensorNetworks.loginnerMethod
loginner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute log(⟨ϕ|ψ⟩) in a numerically stable way by accumulating logarithms during contraction rather than computing the inner product directly.

Useful when the inner product would overflow or underflow in floating-point arithmetic.

Keyword Arguments

  • alg="bp": Contraction algorithm, "bp" (default) or "exact".
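A sketch computing the log of a squared norm via loginner:

```julia
julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> g = named_grid((4,));

julia> s = siteinds("S=1/2", g);

julia> psi = random_tensornetwork(s; link_space = 2);

julia> logz = loginner(psi, psi; alg = "exact");
```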

See also: inner, lognorm.

source
ITensorNetworks.mpsMethod
mps(args...; ortho_region=nothing) -> TreeTensorNetwork

Construct a matrix product state (MPS) as a TreeTensorNetwork on a 1D path graph. The interface is identical to ttn but is intended for 1D (chain) topologies.

See also: ttn, random_mps.

source
ITensorNetworks.mpsMethod
mps(f, is::Vector{<:Index}; kwargs...) -> TreeTensorNetwork

Construct a matrix product state (MPS) from a function f and a flat vector of site indices is. The indices are arranged on a 1D path graph automatically.

Example

julia> s = siteinds("S=1/2", 6);

julia> psi = mps(v -> "Up", s);
source
ITensorNetworks.orthogonalizeMethod
orthogonalize(ttn::AbstractTreeTensorNetwork, region; kwargs...) -> TreeTensorNetwork

Bring ttn into an orthogonal gauge with orthogonality center at region. region may be a single vertex or a vector of vertices.

QR decompositions are applied along the unique tree path from the current ortho_region to region, so that all tensors outside region are left- or right-orthogonal with respect to that path.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> using Graphs: vertices

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

julia> vs = collect(vertices(psi));

julia> psi = orthogonalize(psi, vs[1]);

julia> psi = orthogonalize(psi, [vs[1], vs[2]]);

See also: ortho_region, truncate.

source
ITensorNetworks.random_mpsMethod
random_mps(args...; kwargs...) -> TreeTensorNetwork

Construct a random, unit-norm matrix product state (MPS) as a TreeTensorNetwork. Arguments are forwarded to random_ttn.

Example

julia> s = siteinds("S=1/2", 6);

julia> psi = random_mps(s; link_space = 2);

See also: mps, random_ttn.

source
ITensorNetworks.random_mpsMethod
random_mps(f, is::Vector{<:Index}; kwargs...) -> TreeTensorNetwork

Construct a random MPS from a function f and a flat vector of site indices is.

source
ITensorNetworks.random_mpsMethod
random_mps(s::Vector{<:Index}; kwargs...) -> TreeTensorNetwork

Construct a random MPS from a flat vector of site indices s.

source
ITensorNetworks.random_ttnMethod
random_ttn(args...; kwargs...) -> TreeTensorNetwork

Construct a random, unit-norm TreeTensorNetwork. Arguments are forwarded to random_tensornetwork, which accepts the same interface as ITensorNetwork.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

See also: ttn, random_mps.

source
ITensorNetworks.time_evolveMethod
time_evolve(operator, time_points, init_state; sweep_kwargs...) -> state

Time-evolve init_state under operator using the Time-Dependent Variational Principle (TDVP) algorithm.

The state is evolved from t=0 through the successive time points in time_points. The operator should represent the Hamiltonian H; internally the evolution exp(-i H t) is applied approximately via a local solver.

Arguments

  • operator: The Hamiltonian as a tensor network operator (e.g. built from an OpSum).
  • time_points: A vector (or range) of time values. Can be real or complex.
  • init_state: The initial tensor network state.

Keyword Arguments

  • nsites=2: Number of sites optimized per local update (1 or 2).
  • order=4: Order of the TDVP sweep pattern and time step increments.
  • factorize_kwargs: Keyword arguments for bond truncation, e.g. (; cutoff=1e-10, maxdim=50).
  • outputlevel=0: Verbosity level (0=silent, 1=print after each time step).
  • solver_kwargs: Additional keyword arguments forwarded to the local solver (time stepping algorithm).

Returns

The evolved state at last(time_points).

Example

times = 0.1:0.1:1.0
psi_t = time_evolve(H, times, psi0;
    nsites = 2,
    order = 4,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
    outputlevel = 1
)
source
ITensorNetworks.ttnMethod
ttn(args...; ortho_region=nothing) -> TreeTensorNetwork

Construct a TreeTensorNetwork (TTN) using the same interface as ITensorNetwork. All positional and keyword arguments are forwarded to the ITensorNetwork constructor.

If ortho_region is not specified, no particular gauge is assumed. Call orthogonalize to impose a gauge.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> psi = ttn(v -> "Up", s);

See also: mps, random_ttn, TreeTensorNetwork.

source
ITensorNetworks.ttnMethod
ttn(a::ITensor, is::IndsNetwork; ortho_region=..., kwargs...) -> TreeTensorNetwork

Decompose a dense ITensor a into a TreeTensorNetwork with the tree structure described by the IndsNetwork is.

Successive QR/SVD factorizations are applied following a post-order DFS traversal from the root vertex, then the network is orthogonalized to ortho_region (defaults to the root). Extra kwargs (e.g. cutoff, maxdim) are forwarded to the factorization.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> using ITensors: ITensors

julia> g = named_comb_tree((3, 1));

julia> s = siteinds("S=1/2", g);

julia> A = ITensors.random_itensor(only(s[(1, 1)]), only(s[(2, 1)]), only(s[(3, 1)]));

julia> ttn_A = ttn(A, s);
source
ITensorNetworks.ttnMethod
ttn(os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
ttn(eltype::Type{<:Number}, os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)

Convert an OpSum object os to a TreeTensorNetwork, with indices given by sites.
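A minimal sketch of building a nearest-neighbor Sz–Sz Hamiltonian on a comb tree (iterating over edges via the Graphs interface is an assumption for illustration):

```julia
julia> using ITensors: OpSum

julia> using Graphs: edges, src, dst

julia> using NamedGraphs.NamedGraphGenerators: named_comb_tree

julia> g = named_comb_tree((2, 2));

julia> s = siteinds("S=1/2", g);

julia> os = OpSum();

julia> for e in edges(g)
           os += "Sz", src(e), "Sz", dst(e)
       end

julia> H = ttn(os, s);
```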

source
ITensorNetworks.update_iterationMethod

Perform parallel updates of message tensors between groups of edges. Currently the full message-tensor data structure is passed to the update for each edge group, even though only the message tensors relevant to that group are needed.

source
ITensors.innerMethod
inner(ϕ::AbstractITensorNetwork, A::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute the matrix element ⟨ϕ|A|ψ⟩ where A is a tensor network operator.

Keyword Arguments

  • alg="bp": Contraction algorithm. "bp" (default) or "exact".

See also: inner(ϕ, ψ).

source
ITensors.innerMethod
inner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute the inner product ⟨ϕ|ψ⟩ by contracting the combined bra-ket network.

Keyword Arguments

  • alg="bp": Contraction algorithm. "bp" uses belief propagation (default, efficient for large or loopy networks); "exact" uses full contraction with an optimized sequence.

See also: loginner, norm, inner(ϕ, A, ψ).

source
ITensors.innerMethod
inner(x::AbstractTreeTensorNetwork, y::AbstractTreeTensorNetwork) -> Number

Compute the inner product ⟨x|y⟩ by contracting the bra-ket network using a post-order DFS traversal rooted at a default root vertex.

Both networks must have the same graph structure and compatible site indices.

See also: loginner, norm, inner(y, A, x).

source
LinearAlgebra.normalizeMethod
normalize(tn::AbstractITensorNetwork; alg="exact", kwargs...) -> AbstractITensorNetwork

Return a copy of tn rescaled so that norm(tn) ≈ 1.

The rescaling is distributed evenly across all tensors in the network (each tensor is multiplied by the same scalar factor).

Keyword Arguments

  • alg="exact": Normalization algorithm. "exact" contracts ⟨ψ|ψ⟩ exactly; "bp" uses belief propagation for large networks.

Example

julia> using NamedGraphs.NamedGraphGenerators: named_grid

julia> using LinearAlgebra: norm

julia> g = named_grid((4,));

julia> s = siteinds("S=1/2", g);

julia> psi = random_ttn(s; link_space = 2);

julia> psi = normalize(psi);

julia> norm(psi) ≈ 1
true

See also: norm, inner.

source