API Reference

Complete listing of all documented public functions and types in ITensorNetworks.jl.

ITensorNetworks.ITensorNetworkType
ITensorNetwork{V}

A tensor network where each vertex holds an ITensor. Storage is split across three fields:

  • graph::NamedGraph{V} — the network's graph (V is the vertex type),
  • vertex_data::Dictionary{V, ITensor} — the tensor at each vertex,
  • ind_to_vertices::Dict{Index, Set{V}} — reverse map from each Index to the vertices it appears in.

The reverse map keeps vertex lookup by shared Index O(1) and enforces the graph-edge ↔ shared-Index invariant: every Index appears at either one vertex (an external / site index) or two (a bond), and every graph edge corresponds to exactly the pair of vertices sharing at least one Index. Hyperedges (an Index shared by three or more vertices) are rejected.

Construction

ITensorNetwork(tensors)
ITensorNetwork{V}(tensors)

tensors is any collection where keys(tensors) are vertex labels and values(tensors) are the ITensors at those vertices (e.g. a Dict, a Dictionary, or a Vector{ITensor} with linear-index vertex labels). Edges are inferred from shared Indexes.

Example

julia> using ITensors: Index, ITensor

julia> i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k");

julia> tn = ITensorNetwork([ITensor(i, j), ITensor(j, k)]);

See also: IndsNetwork, TreeTensorNetwork.

source
ITensorNetworks.TreeTensorNetworkMethod
TreeTensorNetwork(tensors) -> TreeTensorNetwork

Construct a TreeTensorNetwork from any collection of tensors accepted by ITensorNetwork (e.g. a Dict, Dictionary, a Vector{ITensor}, or another AbstractITensorNetwork). Edges are inferred from shared Indexes; the underlying graph must be a tree.

The result starts with ortho_region == vertices(tn) — i.e. no particular gauge is assumed. Use orthogonalize to bring the network into a canonical gauge centered at a chosen vertex or region.

Example

julia> using ITensors: Index, ITensor

julia> i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k");

julia> ttn = TreeTensorNetwork([ITensor(i, j), ITensor(j, k)]);

See also: ITensorNetwork, orthogonalize.

source
ITensorNetworks.TreeTensorNetworkMethod
TreeTensorNetwork(os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
TreeTensorNetwork(eltype::Type{<:Number}, os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)

Convert an OpSum object os to a TreeTensorNetwork, with indices given by sites.
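Example

A sketch of typical usage, assuming the named_grid graph constructor from NamedGraphs and a siteinds method that builds one site Index per graph vertex (treat these helper names as illustrative):

```julia
using Graphs: edges, src, dst
using ITensors: OpSum
using NamedGraphs: named_grid
using ITensorNetworks: TreeTensorNetwork, siteinds

g = named_grid((4,))                  # a 4-vertex path graph (a tree)
sites = siteinds("S=1/2", g)          # one site Index per vertex

os = OpSum()
for e in edges(g)
    os += "Sz", src(e), "Sz", dst(e)  # nearest-neighbor Ising-type coupling
end

H = TreeTensorNetwork(os, sites)      # Hamiltonian as a tree tensor network
```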

source
Base.:+Method
+(tn1::AbstractTreeTensorNetwork, tn2::AbstractTreeTensorNetwork; alg="directsum", kwargs...) -> TreeTensorNetwork

Add two tree tensor networks by growing the bond dimension, returning a network that represents the state tn1 + tn2. The bond dimension of the result is the sum of the bond dimensions of the two inputs.

Use truncate afterward to compress the resulting network.

Keyword Arguments

  • alg="directsum": Algorithm for combining the networks. Currently only "directsum" is supported for trees.

Both networks must share the same graph structure and site indices.
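Example

A minimal usage sketch, where ttn1 and ttn2 stand for two tree tensor networks with matching graphs and site indices:

```julia
ttn3 = ttn1 + ttn2                     # bond dimensions add on every edge
ttn3 = truncate(ttn3; cutoff = 1e-10)  # compress the direct-sum result
```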

See also: add, truncate.

source
Base.truncateMethod
truncate(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork

Truncate the bond across edge in tn by performing an SVD and discarding small singular values. edge may be an AbstractEdge or a Pair of vertices.

This is the user-facing verb for bond compression; it delegates to left_orth, which performs the underlying left-orthogonal gauge transformation. Truncation parameters are passed as keyword arguments and forwarded to ITensors.svd:

  • cutoff: Drop singular values smaller than this threshold.
  • maxdim: Maximum number of singular values to keep.
  • mindim: Minimum number of singular values to keep.

This operates on a single bond. For TreeTensorNetwork, the no-argument form truncate(tn; kwargs...) sweeps all bonds and is generally preferred for full recompression after addition or subspace expansion.
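Example

A sketch of compressing a single bond, assuming tn has adjacent vertices labeled 1 and 2:

```julia
tn = truncate(tn, 1 => 2; cutoff = 1e-12, maxdim = 20)
```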

See also: left_orth, Base.truncate(::AbstractTreeTensorNetwork).

source
Base.truncateMethod
truncate(tn::AbstractTreeTensorNetwork; root_vertex=..., kwargs...) -> TreeTensorNetwork

Truncate the bond dimensions of tn by sweeping from the leaves toward root_vertex and performing an SVD-based truncation on each bond.

Before each bond is truncated, the relevant subtree is first orthogonalized (a controlled truncation), ensuring that the discarded singular values correspond to the actual truncation error. Truncation parameters are passed through kwargs.

Keyword Arguments

  • root_vertex: Root of the DFS traversal. Defaults to default_root_vertex(tn).
  • cutoff: Drop singular values smaller than this threshold (relative or absolute).
  • maxdim: Maximum number of singular values to retain on each bond.
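
Example

A sketch of a full recompression after addition, where psi1 and psi2 stand for two compatible TreeTensorNetworks:

```julia
psi = psi1 + psi2                                  # bond dimensions add
psi = truncate(psi; cutoff = 1e-10, maxdim = 100)  # sweep over all bonds
```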

See also: orthogonalize.

source
ITensorNetworks.addMethod
add(tn1::AbstractITensorNetwork, tn2::AbstractITensorNetwork) -> ITensorNetwork

Add two ITensorNetworks together by taking their direct sum (growing the bond dimension). The result represents the state tn1 + tn2, with bond dimension on each edge equal to the sum of the bond dimensions of tn1 and tn2.

Both networks must have the same vertex set and matching site indices at each vertex.

Use truncate on the result to compress back to a lower bond dimension.
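Example

A minimal sketch, with tn1 and tn2 standing for two ITensorNetworks on the same vertex set with matching site indices:

```julia
tn3 = add(tn1, tn2)   # direct sum: bond dimensions add edge by edge
```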

See also: Base.:+ for TreeTensorNetwork, truncate.

source
ITensorNetworks.addMethod
add(tns::AbstractTreeTensorNetwork...; kwargs...) -> TreeTensorNetwork

Add tree tensor networks together by growing the bond dimension. Equivalent to +(tns...).

See also: Base.:+, truncate.

source
ITensorNetworks.dmrgMethod
dmrg(operator, init_state; kwargs...) -> (eigenvalue, state)

Find the lowest eigenvalue and corresponding eigenvector of operator using the Density Matrix Renormalization Group (DMRG) algorithm. This is an alias for eigsolve.

See eigsolve for the full description of arguments and keyword arguments.

Example

energy, psi = dmrg(H, psi0;
    nsweeps = 10,
    nsites = 2,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50)
)
source
ITensorNetworks.eigsolveMethod
eigsolve(operator, init_state; nsweeps, nsites=1, factorize_kwargs, sweep_kwargs...) -> (eigenvalue, state)

Find the lowest eigenvalue and corresponding eigenvector of operator using a DMRG-like sweep algorithm on a TreeTensorNetwork.

Arguments

  • operator: The operator to diagonalize, typically a TreeTensorNetwork representing a Hamiltonian constructed from an OpSum (e.g. via TreeTensorNetwork(opsum, sites)).
  • init_state: Initial guess for the eigenvector as a TreeTensorNetwork.
  • nsweeps: Number of sweeps over the network.
  • nsites=1: Number of sites optimized simultaneously per local update step (1 or 2).
  • factorize_kwargs: Keyword arguments controlling bond truncation after each local solve, e.g. (; cutoff=1e-10, maxdim=50).
  • outputlevel=0: Level of output to print (0 = no output, 1 = sweep-level information, 2 = step details).

Returns

A tuple (eigenvalue, state) where eigenvalue is the converged lowest eigenvalue and state is the optimized TreeTensorNetwork eigenvector.

Example

energy, psi = eigsolve(H, psi0;
    nsweeps = 10,
    nsites = 2,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
    outputlevel = 1
)

See also: dmrg, time_evolve.

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::Op; alg="bp", kwargs...) -> Number

Compute the expectation value ⟨ψ|op|ψ⟩ / ⟨ψ|ψ⟩ for a single ITensors.Op object.

The default algorithm is belief propagation ("bp"); use alg="exact" for exact contraction.
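Example

A sketch assuming ψ carries spin-half site indices and has a vertex labeled 1:

```julia
using ITensors: Op

sz1 = expect(ψ, Op("Sz", 1))                       # belief propagation (default)
sz1_exact = expect(ψ, Op("Sz", 1); alg = "exact")  # full contraction
```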

See also: expect(ψ, op::String).

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::String, vertices; alg="bp", kwargs...) -> Dictionary

Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at each vertex in vertices.

See expect(ψ, op::String) for full documentation.

source
ITensorNetworks.expectMethod
expect(ψ::AbstractITensorNetwork, op::String; alg="bp", kwargs...) -> Dictionary

Compute local expectation values ⟨ψ|op_v|ψ⟩ / ⟨ψ|ψ⟩ for the operator named op at every vertex of ψ.

Arguments

  • ψ: The tensor network state.
  • op: Name of the local operator (e.g. "Sz", "N", "Sx"), passed to ITensors.op.
  • alg="bp": Contraction algorithm. "bp" uses belief propagation (efficient for loopy or large networks); "exact" performs full contraction.

Keyword Arguments (alg="bp" only)

  • cache!: Optional Ref to a pre-built belief propagation cache. If provided, the cache is reused across multiple expect calls for efficiency.
  • update_cache=true: Whether to update the cache before computing expectation values.

Returns

A Dictionary mapping each vertex of ψ to its expectation value.
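Example

A sketch assuming ψ is a spin-half network state:

```julia
sz = expect(ψ, "Sz")                     # Dictionary: vertex => ⟨Sz⟩
sz_exact = expect(ψ, "Sz"; alg = "exact")
```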

See also: expect(ψ, op::String, vertices), expect(operator, state::AbstractTreeTensorNetwork).

source
ITensorNetworks.expectMethod
expect(operator::String, state::AbstractTreeTensorNetwork; vertices=vertices(state), root_vertex=...) -> Dictionary

Compute local expectation values ⟨state|op_v|state⟩ / ⟨state|state⟩ for each vertex v in vertices using exact contraction via successive orthogonalization.

The state is normalized before computing expectation values. The operator name is passed to ITensors.op; each vertex must carry exactly one site index.

Arguments

  • operator: Name of the local operator, e.g. "Sz", "N", "Sx".
  • state: The tree tensor network state.
  • vertices: Subset of vertices at which to evaluate the operator. Defaults to all vertices.
  • root_vertex: Root used for the DFS traversal order.

Returns

A Dictionary mapping each vertex to its (real-typed) expectation value.

Example

sz = expect("Sz", psi)
sz_sub = expect("Sz", psi; vertices = [1, 3, 5])

See also: expect(ψ, op::String) for general ITensorNetwork states with belief propagation support.

source
ITensorNetworks.left_orth!Method
left_orth!(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork

Mutate tn so the bond at edge is left-orthogonal: SVD-factorize tn[src(edge)] into U * S * V, replace tn[src(edge)] with the left-isometric U, and absorb S * V into tn[dst(edge)]. edge may be an AbstractEdge or a Pair of vertices.

kwargs... are forwarded to ITensors.svd, so passing cutoff / maxdim / mindim truncates the bond in addition to gauging it. For a non-truncating left-orthogonal gauge, prefer LinearAlgebra.qr!(tn, edge), which uses QR instead of SVD.
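Example

A sketch on a network with adjacent vertices labeled 1 and 2:

```julia
left_orth!(tn, 1 => 2)                  # gauge the bond, no truncation
left_orth!(tn, 1 => 2; cutoff = 1e-12)  # gauge and truncate the bond
```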

See also: left_orth, Base.truncate(::AbstractITensorNetwork, ::AbstractEdge).

source
ITensorNetworks.left_orthMethod
left_orth(tn::AbstractITensorNetwork, edge; kwargs...) -> AbstractITensorNetwork

Non-mutating version of left_orth!: return a copy of tn with the bond at edge made left-orthogonal.

source
ITensorNetworks.loginnerMethod
loginner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute log(⟨ϕ|ψ⟩) in a numerically stable way by accumulating logarithms during contraction rather than computing the inner product directly.

Useful when the inner product would overflow or underflow in floating-point arithmetic.

Keyword Arguments

  • alg="bp": Contraction algorithm, "bp" (default) or "exact".
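
Example

A sketch, with ϕ and ψ standing for compatible network states:

```julia
logz = loginner(ψ, ψ)                       # log ⟨ψ|ψ⟩ without overflow/underflow
logoverlap = loginner(ϕ, ψ; alg = "exact")  # exact contraction variant
```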

See also: inner, lognorm.

source
ITensorNetworks.orthogonalizeMethod
orthogonalize(ttn::AbstractTreeTensorNetwork, region) -> TreeTensorNetwork

Bring ttn into an orthogonal gauge with orthogonality center at region. region may be a single vertex or a vector of vertices.

QR decompositions are applied along the unique tree path from the current ortho_region to region, so that all tensors outside region are left- or right-orthogonal with respect to that path.
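Example

A sketch assuming ttn has vertices labeled 3 and 4:

```julia
ttn = orthogonalize(ttn, 3)       # orthogonality center at vertex 3
ttn = orthogonalize(ttn, [3, 4])  # center on a two-vertex region
```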

See also: ortho_region, truncate.

source
ITensorNetworks.time_evolveMethod
time_evolve(operator, time_points, init_state; sweep_kwargs...) -> state

Time-evolve init_state under operator using the Time-Dependent Variational Principle (TDVP) algorithm.

The state is evolved from t=0 through the successive time points in time_points. The operator should represent the Hamiltonian H; internally the evolution exp(-i H t) is applied via a "local solver".

Arguments

  • operator: The Hamiltonian as a tensor network operator (e.g. built from an OpSum).
  • time_points: A vector (or range) of time values. Can be real or complex.
  • init_state: The initial tensor network state.

Keyword Arguments

  • nsites=2: Number of sites optimized per local update (1 or 2).
  • order=4: Order of the TDVP sweep pattern and time step increments.
  • factorize_kwargs: Keyword arguments for bond truncation, e.g. (; cutoff=1e-10, maxdim=50).
  • outputlevel=0: Verbosity level (0=silent, 1=print after each time step).
  • solver_kwargs: Additional keyword arguments forwarded to the local solver (time stepping algorithm).

Returns

The evolved state at last(time_points).

Example

times = 0.1:0.1:1.0
psi_t = time_evolve(H, times, psi0;
    nsites = 2,
    order = 4,
    factorize_kwargs = (; cutoff = 1e-10, maxdim = 50),
    outputlevel = 1
)
source
ITensorNetworks.update_iterationMethod

Perform parallel updates between groups of edges of all message tensors. Currently the full message-tensor data structure is sent to update for each edge_group, although only the message tensors relevant to that group are actually needed.

source
ITensors.innerMethod
inner(ϕ::AbstractITensorNetwork, A::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute the matrix element ⟨ϕ|A|ψ⟩ where A is a tensor network operator.

Keyword Arguments

  • alg="bp": Contraction algorithm. "bp" (default) or "exact".

See also: inner(ϕ, ψ).

source
ITensors.innerMethod
inner(ϕ::AbstractITensorNetwork, ψ::AbstractITensorNetwork; alg="bp", kwargs...) -> Number

Compute the inner product ⟨ϕ|ψ⟩ by contracting the combined bra-ket network.

Keyword Arguments

  • alg="bp": Contraction algorithm. "bp" uses belief propagation (default, efficient for large or loopy networks); "exact" uses full contraction with an optimized sequence.
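
Example

A sketch, with ϕ and ψ standing for two states on the same graph with matching site indices:

```julia
z = inner(ϕ, ψ)                     # ⟨ϕ|ψ⟩ via belief propagation
z_exact = inner(ϕ, ψ; alg = "exact")
```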

See also: loginner, norm, inner(ϕ, A, ψ).

source
ITensors.innerMethod
inner(x::AbstractTreeTensorNetwork, y::AbstractTreeTensorNetwork) -> Number

Compute the inner product ⟨x|y⟩ by contracting the bra-ket network using a post-order DFS traversal of the tree rooted at a default root vertex.

Both networks must have the same graph structure and compatible site indices.

See also: loginner, norm, inner(y, A, x).

source
LinearAlgebra.normalizeMethod
normalize(tn::AbstractITensorNetwork; alg="exact", kwargs...) -> AbstractITensorNetwork

Return a copy of tn rescaled so that norm(tn) ≈ 1.

The rescaling is distributed evenly across all tensors in the network (each tensor is multiplied by the same scalar factor).

Keyword Arguments

  • alg="exact": Normalization algorithm. "exact" contracts ⟨ψ|ψ⟩ exactly; "bp" uses belief propagation for large networks.
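
Example

A minimal sketch, with ψ standing for a network state:

```julia
ψn = normalize(ψ)                 # exact contraction of ⟨ψ|ψ⟩ (default)
ψn_bp = normalize(ψ; alg = "bp")  # belief-propagation alternative
```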

See also: norm, inner.

source