Reference

ITensorNetworks._DensityMatrixAlgGraphType

The struct stores data used in the density matrix algorithm.

  • partition: the given tn partition
  • out_tree: the binary tree structure of the output ITensorNetwork
  • root: the root vertex of the bfs_tree for truncation
  • innerinds_to_sim: a mapping from each inner index of the tn represented by partition to a sim index
  • caches: all the cached density matrices

source
ITensorNetworks._DensityMatrixAlgCachesType

The struct contains cached density matrices and cached partial density matrices for each edge / set of edges in the tensor network.

Density matrix example: Consider the tensor network below,

        1
       / \
      9   2
         / \
        3   6
       /|   |\
      4 5   7 8

The density matrix for the edge NamedEdge(2, 3) squares the subgraph with vertices 3, 4, 5: the subtree rooted at 3 is contracted with its own conjugate, with the index on the edge (2, 3) left open on both layers. The density matrix for the edge NamedEdge(5, 3) squares the subgraph with vertices 1, 2, 3, 4, 6, 7, 8, 9 (everything except vertex 5), and the density matrix for the edge NamedEdge(4, 3) squares the subgraph with vertices 1, 2, 3, 5, 6, 7, 8, 9 (everything except vertex 4).

Partial density matrix example: Consider the same tensor network as above. The partial density matrix for the edge set Set([NamedEdge(2, 3), NamedEdge(5, 3)]) squares the subgraph with vertex 4 and contracts it with the tensor at vertex 3. The partial density matrix for the edge set Set([NamedEdge(4, 3), NamedEdge(5, 3)]) squares the subgraph with vertices 1, 2, 6, 7, 8, 9 and contracts it with the tensor at vertex 3. The partial density matrix for the edge set Set([NamedEdge(4, 3), NamedEdge(2, 3)]) squares the subgraph with vertex 5 and contracts it with the tensor at vertex 3.
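
To make "squares the subgraph" concrete, here is a minimal ITensors sketch: a single tensor stands in for the contracted subtree {3, 4, 5}, and contracting it with its conjugate leaves the index on the truncated edge open on both layers. All names below are illustrative, not taken from the actual caches.

using ITensors

l = Index(2, "edge_2_3")                # index on the edge NamedEdge(2, 3)
s4 = Index(2, "site_4")
s5 = Index(2, "site_5")
T = ITensor(randn(2, 2, 2), l, s4, s5)  # stands in for the contracted subtree {3, 4, 5}
rho = prime(T, l) * dag(T)              # density matrix with inds (l', l)
@show inds(rho)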

source
ITensorNetworks._approx_itensornetwork_ttn_svd!Method

Approximate a partition into an output ITensorNetwork with the binary tree structure defined by out_tree by first transforming the partition into a ttn, then truncating the ttn using a sequence of SVDs.

source
ITensorNetworks._binary_tree_structureMethod

Given a tn and outinds (a subset of noncommoninds of tn), get a DataGraph with binary tree structure of outinds that will be used in the binary tree partition. If maximally_unbalanced=true, the binary tree will have a line/mps structure. The binary tree is recursively constructed from leaves to the root.

Example:

TODO
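
A hypothetical sketch of the intended call pattern (the graph constructor, the siteinds overload for named graphs, and the exact signature of _binary_tree_structure are assumptions, not checked against a specific release):

using Graphs: vertices
using ITensors: noncommoninds, siteinds
using ITensorNetworks: ITensorNetworks, random_tensornetwork
using NamedGraphs: named_grid

s = siteinds("S=1/2", named_grid((2, 2)))
tn = random_tensornetwork(s; link_space=2)
outinds = noncommoninds([tn[v] for v in vertices(tn)]...)
btree = ITensorNetworks._binary_tree_structure(tn, outinds)
# Line/MPS-like structure instead of a balanced tree:
mps_tree = ITensorNetworks._binary_tree_structure(tn, outinds; maximally_unbalanced=true)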

source
ITensorNetworks._contract_deltasMethod

Given an input tensor network tn, remove redundant delta tensors in tn and change inds accordingly to make the output tn represent the same tensor network but with fewer delta tensors.

Example:

julia> is = [Index(2, string(i)) for i in 1:6]
julia> a = ITensor(is[1], is[2])
julia> b = ITensor(is[2], is[3])
julia> delta1 = delta(is[3], is[4])
julia> delta2 = delta(is[5], is[6])
julia> tn = ITensorNetwork([a, b, delta1, delta2])
julia> ITensorNetworks._contract_deltas(tn)
ITensorNetwork{Int64} with 3 vertices:
3-element Vector{Int64}:
 1
 2
 4

and 1 edge(s):
1 => 2

with vertex data:
3-element Dictionaries.Dictionary{Int64, Any}
 1 │ ((dim=2|id=457|"1"), (dim=2|id=296|"2"))
 2 │ ((dim=2|id=296|"2"), (dim=2|id=613|"4"))
 4 │ ((dim=2|id=626|"6"), (dim=2|id=237|"5"))

source
ITensorNetworks._contract_deltas_ignore_leaf_partitionsMethod

Given an input partition, contract redundant delta tensors of non-leaf vertices in partition without changing the tensor network value. root is the root of the dfs_tree that defines the leaves. Note: for each vertex v of partition, the number of non-delta tensors in partition[v] will not be changed. Note: only delta tensors of non-leaf vertices will be contracted. Note: this function assumes that all noncommoninds of the partition are in leaf partitions.

source
ITensorNetworks._delta_inds_disjointsetsMethod

Given a list of delta tensors deltas, return a DisjointSets of all their indices such that each pair of indices adjacent to any delta tensor must be in the same disjoint set. If a disjoint set contains indices in rootinds, then one such index must be the root of this set.
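
A conceptual sketch of the invariant using plain DataStructures.DisjointSets (the rootinds behaviour relies on the _root_union! helper documented below):

using DataStructures: DisjointSets, union!, find_root!
using ITensors: Index

i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
ds = DisjointSets([i, j, k])
union!(ds, i, j)   # a delta(i, j) forces i and j into one set
union!(ds, j, k)   # a delta(j, k) then merges k into the same set
@assert find_root!(ds, i) == find_root!(ds, k)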

source
ITensorNetworks._maxweightoutinds_tnMethod

Create a tn with empty ITensors whose outinds weights are MAX_WEIGHT. The maxweight_tn is constructed so that only commoninds of the tn will be considered in mincut.

source
ITensorNetworks._mincutMethod

Calculate the mincut between two subsets of the uncontracted inds (sourceinds and terminalinds) of the input tn. The mincut of two inds lists is defined as the mincut between two newly added vertices, each one neighboring one inds subset.

source
ITensorNetworks._mincut_indsMethod

Find a vector of indices within sourceinds_list yielding the mincut of the given tn_pair. Args:

  • tn_pair: a pair of tns (tn1 => tn2), where tn2 is generated via _maxweightoutinds_tn(tn1)
  • out_to_maxweight_ind: a dict mapping each out ind in tn1 to the corresponding out ind in tn2
  • sourceinds_list: a list of vectors of indices to be considered

Note: for each sourceinds in sourceinds_list, we consider its mincut within both tns (tn1, tn2) given in tn_pair. The mincut in tn1 represents the rank upper bound when splitting sourceinds from the other inds in outinds. The mincut in tn2 represents the rank upper bound when the weights of outinds are very large. The first mincut upper bounds the number of non-zero singular values, while the second empirically reveals the singular value decay. We output the sourceinds whose first mincut value is minimal, whose second mincut value is also minimal under the condition that the first mincut is optimal, and which have the lowest all-pair shortest path.

source
ITensorNetworks._partitionMethod

Given an input tn and a rooted binary tree of indices, return a partition of tn with the same binary tree structure as inds_btree. Note: in the output partition, we add multiple delta tensors to the network so that the output graph is guaranteed to be the same binary tree as inds_btree. Note: in the output partition, we add multiple scalar tensors. These tensors are used to make the output partition connected, even if the input tn is disconnected. Note: in the output partition, tensor vertex names will be changed. For a given input tensor with vertex name v, its name in the output partition will be (v, 1). Any delta tensor will have name (v, 2), and any scalar tensor used to maintain the connectivity of the partition will have name (v, 3). Note: for a given binary tree with n indices, the output partition will contain 2n-1 vertices, with each leaf vertex corresponding to a sub tn adjacent to one output index. Keeping these leaf vertices in the partition makes later approx_itensornetwork algorithms more efficient. Note: names of vertices in the output partition are the same as the names of vertices in inds_btree.

source
ITensorNetworks._rem_leaf_vertices!Method

For a given ITensorNetwork tn and a root vertex, remove leaf vertices in the directed tree with root root without changing the tensor represented by tn. In particular, the tensor of each leaf vertex is contracted with the tensor of its parent vertex to keep the tensor unchanged.

source
ITensorNetworks._rem_vertex!Method

Perform truncation and remove root vertex in the partition and out_tree of alg_graph.

Example: Consider an alg_graph whose out_tree is shown below:

        1
       / \
      9   2
         / \
        3   6
       /|   |\
      4 5   7 8

When root = 4, the output out_tree will be

        1
       / \
      9   2
         / \
        3   6
        |   |\
        5   7 8

and the returned tensor U will be the projector at vertex 4 in the output tn.

source
ITensorNetworks._root_union!Method

Rewrite of the function DataStructures.root_union!(s::DisjointSets{T}, x::T, y::T). The difference is that in the output of _root_union!, x is guaranteed to be the root of y when setting left_root=true, and y will be the root of x when setting left_root=false. In DataStructures.root_union!, the root value cannot be specified. A specified root is useful in functions such as _remove_deltas, where when we union two indices into one disjoint set, we want the index that is in the outinds of the given tensor network to always be the root in the DisjointSets.
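
A sketch of why a caller-specified root matters; the call shape of this internal helper is assumed from the description above:

using DataStructures: DisjointSets, find_root!
using ITensorNetworks: ITensorNetworks

ds = DisjointSets(["out_ind", "inner_ind"])
# With left_root=true, "out_ind" is pinned as the representative, so later
# lookups of "inner_ind" resolve to the network's output index:
ITensorNetworks._root_union!(ds, "out_ind", "inner_ind"; left_root=true)
@assert find_root!(ds, "inner_ind") == "out_ind"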

source
ITensorNetworks._simMethod

Replace the inds of partial_dm_tensor that are in the keys of inds_to_siminds with the corresponding values, and replace the inds that are in the values of inds_to_siminds with the corresponding keys.

source
ITensorNetworks._update!Method

Update caches.e_to_dm[e] and caches.es_to_pdm[es].

  • caches: the caches of the density matrix algorithm
  • edge: the edge defining the density matrix
  • children: the children vertices of dst(edge) in the dfs_tree
  • network: the tensor network at vertex dst(edge)
  • inds_to_sim: a dict mapping inds to sim inds

source
ITensorNetworks.addMethod

Add two ITensorNetworks together by growing the bond dimension. The network structures need to have the same vertex names and the same site index on each vertex.
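
A usage sketch, assuming named_grid from NamedGraphs, the siteinds overload for named graphs, and that + dispatches to add for networks with matching structure:

using ITensors: siteinds
using ITensorNetworks: random_tensornetwork
using NamedGraphs: named_grid

s = siteinds("S=1/2", named_grid((2, 2)))
a = random_tensornetwork(s; link_space=2)
b = random_tensornetwork(s; link_space=3)
c = a + b   # bond dimensions grow to 2 + 3 = 5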

source
ITensorNetworks.alternating_updateMethod
tdvp(Hs::Vector{MPO},init_state::MPS,t::Number; kwargs...)
tdvp(Hs::Vector{MPO},init_state::MPS,t::Number, sweeps::Sweeps; kwargs...)

Use the time dependent variational principle (TDVP) algorithm to compute exp(t*H)*init_state using an efficient algorithm based on alternating optimization of the MPS tensors and local Krylov exponentiation of H.

This version of tdvp accepts a representation of H as a Vector of MPOs, Hs = [H1,H2,H3,...] such that H is defined as H = H1+H2+H3+... Note that this sum of MPOs is not actually computed; rather the set of MPOs [H1,H2,H3,..] is efficiently looped over at each step of the algorithm when optimizing the MPS.

Returns:

  • state::MPS - time-evolved MPS
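
A usage sketch; the ITensorMPS names (OpSum, MPO, random_mps) and the argument order (Hs, init_state, t), which follows the signature shown above, are assumptions about the surrounding setup rather than part of this docstring:

using ITensors, ITensorMPS

s = siteinds("S=1/2", 10)
os1 = OpSum(); os1 += "Sz", 1, "Sz", 2
os2 = OpSum(); os2 += "Sx", 5
H1, H2 = MPO(os1, s), MPO(os2, s)
psi0 = random_mps(s; linkdims=4)
# H = H1 + H2 is never formed; the MPOs are looped over at each step.
psi_t = tdvp([H1, H2], psi0, -0.1im)
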
source
ITensorNetworks.contract_approxFunction

Approximate a given ITensorNetwork tn into an output ITensorNetwork with output_structure. output_structure outputs a directed binary tree DataGraph defining the desired graph structure.

source
ITensorNetworks.contract_approxMethod

Approximate a binary_tree_partition into an output ITensorNetwork with the same binary tree structure. root is the root vertex of the pre-order depth-first-search traversal used to perform the truncations.

source
ITensorNetworks.contract_approxMethod

Approximate a given ITensorNetwork tn into an output ITensorNetwork with a binary tree structure. The binary tree structure is defined based on inds_btree, which is a directed binary tree DataGraph of indices.

source
ITensorNetworks.contraction_sequence_to_graphMethod

Take a contraction_sequence and return a graphical representation of it. The leaves of the graph represent the leaves of the sequence, whilst the internal nodes of the graph define a tripartition of the graph and are thus named as n = 3 element tuples, with each element specifying the keys involved. Edges connect parents/children within the contraction sequence.

source
ITensorNetworks.delta_networkMethod

Return a tensor network with copy tensors on each vertex. Note that passing a link_space will mean the indices of the resulting network don't match those of the input IndsNetwork.

source
ITensorNetworks.fidelityMethod

Calculate the overlap of the gate acting on the previous p and q versus the new p and q in the presence of environments. This is the cost function that optimise_p_q will minimise.

source
ITensorNetworks.optimise_p_qMethod

Do full update sweeping, optimising the tensors p and q in the presence of the environments envs. Specifically, this function finds the p_cur and q_cur which optimise envs * gate * p * q * dag(prime(p_cur)) * dag(prime(q_cur)).

source
ITensorNetworks.path_graph_structureMethod

Given a tn and outinds (a subset of noncommoninds of tn), output a maximally unbalanced directed binary tree DataGraph of outinds defining the desired graph structure.

source
ITensorNetworks.random_tensornetworkMethod

Build an ITensor network on a graph specified by the inds network s. Bond dimension is given by link_space and entries are randomized. The random distribution is based on the input argument distribution.
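
A sketch of the distribution-controlled variant; passing the distribution as the leading argument is an assumption based on the docstring, not a checked signature:

using Distributions: Uniform
using ITensors: siteinds
using ITensorNetworks: random_tensornetwork
using NamedGraphs: named_grid

s = siteinds("S=1/2", named_grid((2, 2)))
tn = random_tensornetwork(Uniform(-1, 1), s; link_space=3)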

source
ITensorNetworks.random_tensornetworkMethod

Build an ITensor network on a graph specified by the inds network s. Bond dimension is given by link_space and entries are randomised (normal distribution, mean 0, std 1).

source
ITensorNetworks.tdvpMethod
tdvp(operator::TTN, t::Number, init_state::TTN; kwargs...)

Use the time dependent variational principle (TDVP) algorithm to approximately compute exp(operator*t)*init_state using an efficient algorithm based on alternating optimization of the state tensors and local Krylov exponentiation of operator. The time parameter t can be a real or complex number.

Returns:

  • state - time-evolved state

Optional keyword arguments:

  • time_step::Number = t - time step to use when evolving the state. Smaller time steps generally give more accurate results but can make the algorithm take more computational time to run.
  • nsteps::Integer - evolve by the requested total time t by performing nsteps of the TDVP algorithm. More steps can result in more accurate results but require more computational time to run. (Note that only one of the time_step or nsteps parameters can be provided, not both.)
  • outputlevel::Int = 1 - larger outputlevel values result in printing more information; 0 means no output
  • observer - object implementing the Observer interface which can perform measurements and stop early
  • write_when_maxdim_exceeds::Int - when the allowed maxdim exceeds this value, begin saving tensors to disk to free memory in large calculations
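
A usage sketch on a small tree; named_comb_tree, random_ttn, and the ModelHamiltonians helper are assumptions about the surrounding API, not part of this docstring:

using ITensors: siteinds
using ITensorNetworks: ITensorNetworks, ttn, random_ttn, tdvp
using NamedGraphs: named_comb_tree

g = named_comb_tree((3, 2))
s = siteinds("S=1/2", g)
H = ttn(ITensorNetworks.ModelHamiltonians.heisenberg(g), s)
psi0 = random_ttn(s; link_space=4)
psi_t = tdvp(H, -0.05im, psi0; nsteps=10, outputlevel=0)
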
source
ITensorNetworks.to_vecMethod
to_vec(x)

Transform x into a Vector. Returns the vector and a closure which inverts the transformation.

Modeled after FiniteDifferences.to_vec:

https://github.com/JuliaDiff/FiniteDifferences.jl/blob/main/src/to_vec.jl
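
A sketch of the round-trip contract; whether this method accepts an ITensorNetwork input is an assumption:

using ITensors: siteinds
using ITensorNetworks: to_vec, random_tensornetwork
using NamedGraphs: named_grid

tn = random_tensornetwork(siteinds("S=1/2", named_grid((2, 2))); link_space=2)
v, from_vec = to_vec(tn)   # v is a flat Vector of the tensor entries
tn2 = from_vec(v)          # inverts the transformation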

source
ITensorNetworks.ttnMethod
ttn(os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)
ttn(eltype::Type{<:Number}, os::OpSum, sites::IndsNetwork{<:Index}; kwargs...)

Convert an OpSum object os to a TreeTensorNetwork, with indices given by sites.
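
A sketch of the conversion, assuming OpSum terms keyed by the tree's vertex names and the Graphs interface on named graphs:

using Graphs: vertices, edges, src, dst
using ITensors: OpSum, siteinds
using ITensorNetworks: ttn
using NamedGraphs: named_comb_tree

g = named_comb_tree((2, 3))
s = siteinds("S=1/2", g)
os = OpSum()
for e in edges(g)
  os += "Sz", src(e), "Sz", dst(e)   # nearest-neighbour couplings
end
for v in vertices(g)
  os += 0.5, "Sx", v                 # transverse field
end
H = ttn(os, s)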

source
ITensorNetworks.ttn_svdMethod
ttn_svd(os::OpSum, sites::IndsNetwork, root_vertex; kwargs...)

Construct a TreeTensorNetwork from a symbolic OpSum representation of a Hamiltonian, compressing shared interaction channels.

source
ITensorNetworks.updateMethod

Do parallel updates between groups of edges of all message tensors. Currently we send the full message tensor data struct to update for each edge_group, but really we only need the mts relevant to that group.

source
ITensors.applyMethod

Apply function for an ITN in the Vidal gauge; hence the bond tensors are required. A gate does not necessarily need to be passed: an edge can be supplied to do an identity update instead. Uses the simple update procedure, assuming the gate is two-site.

source
KrylovKit.linsolveFunction
linsolve(
    A::ITensorNetworks.AbstractTreeTensorNetwork,
    b::ITensorNetworks.AbstractTreeTensorNetwork,
    x₀::ITensorNetworks.AbstractTreeTensorNetwork;
    ...
)
linsolve(
    A::ITensorNetworks.AbstractTreeTensorNetwork,
    b::ITensorNetworks.AbstractTreeTensorNetwork,
    x₀::ITensorNetworks.AbstractTreeTensorNetwork,
    a₀::Number;
    ...
)
linsolve(
    A::ITensorNetworks.AbstractTreeTensorNetwork,
    b::ITensorNetworks.AbstractTreeTensorNetwork,
    x₀::ITensorNetworks.AbstractTreeTensorNetwork,
    a₀::Number,
    a₁::Number;
    updater,
    nsites,
    nsweeps,
    updater_kwargs,
    kwargs...
)

Compute a solution x to the linear system:

(a₀ + a₁ * A)*x = b

using starting guess x₀. Leaving a₀, a₁ set to their default values solves the system A*x = b.

To adjust the balance between accuracy of solution and speed of the algorithm, it is recommended to first try adjusting the solver_tol keyword argument described below.

Keyword arguments:

  • ishermitian::Bool=false - should be set to true if the MPO A is Hermitian
  • solver_krylovdim::Int=30 - max number of Krylov vectors to build on each solver iteration
  • solver_maxiter::Int=100 - max number of outer iterations (restarts) to do in the solver step
  • solver_tol::Float64=1E-14 - tolerance or error goal of the solver

Overload of KrylovKit.linsolve.
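
A usage sketch of the (A, b, x₀, a₀, a₁) form; the TTN construction mirrors the ttn/random_ttn entries above (named_comb_tree and the ModelHamiltonians helper are assumptions), and the keyword names follow the list just given:

using ITensors: siteinds
using ITensorNetworks: ITensorNetworks, ttn, random_ttn
using KrylovKit: linsolve
using NamedGraphs: named_comb_tree

g = named_comb_tree((2, 3))
s = siteinds("S=1/2", g)
A = ttn(ITensorNetworks.ModelHamiltonians.heisenberg(g), s)
b = random_ttn(s; link_space=4)
x0 = random_ttn(s; link_space=4)
# Solve (1.0 + 0.5 * A) * x = b starting from x0:
x = linsolve(A, b, x0, 1.0, 0.5; nsweeps=10, solver_tol=1e-10)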

source
ITensorNetworks.ModelNetworks.ising_networkMethod

Build the partition function Z of the classical Ising model on a given graph at inverse temperature beta, with Hamiltonian

H = -\sum_{(v,v') \in edges} \sigma^{z}_{v} \sigma^{z}_{v'}

Optional arguments: h: external magnetic field. szverts: a list of vertices over which to apply a Sz; the resulting network can then be contracted and divided by the actual partition function to get that observable. The IndsNetwork is assumed to be built from a graph (no site inds) and of link space 2.
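
A usage sketch; the positional/keyword layout is assumed from the description above (an IndsNetwork with link space 2 and no site indices):

using ITensorNetworks: IndsNetwork
using ITensorNetworks.ModelNetworks: ising_network
using NamedGraphs: named_grid

beta = 0.4
s = IndsNetwork(named_grid((3, 3)); link_space=2)
tn = ising_network(s, beta)                             # partition function Z
tn_sz = ising_network(s, beta; h=0.1, szverts=[(2, 2)]) # numerator for <sigma^z> at (2, 2)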

source