Symmetric (QN Conserving) Tensors: Background and Usage

Here is a collection of background material and example code for understanding how symmetric tensors (tensors with conserved quantum numbers) work in ITensors.jl.

Combiners and Symmetric Tensors

In ITensors.jl, combiners are special sparse tensors that represent the action of taking the tensor product of one or more indices. They generalize the idea of permuting and reshaping a tensor: for dense ITensors, a combiner simply permutes and reshapes the data of the tensor, while for symmetric tensors (quantum number conserving tensors represented as block sparse tensors), the combiner also fuses symmetry sectors together. Combiners can be used for various purposes, but they are mostly used internally in the library, for example to reshape a high-order ITensor into an order-2 ITensor in order to perform a matrix decomposition like an SVD or eigendecomposition.
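
To make the dense case concrete, here is a minimal sketch of combining two ordinary (non-QN) indices. It also uses `combinedind`, which returns the fused Index created by a combiner, and shows that contracting with `dag` of the combiner undoes the combination:

using ITensors

# Two dense (non-QN) indices:
i = Index(2, "i")
j = Index(3, "j")
A = random_itensor(i, j)

# Combining i and j amounts to a permute + reshape into a single Index of dim 6:
C = combiner(i, j; tags = "c")
Ac = A * C
@show dim(combinedind(C))  # 6 == dim(i) * dim(j)

# Contracting with dag(C) "uncombines" back to the original indices:
@show inds(Ac * dag(C))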

Here is an example of combining two QN-conserving indices:

julia> using ITensors

       # This is a short code showing how a combiner
       # can be used to combine two indices into one

julia> i = Index([QN(0) => 2, QN(1) => 3], "i")
(dim=5|id=114|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> j = Index([QN(0) => 2, QN(1) => 3], "j")
(dim=5|id=875|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> A = random_itensor(i, dag(j))
ITensor ord=2
(dim=5|id=114|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=875|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}

julia> C = combiner(i, dag(j); tags = "c", dir = dir(i))
ITensor ord=3
(dim=25|id=839|"c") <Out>
 1: QN(-1) => 6
 2: QN(0) => 13
 3: QN(1) => 6
(dim=5|id=114|"i") <In>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=875|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.Combiner

julia> inds(A)
((dim=5|id=114|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3, (dim=5|id=875|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3)

julia> inds(A * C)
((dim=25|id=839|"c") <Out>
 1: QN(-1) => 6
 2: QN(0) => 13
 3: QN(1) => 6,)

You can see that the combiner reshapes the indices of A into a single Index containing the tensor product of the two input spaces. The fused space has sectors QN(-1) => 2 * 3 = 6, QN(0) => 2 * 2 + 3 * 3 = 13, and QN(1) => 3 * 2 = 6, determined from all combinations of the sectors of the input indices: the QNs are added (taking the directions of the indices into account) and the block dimensions are multiplied. The ordering of the fused sectors is determined internally by ITensors.jl.
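
If you want to check the fused space programmatically, here is a minimal sketch using `combinedind` (which returns the fused Index created by a combiner) and `nblocks`:

using ITensors

i = Index([QN(0) => 2, QN(1) => 3], "i")
j = Index([QN(0) => 2, QN(1) => 3], "j")
C = combiner(i, dag(j); tags = "c", dir = dir(i))

# The fused Index produced by the combiner:
c = combinedind(C)

@show dim(c)      # 25 == dim(i) * dim(j)
@show nblocks(c)  # 3 fused sectors: QN(-1), QN(0), and QN(1)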

You can also use a combiner on a single Index, which can be helpful for changing the direction of an Index or combining multiple sectors of the same symmetry into a single sector:

julia> using ITensors

       # This is a short code showing how a combiner
       # can be used to "flip" the direction of an Index

julia> i = Index([QN(0) => 2, QN(1) => 3], "i")
(dim=5|id=801|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> j = dag(Index([QN(0) => 2, QN(1) => 3], "j"))
(dim=5|id=247|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3

julia> A = random_itensor(i, j)
ITensor ord=2
(dim=5|id=801|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=247|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}

julia> C = combiner(j; tags = "jflip", dir = -dir(j))
ITensor ord=2
(dim=5|id=117|"jflip") <Out>
 1: QN(-1) => 3
 2: QN(0) => 2
(dim=5|id=247|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.Combiner

julia> inds(A)
((dim=5|id=801|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3, (dim=5|id=247|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3)

julia> inds(A * C)
((dim=5|id=117|"jflip") <Out>
 1: QN(-1) => 3
 2: QN(0) => 2, (dim=5|id=801|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3)
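
A useful sanity check is that combining with a combiner, including the direction-flipping combiner above, does not change the flux of the tensor. Here is a minimal sketch using the `flux` function:

using ITensors

i = Index([QN(0) => 2, QN(1) => 3], "i")
j = dag(Index([QN(0) => 2, QN(1) => 3], "j"))
A = random_itensor(i, j)

# Flip the direction of j with a single-Index combiner:
C = combiner(j; tags = "jflip", dir = -dir(j))

# The flux is unchanged; only the labeling of the sectors on that Index changes:
@show flux(A)
@show flux(A * C)  # same as flux(A)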

Unless you are writing very specialized custom code with symmetric tensors, manipulating Index directions with combiners like this is generally not needed.

Block Sparsity and Quantum Numbers

In general, not every block that is allowed according to the flux will actually be stored in the tensor (which in many cases improves efficiency). This typically happens when the tensor is first constructed and not all blocks are explicitly set:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=808) <Out>
 1: QN(0) => 1
 2: QN(1) => 1

julia> A = ITensor(i', dag(i));

julia> A[2, 2] = 1.0;

julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=808)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=808) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
 1.0

julia> D, U = eigen(A; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=1|id=859|"Link,eigen")' <Out>
 1: QN(1) => 1
Dim 2: (dim=1|id=859|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 1×1
Block(1, 1)
 [1:1, 1:1]
 1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=808) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=859|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
 1.0
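
One way to see that only the explicitly set block is stored is to count the stored blocks, for example with `nnzblocks` (a small sketch, not part of the example above):

using ITensors

i = Index([QN(0) => 1, QN(1) => 1])
A = ITensor(i', dag(i))

# Setting this element fixes the flux of A to QN(0) and inserts the (2, 2) block;
# the flux-allowed (1, 1) block is not stored:
A[2, 2] = 1.0
@show nnzblocks(A)  # 1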

If we also set A[1, 1] = 0.0, then all of the blocks allowed by the flux QN(0) exist and are included in the eigendecomposition:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=150) <Out>
 1: QN(0) => 1
 2: QN(1) => 1

julia> A = ITensor(i', dag(i));

julia> A[2, 2] = 1.0;

julia> A[1, 1] = 0.0;

julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=150)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=150) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
 1.0
Block(1, 1)
 [1:1, 1:1]
 0.0

julia> D, U = eigen(A; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=2|id=552|"Link,eigen")' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=552|"Link,eigen") <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(1, 1)
 [1:1, 1:1]
 0.0
Block(2, 2)
 [2:2, 2:2]
 1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=150) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=552|"Link,eigen") <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(1, 1)
 [1:1, 1:1]
 1.0
Block(2, 2)
 [2:2, 2:2]
 1.0

"Missing" blocks can also occur with tensor contractions, since the final blocks of the output tensor are made from combinations of contractions of blocks from the input tensors, and there is no guarantee that all flux-consistent blocks will end up in the result:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=139) <Out>
 1: QN(0) => 1
 2: QN(1) => 1

julia> j = Index([QN(0) => 1])
(dim=1|id=412) <Out>
 1: QN(0) => 1

julia> A = ITensor(i, dag(j));

julia> A[2, 1] = 1.0;

julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=139) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=412) <In>
 1: QN(0) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
 1.0

julia> A2 = prime(A, i) * dag(A);

julia> @show A2;
A2 = ITensor ord=2
Dim 1: (dim=2|id=139)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=139) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
 1.0

julia> D, U = eigen(A2; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=1|id=798|"Link,eigen")' <Out>
 1: QN(1) => 1
Dim 2: (dim=1|id=798|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 1×1
Block(1, 1)
 [1:1, 1:1]
 1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=139) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=798|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
 1.0
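
As in the construction example above, you can insert a missing flux-allowed block after the contraction by explicitly setting one of its elements (even to zero), after which the QN(0) sector also appears in the eigendecomposition. Here is a minimal sketch, again using `nnzblocks` for illustration:

using ITensors

i = Index([QN(0) => 1, QN(1) => 1])
j = Index([QN(0) => 1])
A = ITensor(i, dag(j))
A[2, 1] = 1.0

# The contraction only produces the (2, 2) block of A2:
A2 = prime(A, i) * dag(A)
@show nnzblocks(A2)  # 1

# Explicitly setting an element of the flux-allowed (1, 1) block inserts it:
A2[1, 1] = 0.0
@show nnzblocks(A2)  # 2

# Now both sectors appear in the eigendecomposition:
D, U = eigen(A2; ishermitian=true)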