ITensor Index identity: dimension labels and Einstein notation

Many tensor contraction libraries use Einstein notation, including NumPy's einsum function, ncon, and various Julia packages such as TensorOperations.jl, Tullio.jl, OMEinsum.jl, and Einsum.jl.

ITensor also uses Einstein notation, but the labels are stored inside the tensors and carried along with them through operations. In addition, the labels that determine whether tensor indices match with each other, and therefore automatically contract when calling * or align when adding or subtracting, are more sophisticated than simple characters or strings. Each ITensor Index is given a unique random ID number when it is constructed, and users can attach further information such as prime levels and tags, which together uniquely determine an Index. This is in contrast to simpler implementations of the same idea, such as the NamedDims.jl package, which only allows symbols as the metadata for uniquely identifying a tensor/array dimension.

Index identity

Here is an illustration of how the different types of Index metadata (random ID, prime level, and tags) work for Index identity:

julia> i = Index(2)
(dim=2|id=471)
julia> j = Index(2)
(dim=2|id=185)
julia> i == j
false
julia> id(i)
0xd2fc0ba4d928d7a7
julia> id(j)
0xf44a2144997b5bf9
julia> ip = i'
(dim=2|id=471)'
julia> ip == i
false
julia> plev(i) == 0
true
julia> plev(ip) == 1
true
julia> noprime(ip) == i
true
julia> ix = addtags(i, "x")
(dim=2|id=471|"x")
julia> ix == i
false
julia> removetags(ix, "x") == i
true
julia> ixyz = addtags(ix, "y,z")
(dim=2|id=471|"x,y,z")
julia> ixyz == addtags(i, "z,y,x")
true

The different kinds of metadata stored inside ITensor indices, which determine their identity, are useful in different contexts. The random ID is particularly useful when a new Index needs to be generated internally by ITensor, such as during a matrix factorization, where we want to make sure that the new Index will not accidentally clash with an existing one. For example:

julia> i = Index(2, "i")
(dim=2|id=401|"i")
julia> j = Index(2, "j")
(dim=2|id=191|"j")
julia> A = random_itensor(i, j)
ITensor ord=2 (dim=2|id=401|"i") (dim=2|id=191|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> U, S, V = svd(A, i; lefttags="i", righttags="j");
julia> inds(U)
((dim=2|id=401|"i"), (dim=2|id=181|"i"))
julia> inds(S)
((dim=2|id=181|"i"), (dim=2|id=174|"j"))
julia> inds(V)
((dim=2|id=191|"j"), (dim=2|id=174|"j"))
julia> norm(U * S * V - A)
2.7128146828510983e-15

You can see that it would have been a problem here if a new ID had not been assigned to the Index, since it would have clashed with the original index. In this case, the clash could also be avoided by giving the new indices different tags (with the keyword arguments lefttags and righttags), but in more complicated examples where that is not practical (for instance when many new indices are being introduced, as in a tensor train (TT)/matrix product state (MPS)), it is convenient not to force users to come up with unique prime levels or tags themselves. Unique IDs also help to avoid accidental contractions in more complicated tensor network algorithms where many indices can end up with the same prime levels or tags.
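For instance, here is a minimal sketch (an illustration added here, not part of the original example) of how the random IDs prevent accidental matches: two indices constructed with the same dimension and tag still compare as different, so contracting tensors built from them gives an outer product rather than an unintended contraction.

using ITensors

i1 = Index(2, "i")
i2 = Index(2, "i")  # same dimension and tag, but a fresh random ID
@show i1 == i2      # false, since the IDs differ

A = random_itensor(i1)
B = random_itensor(i2)
# The indices do not match, so * produces an order-2 outer product
# instead of accidentally contracting over "i":
@show order(A * B)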

In contrast, using multiple indices with the same Index ID but different prime levels or tags can be useful in situations where there is a more fundamental relationship between the spaces. For example, for an ITensor corresponding to a Hermitian operator, it is helpful to make the bra and ket spaces the same up to a prime level:

i = Index(2, "i")
j = Index(3, "j")
# The primed indices act as the "bra" space, the unprimed ones as the "ket" space
A = random_itensor(i', j', dag(i), dag(j))
# Hermitian part of A
H = 0.5 * (A + swapprime(dag(A), 0 => 1))
v = random_itensor(i, j)
# Applying H raises the prime level, so remove it to map back to the ket space
Hv = noprime(H * v)
vH = dag(v)' * H
# H is Hermitian, so acting on the ket or on the bra gives the same result
norm(Hv - dag(vH))

Note that we have added dag in a few places. It is superfluous in this case, since the tensors are real and dense, but it becomes important when the tensors are complex and/or have symmetries. You can see that relating the bra and ket spaces by prime levels is very useful here, since it makes it much easier to perform operations that map from one space to another. We could have created A from four entirely different indices with different ID numbers, but that would make the operations a bit more cumbersome, as shown below:

julia> i = Index(2, "i")
(dim=2|id=765|"i")
julia> j = Index(3, "j")
(dim=3|id=284|"j")
julia> ip = Index(2, "i")
(dim=2|id=518|"i")
julia> jp = Index(3, "jp")
(dim=3|id=179|"jp")
julia> A = random_itensor(ip, jp, dag(i), dag(j))
ITensor ord=4 (dim=2|id=518|"i") (dim=3|id=179|"jp") (dim=2|id=765|"i") (dim=3|id=284|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> H = 0.5 * (A + swapinds(dag(A), (i, j), (ip, jp)))
ITensor ord=4 (dim=2|id=518|"i") (dim=3|id=179|"jp") (dim=2|id=765|"i") (dim=3|id=284|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> v = random_itensor(i, j)
ITensor ord=2 (dim=2|id=765|"i") (dim=3|id=284|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Hv = replaceinds(H * v, (ip, jp) => (i, j))
ITensor ord=2 (dim=2|id=765|"i") (dim=3|id=284|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> vH = replaceinds(dag(v), (i, j) => (ip, jp)) * H
ITensor ord=2 (dim=2|id=765|"i") (dim=3|id=284|"j")
NDTensors.Dense{Float64, Vector{Float64}}
julia> norm(Hv - dag(vH))
0.0
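As an aside on the dag calls above, the following minimal sketch (an illustration, not part of the original example) shows why dag matters once the tensors are complex: contracting a vector with its dag conjugates the elements and reproduces the squared norm, while contracting it with itself does not.

using ITensors

i = Index(2, "i")
v = random_itensor(i) + im * random_itensor(i)  # a complex-valued ITensor
# dag conjugates the tensor elements (and reverses QN arrows when present):
@show (dag(v) * v)[] ≈ norm(v)^2  # true
# Without dag there is no conjugation, so this is generally not the squared norm:
@show (v * v)[] ≈ norm(v)^2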

Relationship to other Einstein notation-based libraries

Here we show examples of different ways to perform the contraction "ab,bc,cd->ad" in ITensor.
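For comparison with the packages mentioned above, here is a rough sketch of how the same contraction might be written with TensorOperations.jl's @tensor macro, where the index labels live in the expression rather than in the tensors themselves (this snippet is an illustration based on that package's documented interface, not part of the ITensor examples below):

using TensorOperations

A = randn(2, 2); B = randn(2, 3); C = randn(3, 3)
# The labels a, b, c, d are names in the expression, analogous
# to the einsum string "ab,bc,cd->ad":
@tensor D[a, d] := A[a, b] * B[b, c] * C[c, d]

In the ITensor versions that follow, the contraction pattern is instead determined by Index identity, so no labels need to appear in the contraction call itself.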

julia> da, dc = 2, 3;
julia> db, dd = da, dc;
julia> tags = ("a", "b", "c", "d");
julia> dims = (da, db, dc, dd);
julia> a, b, c, d = Index.(dims, tags);
julia> Aab = random_itensor(a, b)
ITensor ord=2 (dim=2|id=745|"a") (dim=2|id=43|"b")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Bbc = random_itensor(b, c)
ITensor ord=2 (dim=2|id=43|"b") (dim=3|id=520|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Ccd = random_itensor(c, d)
ITensor ord=2 (dim=3|id=520|"c") (dim=3|id=621|"d")
NDTensors.Dense{Float64, Vector{Float64}}

# "ab,bc,cd->ad"
julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=745|"a") (dim=3|id=621|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true

# Using replaceinds (most general way)
# "ba,bc,dc->ad"
julia> Aba = replaceinds(Aab, (a, b) => (b, a))
ITensor ord=2 (dim=2|id=43|"b") (dim=2|id=745|"a")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Cdc = replaceinds(Ccd, (c, d) => (d, c))
ITensor ord=2 (dim=3|id=621|"d") (dim=3|id=520|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=745|"a") (dim=3|id=621|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

# Using setinds
# This is a bit lower level since it doesn't check if the indices
# are compatible in dimension, so is not recommended in general.
julia> using ITensors: setinds
julia> Aba = setinds(Aab, (b, a))
ITensor ord=2 (dim=2|id=43|"b") (dim=2|id=745|"a")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Cdc = setinds(Ccd, (d, c))
ITensor ord=2 (dim=3|id=621|"d") (dim=3|id=520|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=745|"a") (dim=3|id=621|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

# Using prime levels (assuming the indices were made
# with these prime levels in the first place)
julia> a = Index(da, "a")
(dim=2|id=946|"a")
julia> c = Index(dc, "c")
(dim=3|id=47|"c")
julia> b, d = a', c'
((dim=2|id=946|"a")', (dim=3|id=47|"c")')
julia> Aab = random_itensor(a, b)
ITensor ord=2 (dim=2|id=946|"a") (dim=2|id=946|"a")'
NDTensors.Dense{Float64, Vector{Float64}}
julia> Bbc = random_itensor(b, c)
ITensor ord=2 (dim=2|id=946|"a")' (dim=3|id=47|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Ccd = random_itensor(c, d)
ITensor ord=2 (dim=3|id=47|"c") (dim=3|id=47|"c")'
NDTensors.Dense{Float64, Vector{Float64}}
julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=946|"a") (dim=3|id=47|"c")'
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true
julia> Aba = swapprime(Aab, 0 => 1)
ITensor ord=2 (dim=2|id=946|"a")' (dim=2|id=946|"a")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Cdc = swapprime(Ccd, 0 => 1)
ITensor ord=2 (dim=3|id=47|"c")' (dim=3|id=47|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=946|"a") (dim=3|id=47|"c")'
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

# Using tags (assuming the indices were made
# with these tags in the first place)
julia> a = Index(da, "a")
(dim=2|id=109|"a")
julia> c = Index(dc, "c")
(dim=3|id=472|"c")
julia> b, d = settags(a, "b"), settags(c, "d")
((dim=2|id=109|"b"), (dim=3|id=472|"d"))
julia> Aab = random_itensor(a, b)
ITensor ord=2 (dim=2|id=109|"a") (dim=2|id=109|"b")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Bbc = random_itensor(b, c)
ITensor ord=2 (dim=2|id=109|"b") (dim=3|id=472|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Ccd = random_itensor(c, d)
ITensor ord=2 (dim=3|id=472|"c") (dim=3|id=472|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=109|"a") (dim=3|id=472|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true
julia> Aba = swaptags(Aab, "a", "b")
ITensor ord=2 (dim=2|id=109|"b") (dim=2|id=109|"a")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Cdc = swaptags(Ccd, "c", "d")
ITensor ord=2 (dim=3|id=472|"d") (dim=3|id=472|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=109|"a") (dim=3|id=472|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

# Using Julia Arrays
julia> A = randn(da, db)
2×2 Matrix{Float64}:
  1.10291    0.634388
 -0.882228  -0.322246
julia> B = randn(db, dc)
2×3 Matrix{Float64}:
  1.12768   0.334594  -0.854006
 -1.26482   0.972584   0.0521089
julia> C = randn(dc, dd)
3×3 Matrix{Float64}:
  0.073576  -0.572066  0.237091
 -0.355748  -0.244972  0.662303
  1.91204    1.69045   0.713253
julia> tags = ("a", "b", "c", "d")
("a", "b", "c", "d")
julia> dims = (da, db, dc, dd)
(2, 2, 3, 3)
julia> a, b, c, d = Index.(dims, tags)
((dim=2|id=423|"a"), (dim=2|id=454|"b"), (dim=3|id=772|"c"), (dim=3|id=950|"d"))
julia> Aab = itensor(A, a, b)
ITensor ord=2 (dim=2|id=423|"a") (dim=2|id=454|"b")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Bbc = itensor(B, b, c)
ITensor ord=2 (dim=2|id=454|"b") (dim=3|id=772|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Ccd = itensor(C, c, d)
ITensor ord=2 (dim=3|id=772|"c") (dim=3|id=950|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=423|"a") (dim=3|id=950|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true
julia> Aba = itensor(A, b, a)
ITensor ord=2 (dim=2|id=454|"b") (dim=2|id=423|"a")
NDTensors.Dense{Float64, Vector{Float64}}
julia> Cdc = itensor(C, d, c)
ITensor ord=2 (dim=3|id=950|"d") (dim=3|id=772|"c")
NDTensors.Dense{Float64, Vector{Float64}}
julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=423|"a") (dim=3|id=950|"d")
NDTensors.Dense{Float64, Vector{Float64}}
julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

# Note that we may start allowing this notation in the future
# (https://github.com/ITensor/ITensors.jl/issues/673):
#
# out1 = A[a, b] * B[b, c] * C[c, d]
# @show hassameinds(out1, (a, d))
#
# out2 = A[b, a] * B[b, c] * C[d, c]
# @show hassameinds(out2, (a, d))