# ITensor Index identity: dimension labels and Einstein notation

Many tensor contraction libraries use Einstein notation, including NumPy's einsum function, ncon, and various Julia packages such as TensorOperations.jl, Tullio.jl, OMEinsum.jl, and Einsum.jl.

ITensor also uses Einstein notation; however, the labels are stored inside the tensors themselves and carried along with them through various operations. In addition, the labels that determine whether tensor indices match each other, and therefore automatically contract when doing * or align when adding or subtracting, are more sophisticated than simple characters or strings. ITensor indices are given a unique random ID number when they are constructed, and users can attach further information such as prime levels and tags, which together uniquely determine an Index. This is in contrast to simpler implementations of the same idea, such as the NamedDims.jl package, which only allows symbols as the metadata for uniquely identifying a tensor/array dimension.

## Index identity

Here is an illustration of how the different types of Index metadata (random ID, prime level, and tags) work for Index identity:

julia> i = Index(2)
(dim=2|id=672)

julia> j = Index(2)
(dim=2|id=611)

julia> i == j
false

julia> id(i)
0x98cc43836fa57e88

julia> id(j)
0x16eabe7b4b3b610b

julia> ip = i'
(dim=2|id=672)'

julia> ip == i
false

julia> plev(i) == 0
true

julia> plev(ip) == 1
true

julia> noprime(ip) == i
true

julia> ix = addtags(i, "x")
(dim=2|id=672|"x")

julia> ix == i
false

julia> removetags(ix, "x") == i
true

julia> ixyz = addtags(ix, "y,z")
(dim=2|id=672|"x,y,z")

julia> removetags(ixyz, "x,y,z") == i
true

The different kinds of metadata stored inside ITensor indices, which determine their identity, are useful in different contexts. The random ID is particularly useful when a new Index needs to be generated internally by ITensor, such as during a matrix factorization, where we want to make sure the new Index will not accidentally clash with an existing one. For example:

julia> i = Index(2, "i")
(dim=2|id=412|"i")

julia> j = Index(2, "j")
(dim=2|id=921|"j")

julia> A = randomITensor(i, j)
ITensor ord=2 (dim=2|id=412|"i") (dim=2|id=921|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> U, S, V = svd(A, i; lefttags="i", righttags="j");

julia> inds(U)
((dim=2|id=412|"i"), (dim=2|id=368|"i"))

julia> inds(S)
((dim=2|id=368|"i"), (dim=2|id=843|"j"))

julia> inds(V)
((dim=2|id=921|"j"), (dim=2|id=843|"j"))

julia> norm(U * S * V - A)
3.779755061834631e-15

You can see that it would have been a problem here if a new ID had not been assigned to the Index, since it would have clashed with the original index. In this case the clash could be avoided by giving the new indices different tags (with the keyword arguments lefttags and righttags), but in more complicated situations where that is impractical (for example when many new indices are being introduced, as in a tensor train (TT)/matrix product state (MPS)), it is convenient not to force users to come up with unique prime levels or tags themselves. It also helps avoid accidental contractions in more complicated tensor network algorithms, where many indices can potentially share the same prime levels or tags.
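For comparison, here is a sketch of the same factorization without passing lefttags or righttags (assuming ITensors is loaded); the exact default tags on the new link indices may vary by version, but the fresh random IDs are what guarantee there is no clash with existing indices:

```julia
using ITensors

i = Index(2, "i")
j = Index(2, "j")
A = randomITensor(i, j)

# With no lefttags/righttags keyword arguments, the new link indices
# still get fresh random IDs, so they cannot collide with i or j even
# if their tags happened to match:
U, S, V = svd(A, i)
@show inds(U)

# The reconstruction is driven purely by Index identity:
@show norm(U * S * V - A)
```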

In contrast, using multiple indices with the same Index ID but different prime levels and tags can be useful in situations where there is a more fundamental relationship between the spaces. For example, in the case of an ITensor corresponding to a Hermitian operator, it is helpful to make the bra and ket spaces the same up to a prime level:

i = Index(2, "i")
j = Index(3, "j")
A = randomITensor(i', j', dag(i), dag(j))
H = 0.5 * (A + swapprime(dag(A), 0 => 1))
v = randomITensor(i, j)
Hv = noprime(H * v)
vH = dag(v)' * H
norm(Hv - dag(vH))

Note that we have added dag in a few places. It is superfluous in this case, since the tensors are real and dense, but it becomes important when the tensors are complex and/or have symmetries. You can see that in this case it is very useful to relate the bra and ket spaces by prime levels, since that makes it much easier to perform operations mapping from one space to the other. We could have created A from 4 entirely different indices with different ID numbers, but that would make the operations a bit more cumbersome, as shown below:

julia> i = Index(2, "i")
(dim=2|id=750|"i")

julia> j = Index(3, "j")
(dim=3|id=26|"j")

julia> ip = Index(2, "i")
(dim=2|id=876|"i")

julia> jp = Index(3, "jp")
(dim=3|id=229|"jp")

julia> A = randomITensor(ip, jp, dag(i), dag(j))
ITensor ord=4 (dim=2|id=876|"i") (dim=3|id=229|"jp") (dim=2|id=750|"i") (dim=3|id=26|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> H = 0.5 * (A + swapinds(dag(A), (i, j), (ip, jp)))
ITensor ord=4 (dim=2|id=876|"i") (dim=3|id=229|"jp") (dim=2|id=750|"i") (dim=3|id=26|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> v = randomITensor(i, j)
ITensor ord=2 (dim=2|id=750|"i") (dim=3|id=26|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Hv = replaceinds(H * v, (ip, jp) => (i, j))
ITensor ord=2 (dim=2|id=750|"i") (dim=3|id=26|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> vH = replaceinds(dag(v), (i, j) => (ip, jp)) * H
ITensor ord=2 (dim=2|id=750|"i") (dim=3|id=26|"j")
NDTensors.Dense{Float64, Vector{Float64}}

julia> norm(Hv - dag(vH))
0.0
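To make the role of dag more concrete, here is a small sketch (assuming the same ITensors setup) with a complex-valued ITensor; dag conjugates the tensor elements, so contracting dag(v) with v yields the squared norm:

```julia
using ITensors

i = Index(2, "i")
v = randomITensor(ComplexF64, i)

# dag conjugates the elements (and, for tensors with symmetries, also
# reverses the quantum number arrows), so dag(v) * v is |v|^2:
@show (dag(v) * v)[] ≈ norm(v)^2

# Without dag, v * v computes the sum of v_i^2, which is generally
# complex and has no meaning as an inner product:
@show (v * v)[]
```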

## Relationship to other Einstein notation-based libraries

Here we show examples of different ways to perform the contraction "ab,bc,cd->ad" in ITensor.
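As a point of reference before the ITensor versions, note that for order-2 tensors the contraction "ab,bc,cd->ad" is just an ordinary matrix product chain in plain Julia (no ITensor involved):

```julia
# "ab,bc,cd->ad" for matrices is the associative product A * B * C.
A = randn(2, 2)
B = randn(2, 3)
C = randn(3, 3)
out = A * B * C

# Element-wise, out[a, d] is the sum over the repeated indices b and c:
out_ref = [sum(A[a, b] * B[b, c] * C[c, d] for b in 1:2, c in 1:3)
           for a in 1:2, d in 1:3]
@assert out ≈ out_ref
```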

julia> da, dc = 2, 3;

julia> db, dd = da, dc;

julia> tags = ("a", "b", "c", "d");

julia> dims = (da, db, dc, dd);

julia> a, b, c, d = Index.(dims, tags);

julia> Aab = randomITensor(a, b)
ITensor ord=2 (dim=2|id=350|"a") (dim=2|id=310|"b")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Bbc = randomITensor(b, c)
ITensor ord=2 (dim=2|id=310|"b") (dim=3|id=499|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Ccd = randomITensor(c, d)
ITensor ord=2 (dim=3|id=499|"c") (dim=3|id=261|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=350|"a") (dim=3|id=261|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true

julia> #
# Using replaceinds (most general way)
#

julia> Aba = replaceinds(Aab, (a, b) => (b, a))
ITensor ord=2 (dim=2|id=310|"b") (dim=2|id=350|"a")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Cdc = replaceinds(Ccd, (c, d) => (d, c))
ITensor ord=2 (dim=3|id=261|"d") (dim=3|id=499|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=350|"a") (dim=3|id=261|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

julia> #
# Using setinds
#

# This is a bit lower level
# since it doesn't check if the indices
# are compatible in dimension,
# so is not recommended in general.
julia> using ITensors: setinds

julia> Aba = setinds(Aab, (b, a))
ITensor ord=2 (dim=2|id=310|"b") (dim=2|id=350|"a")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Cdc = setinds(Ccd, (d, c))
ITensor ord=2 (dim=3|id=261|"d") (dim=3|id=499|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=350|"a") (dim=3|id=261|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

julia> #
# Using prime levels (assuming
# the indices were made with these
# prime levels in the first place)
#

julia> a = Index(da, "a")
(dim=2|id=724|"a")

julia> c = Index(dc, "c")
(dim=3|id=943|"c")

julia> b, d = a', c'
((dim=2|id=724|"a")', (dim=3|id=943|"c")')

julia> Aab = randomITensor(a, b)
ITensor ord=2 (dim=2|id=724|"a") (dim=2|id=724|"a")'
NDTensors.Dense{Float64, Vector{Float64}}

julia> Bbc = randomITensor(b, c)
ITensor ord=2 (dim=2|id=724|"a")' (dim=3|id=943|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Ccd = randomITensor(c, d)
ITensor ord=2 (dim=3|id=943|"c") (dim=3|id=943|"c")'
NDTensors.Dense{Float64, Vector{Float64}}

julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=724|"a") (dim=3|id=943|"c")'
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true

julia> Aba = swapprime(Aab, 0 => 1)
ITensor ord=2 (dim=2|id=724|"a")' (dim=2|id=724|"a")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Cdc = swapprime(Ccd, 0 => 1)
ITensor ord=2 (dim=3|id=943|"c")' (dim=3|id=943|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=724|"a") (dim=3|id=943|"c")'
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

julia> #
# Using tags (assuming
# the indices were made with these
# tags in the first place)
#

julia> a = Index(da, "a")
(dim=2|id=391|"a")

julia> c = Index(dc, "c")
(dim=3|id=376|"c")

julia> b, d = settags(a, "b"), settags(c, "d")
((dim=2|id=391|"b"), (dim=3|id=376|"d"))

julia> Aab = randomITensor(a, b)
ITensor ord=2 (dim=2|id=391|"a") (dim=2|id=391|"b")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Bbc = randomITensor(b, c)
ITensor ord=2 (dim=2|id=391|"b") (dim=3|id=376|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Ccd = randomITensor(c, d)
ITensor ord=2 (dim=3|id=376|"c") (dim=3|id=376|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=391|"a") (dim=3|id=376|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true

julia> Aba = swaptags(Aab, "a", "b")
ITensor ord=2 (dim=2|id=391|"b") (dim=2|id=391|"a")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Cdc = swaptags(Ccd, "c", "d")
ITensor ord=2 (dim=3|id=376|"d") (dim=3|id=376|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=391|"a") (dim=3|id=376|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

julia> #
# Using Julia Arrays
#

julia> A = randn(da, db)
2×2 Matrix{Float64}:
1.10291    0.634388
-0.882228  -0.322246

julia> B = randn(db, dc)
2×3 Matrix{Float64}:
1.12768  0.334594  -0.854006
-1.26482  0.972584   0.0521089

julia> C = randn(dc, dd)
3×3 Matrix{Float64}:
0.073576  -0.572066  0.237091
-0.355748  -0.244972  0.662303
1.91204    1.69045   0.713253

julia> tags = ("a", "b", "c", "d")
("a", "b", "c", "d")

julia> dims = (da, db, dc, dd)
(2, 2, 3, 3)

julia> a, b, c, d = Index.(dims, tags)
((dim=2|id=229|"a"), (dim=2|id=97|"b"), (dim=3|id=711|"c"), (dim=3|id=884|"d"))

julia> Aab = itensor(A, a, b)
ITensor ord=2 (dim=2|id=229|"a") (dim=2|id=97|"b")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Bbc = itensor(B, b, c)
ITensor ord=2 (dim=2|id=97|"b") (dim=3|id=711|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Ccd = itensor(C, c, d)
ITensor ord=2 (dim=3|id=711|"c") (dim=3|id=884|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out1 = Aab * Bbc * Ccd
ITensor ord=2 (dim=2|id=229|"a") (dim=3|id=884|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out1, (a, d))
hassameinds(out1, (a, d)) = true
true

julia> Aba = itensor(A, b, a)
ITensor ord=2 (dim=2|id=97|"b") (dim=2|id=229|"a")
NDTensors.Dense{Float64, Vector{Float64}}

julia> Cdc = itensor(C, d, c)
ITensor ord=2 (dim=3|id=884|"d") (dim=3|id=711|"c")
NDTensors.Dense{Float64, Vector{Float64}}

julia> out2 = Aba * Bbc * Cdc
ITensor ord=2 (dim=2|id=229|"a") (dim=3|id=884|"d")
NDTensors.Dense{Float64, Vector{Float64}}

julia> @show hassameinds(out2, (a, d))
hassameinds(out2, (a, d)) = true
true

julia> #
# Note that we may start allowing
# this notation in future:
# (https://github.com/ITensor/ITensors.jl/issues/673)
#
#out1 = A[a, b] * B[b, c] * C[c, d]
#@show hassameinds(out1, (a, d))
#
#out2 = A[b, a] * B[b, c] * C[d, c]
#@show hassameinds(out2, (a, d))