ITensor
Description
ITensors.ITensor
— Type
An ITensor is a tensor whose interface is independent of its memory layout. Therefore it is not necessary to know the ordering of an ITensor's indices, only which indices an ITensor has. Operations like contraction and addition of ITensors automatically handle any memory permutations.
Examples
julia> i = Index(2, "i")
(dim=2|id=287|"i")
#
# Make an ITensor with random elements:
#
julia> A = random_itensor(i', i)
ITensor ord=2 (dim=2|id=287|"i")' (dim=2|id=287|"i")
NDTensors.Dense{Float64,Array{Float64,1}}
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=287|"i")'
Dim 2: (dim=2|id=287|"i")
NDTensors.Dense{Float64,Array{Float64,1}}
2×2
0.28358594718392427 1.4342219756446355
1.6620103556283987 -0.40952231269251566
julia> @show inds(A);
inds(A) = ((dim=2|id=287|"i")', (dim=2|id=287|"i"))
#
# Set the i==1, i'==2 element to 1.0:
#
julia> A[i => 1, i' => 2] = 1;
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=287|"i")'
Dim 2: (dim=2|id=287|"i")
NDTensors.Dense{Float64,Array{Float64,1}}
2×2
0.28358594718392427 1.4342219756446355
1.0 -0.40952231269251566
julia> @show storage(A);
storage(A) = [0.28358594718392427, 1.0, 1.4342219756446355, -0.40952231269251566]
julia> B = random_itensor(i, i');
julia> @show B;
B = ITensor ord=2
Dim 1: (dim=2|id=287|"i")
Dim 2: (dim=2|id=287|"i")'
NDTensors.Dense{Float64,Array{Float64,1}}
2×2
-0.6510816500352691 0.2579101497658179
0.256266641521826 -0.9464735926768166
#
# Can add or subtract ITensors as long as they
# have the same indices, in any order:
#
julia> @show A + B;
A + B = ITensor ord=2
Dim 1: (dim=2|id=287|"i")'
Dim 2: (dim=2|id=287|"i")
NDTensors.Dense{Float64,Array{Float64,1}}
2×2
-0.3674957028513448 1.6904886171664615
1.2579101497658178 -1.3559959053693322
Dense Constructors
ITensors.ITensor
— Method
ITensor([::Type{ElT} = Float64, ]inds)
ITensor([::Type{ElT} = Float64, ]inds::Index...)
Construct an ITensor filled with zeros having indices inds
and element type ElT
. If the element type is not specified, it defaults to Float64
.
The storage will have NDTensors.Dense
type.
Examples
i = Index(2,"index_i")
j = Index(4,"index_j")
k = Index(3,"index_k")
A = ITensor(i,j)
B = ITensor(ComplexF64,k,j)
ITensors.ITensor
— Method
ITensor([::Type{ElT} = Float64, ]::UndefInitializer, inds)
ITensor([::Type{ElT} = Float64, ]::UndefInitializer, inds::Index...)
Construct an ITensor filled with undefined elements having indices inds
and element type ElT
. If the element type is not specified, it defaults to Float64
. One purpose for using this constructor is that initializing the elements in an undefined way is faster than initializing them to a set value such as zero.
The storage will have NDTensors.Dense
type.
Examples
i = Index(2,"index_i")
j = Index(4,"index_j")
k = Index(3,"index_k")
A = ITensor(undef,i,j)
B = ITensor(ComplexF64,undef,k,j)
ITensors.ITensor
— Method
ITensor([ElT::Type, ]x::Number, inds)
ITensor([ElT::Type, ]x::Number, inds::Index...)
Construct an ITensor with all elements set to x
and indices inds
.
If x isa Int
or x isa Complex{Int}
then the elements will be set to float(x)
unless specified otherwise by the first input.
The storage will have NDTensors.Dense
type.
Examples
i = Index(2,"index_i")
j = Index(4,"index_j")
k = Index(3,"index_k")
A = ITensor(1.0, i, j)
A = ITensor(1, i, j) # same as above
B = ITensor(2.0+3.0im, j, k)
Warning: In future versions this may not automatically convert integer inputs with float
, and in that case the particular element type should not be relied on.
ITensors.ITensor
— Method
ITensor([ElT::Type, ]A::AbstractArray, inds)
ITensor([ElT::Type, ]A::AbstractArray, inds::Index...)
itensor([ElT::Type, ]A::AbstractArray, inds)
itensor([ElT::Type, ]A::AbstractArray, inds::Index...)
Construct an ITensor from an AbstractArray A
and indices inds
. The ITensor will be a view of the AbstractArray data if possible (if no conversion to a different element type is necessary).
If specified, the ITensor will have element type ElT
.
If the element type of A
is Int
or Complex{Int}
and the desired element type isn't specified, it will be converted to Float64
or Complex{Float64}
automatically. To keep the element type as an integer, specify it explicitly, for example with:
i = Index(2, "i")
A = [0 1; 1 0]
T = ITensor(eltype(A), A, i', dag(i))
Examples
i = Index(2,"index_i")
j = Index(2,"index_j")
M = [1. 2;
3 4]
T = ITensor(M, i, j)
T[i => 1, j => 1] = 3.3
M[1, 1] == 3.3
T[i => 1, j => 1] == 3.3
In future versions this may not automatically convert Int
/Complex{Int}
inputs to floating point versions with float
(once tensor operations using Int
/Complex{Int}
are natively as fast as floating point operations), and in that case the particular element type should not be relied on. To avoid extra conversions (and therefore allocations) it is best practice to directly construct with itensor([0. 1; 1 0], i', dag(i))
if you want a floating point element type. The conversion is done as a performance optimization since often tensors are passed to BLAS/LAPACK and need to be converted to floating point types compatible with those libraries, but future projects in Julia may allow for efficient operations with more general element types (for example see https://github.com/JuliaLinearAlgebra/Octavian.jl).
ITensors.random_itensor
— Method
random_itensor([rng=Random.default_rng()], [ElT=Float64], inds)
random_itensor([rng=Random.default_rng()], [ElT=Float64], inds::Index...)
Construct an ITensor with type ElT
and indices inds
, whose elements are normally distributed random numbers. If the element type is not specified, it defaults to Float64
.
Examples
i = Index(2,"index_i")
j = Index(4,"index_j")
k = Index(3,"index_k")
A = random_itensor(i,j)
B = random_itensor(ComplexF64,k,j)
ITensors.onehot
— Function
onehot(ivs...)
setelt(ivs...)
onehot(::Type, ivs...)
setelt(::Type, ivs...)
Create an ITensor with all zeros except the specified value, which is set to 1.
Examples
i = Index(2,"i")
A = onehot(i=>2)
# A[i=>2] == 1, all other elements zero
# Specify the element type
A = onehot(Float32, i=>2)
j = Index(3,"j")
B = onehot(i=>1,j=>3)
# B[i=>1,j=>3] == 1, all other elements zero
Dense View Constructors
ITensors.itensor
— Method
itensor(args...; kwargs...)
Like the ITensor constructor, but attempts to make a view of the input data when possible.
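For illustration, a minimal sketch of the view behavior (variable and index names here are arbitrary); compare with the copying ITensor constructor above:
i = Index(2, "i")
j = Index(2, "j")
M = [1.0 2.0; 3.0 4.0]
T = itensor(M, i, j)   # no element type conversion needed, so T may view M's data
M[1, 1] = 5.0
T[i => 1, j => 1]      # 5.0 in this case, since T shares M's data; ITensor(M, i, j) would have copied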
QN BlockSparse Constructors
ITensors.ITensor
— Method
ITensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]inds)
ITensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]inds::Index...)
Construct an ITensor with BlockSparse storage filled with zero(ElT)
where the nonzero blocks are determined by flux
.
If ElT
is not specified it defaults to Float64
.
If flux
is not specified, the ITensor will be empty (it will contain no blocks, and have an undefined flux). The flux will be set by the first element that is set.
Examples
julia> i
(dim=3|id=212|"i") <Out>
1: QN(0) => 1
2: QN(1) => 2
julia> @show ITensor(QN(0), i', dag(i));
ITensor(QN(0), i', dag(i)) = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
3×3
Block(1, 1)
[1:1, 1:1]
0.0
Block(2, 2)
[2:3, 2:3]
0.0 0.0
0.0 0.0
julia> @show ITensor(QN(1), i', dag(i));
ITensor(QN(1), i', dag(i)) = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
3×3
Block(2, 1)
[2:3, 1:1]
0.0
0.0
julia> @show ITensor(ComplexF64, QN(1), i', dag(i));
ITensor(ComplexF64, QN(1), i', dag(i)) = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{ComplexF64, Vector{ComplexF64}, 2}
3×3
Block(2, 1)
[2:3, 1:1]
0.0 + 0.0im
0.0 + 0.0im
julia> @show ITensor(undef, QN(1), i', dag(i));
ITensor(undef, QN(1), i', dag(i)) = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
3×3
Block(2, 1)
[2:3, 1:1]
0.0
1.63e-322
Construction with undefined flux:
julia> A = ITensor(i', dag(i));
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.EmptyStorage{NDTensors.EmptyNumber, NDTensors.BlockSparse{NDTensors.EmptyNumber, Vector{NDTensors.EmptyNumber}, 2}}
3×3
julia> isnothing(flux(A))
true
julia> A[i' => 1, i => 2] = 2
2
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=3|id=212|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=212|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{Int64, Vector{Int64}, 2}
3×3
Block(1, 2)
[1:1, 2:3]
2 0
julia> flux(A)
QN(-1)
ITensors.ITensor
— Method
ITensor([ElT::Type, ]A::AbstractArray, inds)
ITensor([ElT::Type, ]A::AbstractArray, inds::Index...)
itensor([ElT::Type, ]A::AbstractArray, inds)
itensor([ElT::Type, ]A::AbstractArray, inds::Index...)
Construct an ITensor from an AbstractArray A
and indices inds
. The ITensor will be a view of the AbstractArray data if possible (if no conversion to a different element type is necessary).
If specified, the ITensor will have element type ElT
.
If the element type of A
is Int
or Complex{Int}
and the desired element type isn't specified, it will be converted to Float64
or Complex{Float64}
automatically. To keep the element type as an integer, specify it explicitly, for example with:
i = Index(2, "i")
A = [0 1; 1 0]
T = ITensor(eltype(A), A, i', dag(i))
Examples
i = Index(2,"index_i")
j = Index(2,"index_j")
M = [1. 2;
3 4]
T = ITensor(M, i, j)
T[i => 1, j => 1] = 3.3
M[1, 1] == 3.3
T[i => 1, j => 1] == 3.3
In future versions this may not automatically convert Int
/Complex{Int}
inputs to floating point versions with float
(once tensor operations using Int
/Complex{Int}
are natively as fast as floating point operations), and in that case the particular element type should not be relied on. To avoid extra conversions (and therefore allocations) it is best practice to directly construct with itensor([0. 1; 1 0], i', dag(i))
if you want a floating point element type. The conversion is done as a performance optimization since often tensors are passed to BLAS/LAPACK and need to be converted to floating point types compatible with those libraries, but future projects in Julia may allow for efficient operations with more general element types (for example see https://github.com/JuliaLinearAlgebra/Octavian.jl).
ITensor([ElT::Type, ]::AbstractArray, inds; tol=0.0, checkflux=true)
Create a block sparse ITensor from the input Array and a collection of QN indices. Zeros are dropped and the nonzero blocks are determined from the nonzero values of the array.
Optionally, you can set a tolerance such that elements less than or equal to the tolerance are dropped.
By default, this will check that the flux of the nonzero blocks are consistent with each other. You can disable this check by setting checkflux=false
.
Examples
julia> i = Index([QN(0)=>1, QN(1)=>2], "i");
julia> A = [1e-9 0.0 0.0;
0.0 2.0 3.0;
0.0 1e-10 4.0];
julia> @show ITensor(A, i', dag(i); tol = 1e-8);
ITensor(A, i', dag(i); tol = 1.0e-8) = ITensor ord=2
Dim 1: (dim=3|id=468|"i")' <Out>
1: QN(0) => 1
2: QN(1) => 2
Dim 2: (dim=3|id=468|"i") <In>
1: QN(0) => 1
2: QN(1) => 2
NDTensors.BlockSparse{Float64,Array{Float64,1},2}
3×3
Block: (2, 2)
[2:3, 2:3]
2.0 3.0
0.0 4.0
ITensors.ITensor
— Method
ITensor([::Type{ElT} = Float64,] ::UndefInitializer, flux::QN, inds)
ITensor([::Type{ElT} = Float64,] ::UndefInitializer, flux::QN, inds::Index...)
Construct an ITensor with indices inds
and BlockSparse storage with undefined elements of type ElT
, where the nonzero (allocated) blocks are determined by the provided QN flux
. One purpose for using this constructor is that initializing the elements in an undefined way is faster than initializing them to a set value such as zero.
The storage will have NDTensors.BlockSparse
type.
Examples
i = Index([QN(0)=>1, QN(1)=>2], "i")
A = ITensor(undef,QN(0),i',dag(i))
B = ITensor(Float64,undef,QN(0),i',dag(i))
C = ITensor(ComplexF64,undef,QN(0),i',dag(i))
Diagonal constructors
ITensors.diag_itensor
— Method
diag_itensor([::Type{ElT} = Float64, ]inds)
diag_itensor([::Type{ElT} = Float64, ]inds::Index...)
Make a sparse ITensor of element type ElT
with only the elements along the diagonal stored. Defaults to having zero(ElT)
along the diagonal.
The storage will have NDTensors.Diag
type.
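A minimal usage sketch (index names and dimensions are arbitrary):
i = Index(3, "i")
j = Index(3, "j")
D  = diag_itensor(i, j)               # Float64 Diag storage, diagonal initialized to 0.0
Dc = diag_itensor(ComplexF64, i, j)   # complex element type
D[i => 1, j => 1]                     # 0.0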
ITensors.diag_itensor
— Method
diag_itensor([ElT::Type, ]v::AbstractVector, inds...)
diagitensor([ElT::Type, ]v::AbstractVector, inds...)
Make a sparse ITensor with non-zero elements only along the diagonal. In general, the diagonal elements will be those stored in v
and the ITensor will have element type eltype(v)
, unless specified explicitly by ElT
. The storage will have NDTensors.Diag
type.
In the case when eltype(v) isa Union{Int, Complex{Int}}
, by default it will be converted to float(v)
. Note that this behavior is subject to change in the future.
The version diag_itensor
will never output an ITensor whose storage data is an alias of the input vector data.
The version diagitensor
might output an ITensor whose storage data is an alias of the input vector data in order to minimize operations.
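A minimal sketch of the two variants (the values are arbitrary):
i = Index(3, "i")
j = Index(3, "j")
v = [1.0, 2.0, 3.0]
D  = diag_itensor(v, i, j)   # never aliases the data in v
Dv = diagitensor(v, i, j)    # may alias the data in v
D[i => 2, j => 2]            # 2.0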
ITensors.diag_itensor
— Method
diag_itensor([ElT::Type, ]x::Number, inds...)
diagitensor([ElT::Type, ]x::Number, inds...)
Make a sparse ITensor with non-zero elements only along the diagonal. In general, the diagonal elements will be set to the value x
and the ITensor will have element type eltype(x)
, unless specified explicitly by ElT
. The storage will have NDTensors.Diag
type.
In the case when x isa Union{Int, Complex{Int}}
, by default it will be converted to float(x)
. Note that this behavior is subject to change in the future.
ITensors.delta
— Method
delta([::Type{ElT} = Float64, ]inds)
delta([::Type{ElT} = Float64, ]inds::Index...)
Make a uniform diagonal ITensor with all diagonal elements one(ElT)
. Only a single diagonal element is stored.
This function has an alias δ
.
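As an illustration, contracting with a delta tensor is a common way to replace one Index with another of the same dimension (a sketch, with arbitrary index names):
i = Index(2, "i")
j = Index(2, "j")
A = random_itensor(i)
B = A * delta(i, j)   # contracts over i, effectively renaming it to j; δ(i, j) is equivalent
hasind(B, j)          # true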
QN Diagonal constructors
ITensors.diag_itensor
— Method
diag_itensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]is)
diag_itensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]is::Index...)
Make an ITensor with storage type NDTensors.DiagBlockSparse
with elements zero(ElT)
. The ITensor only has diagonal blocks consistent with the specified flux
.
If the element type is not specified, it defaults to Float64
. If the flux is not specified, it defaults to QN()
.
ITensors.delta
— Method
delta([::Type{ElT} = Float64, ][flux::QN = QN(), ]is)
delta([::Type{ElT} = Float64, ][flux::QN = QN(), ]is::Index...)
Make an ITensor with storage type NDTensors.DiagBlockSparse
with uniform elements one(ElT)
. The ITensor only has diagonal blocks consistent with the specified flux
.
If the element type is not specified, it defaults to Float64
. If the flux is not specified, it defaults to QN()
.
Convert to Array
Core.Array
— Method
Array{ElT, N}(T::ITensor, i::Index...)
Array{ElT}(T::ITensor, i::Index...)
Array(T::ITensor, i::Index...)
Matrix{ElT}(T::ITensor, row_i::Index, col_i::Index)
Matrix(T::ITensor, row_i::Index, col_i::Index)
Vector{ElT}(T::ITensor)
Vector(T::ITensor)
Given an ITensor T
with indices i...
, returns an Array with a copy of the ITensor's elements. The order in which the indices are provided indicates the order of the data in the resulting Array.
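A minimal sketch showing how the index ordering determines the layout of the returned Array (index names are arbitrary):
i = Index(2, "i")
j = Index(3, "j")
T = random_itensor(i, j)
M  = Matrix(T, i, j)             # 2×3 Matrix with rows indexed by i
Mt = Matrix(T, j, i)             # 3×2 Matrix, the transposed ordering
M[1, 2] == T[i => 1, j => 2]     # true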
NDTensors.array
— Method
array(T::ITensor, inds...)
Convert an ITensor T
to an Array.
The ordering of the elements in the Array are specified by the input indices inds
. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.
Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal
type for an ITensor with Diag
storage. The specific storage type shouldn't be relied upon.
NDTensors.matrix
— Method
matrix(T::ITensor, inds...)
Convert an ITensor T
to a Matrix.
The ordering of the elements in the Matrix are specified by the input indices inds
. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.
Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal
type for an ITensor with Diag
storage. The specific storage type shouldn't be relied upon.
NDTensors.vector
— Method
vector(T::ITensor, inds...)
Convert an ITensor T
to a Vector.
The ordering of the elements in the Array are specified by the input indices inds
. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.
Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal
type for an ITensor with Diag
storage. The specific storage type shouldn't be relied upon.
NDTensors.array
— Method
array(T::ITensor)
Given an ITensor T
, returns an Array with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.
The ordering of the elements in the Array, in terms of which Index is treated as the row versus column, depends on the internal layout of the ITensor.
This method is intended for developer use only and not recommended for use in ITensor applications unless you know what you are doing (for example you are certain of the memory ordering of the ITensor because you permuted the indices into a certain order).
NDTensors.matrix
— Method
matrix(T::ITensor)
Given an ITensor T
with two indices, returns a Matrix with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.
The ordering of the elements in the Matrix, in terms of which Index is treated as the row versus column, depends on the internal layout of the ITensor.
This method is intended for developer use only and not recommended for use in ITensor applications unless you know what you are doing (for example you are certain of the memory ordering of the ITensor because you permuted the indices into a certain order).
NDTensors.vector
— Method
vector(T::ITensor)
Given an ITensor T
with one index, returns a Vector with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.
Getting and setting elements
Base.getindex
— Method
getindex(T::ITensor, ivs...)
Get the specified element of the ITensor, using a list of IndexVal
s or Pair{<:Index, Int}
.
Example
i = Index(2; tags = "i")
A = ITensor(2.0, i, i')
A[i => 1, i' => 2] # 2.0, same as: A[i' => 2, i => 1]
Base.setindex!
— Method
setindex!(T::ITensor, x::Number, ivs...)
setindex!(T::ITensor, x::Number, I::Integer...)
setindex!(T::ITensor, x::Number, I::CartesianIndex)
Set the specified element of the ITensor, using a list of Pair{<:Index, Integer}
(or IndexVal
).
If just integers are used, set the specified element of the ITensor using the internal Index ordering of the ITensor (advanced usage only; only use this if you know the exact ordering of the indices).
Example
i = Index(2; tags = "i")
A = ITensor(i, i')
A[i => 1, i' => 2] = 1.0 # same as: A[i' => 2, i => 1] = 1.0
A[1, 2] = 1.0 # same as: A[i => 1, i' => 2] = 1.0
# Some simple slicing is also supported
A[i => 2, i' => :] = [2.0 3.0]
A[2, :] = [2.0 3.0]
Properties
NDTensors.inds
— Method
inds(T::ITensor)
Return the indices of the ITensor as a Tuple.
NDTensors.ind
— Method
ind(T::ITensor, i::Int)
Get the Index of the ITensor along dimension i.
ITensors.dir
— Method
dir(A::ITensor, i::Index)
Return the direction of the Index i
in the ITensor A
.
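A short sketch of these accessors; the dir example assumes a QN-conserving Index (names are arbitrary):
i = Index(2, "i")
j = Index(3, "j")
A = random_itensor(i, j)
inds(A)     # (i, j), as a Tuple
ind(A, 2)   # j
q = Index([QN(0) => 1, QN(1) => 2], "q")
T = ITensor(QN(0), q', dag(q))
dir(T, q')  # the arrow direction of q' in T (e.g. Out here)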
Priming and tagging
ITensors.prime
— Method
prime[!](A::ITensor, plinc::Int = 1; <keyword arguments>) -> ITensor
prime(inds, plinc::Int = 1; <keyword arguments>) -> IndexSet
Increase the prime level of the indices of an ITensor or collection of indices.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.setprime
— Method
setprime[!](A::ITensor, plev::Int; <keyword arguments>) -> ITensor
setprime(inds, plev::Int; <keyword arguments>) -> IndexSet
Set the prime level of the indices of an ITensor or collection of indices.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.noprime
— Method
noprime[!](A::ITensor; <keyword arguments>) -> ITensor
noprime(inds; <keyword arguments>) -> IndexSet
Set the prime level of the indices of an ITensor or collection of indices to zero.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.mapprime
— Method
replaceprime[!](A::ITensor, plold::Int, plnew::Int; <keyword arguments>) -> ITensor
replaceprime[!](A::ITensor, plold => plnew; <keyword arguments>) -> ITensor
mapprime[!](A::ITensor, <arguments>; <keyword arguments>) -> ITensor
replaceprime(inds, plold::Int, plnew::Int; <keyword arguments>)
replaceprime(inds::IndexSet, plold => plnew; <keyword arguments>)
mapprime(inds, <arguments>; <keyword arguments>)
Set the prime level of the indices of an ITensor or collection of indices with prime level plold
to plnew
.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.swapprime
— Method
swapprime[!](A::ITensor, pl1::Int, pl2::Int; <keyword arguments>) -> ITensor
swapprime[!](A::ITensor, pl1 => pl2; <keyword arguments>) -> ITensor
swapprime(inds, pl1::Int, pl2::Int; <keyword arguments>)
swapprime(inds, pl1 => pl2; <keyword arguments>)
Set the prime level of the indices of an ITensor or collection of indices with prime level pl1
to pl2
, and those with prime level pl2
to pl1
.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.TagSets.addtags
— Method
addtags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
addtags(inds, ts::String; <keyword arguments>)
Add the tags ts
to the indices of an ITensor or collection of indices.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.TagSets.removetags
— Method
removetags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
removetags(inds, ts::String; <keyword arguments>)
Remove the tags ts
from the indices of an ITensor or collection of indices.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.TagSets.replacetags
— Method
replacetags[!](A::ITensor, tsold::String, tsnew::String; <keyword arguments>) -> ITensor
replacetags(is::IndexSet, tsold::String, tsnew::String; <keyword arguments>) -> IndexSet
Replace the tags tsold
with tsnew
for the indices of an ITensor.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.settags
— Method
settags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
settags(is::IndexSet, ts::String; <keyword arguments>) -> IndexSet
Set the tags of the indices of an ITensor or IndexSet to ts
.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
ITensors.swaptags
— Method
swaptags[!](A::ITensor, ts1::String, ts2::String; <keyword arguments>) -> ITensor
swaptags(is::IndexSet, ts1::String, ts2::String; <keyword arguments>) -> IndexSet
Swap the tags ts1
with ts2
for the indices of an ITensor.
Optionally, only modify the indices with the specified keyword arguments.
Arguments
- tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
- plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.
The ITensor functions come in two versions, f
and f!
. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).
Index collections set operations
ITensors.commoninds
— Function
commoninds(A, B; kwargs...)
Return a Vector with indices that are common between the indices of A
and B
(the set intersection, similar to Base.intersect
).
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.commonind
— Function
commonind(A, B; kwargs...)
Return the first Index
common between the indices of A
and B
.
See also commoninds
.
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
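A short sketch of commoninds and commonind (index names are arbitrary):
i = Index(2, "i")
j = Index(3, "j")
k = Index(4, "k")
A = random_itensor(i, j)
B = random_itensor(j, k)
commoninds(A, B)         # Index[j]
commonind(A, B)          # j
commoninds(A, prime(B))  # empty, since priming makes j no longer match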
ITensors.uniqueinds
— Function
uniqueinds(A, B; kwargs...)
Return a Vector with indices that are unique to the set of indices of A
and not in B
(the set difference, similar to Base.setdiff
).
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.uniqueind
— Function
uniqueind(A, B; kwargs...)
Return the first Index
unique to the set of indices of A
and not in B
.
See also uniqueinds
.
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.noncommoninds
— Function
noncommoninds(A, B; kwargs...)
Return a Vector with indices that are not common between the indices of A
and B
(the symmetric set difference, similar to Base.symdiff
).
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.noncommonind
— Function
noncommonind(A, B; kwargs...)
Return the first Index
not common between the indices of A
and B
.
See also noncommoninds
.
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.unioninds
— Function
unioninds(A, B; kwargs...)
Return a Vector with indices that are the union of the indices of A
and B
(the set union, similar to Base.union
).
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.unionind
— Function
unionind(A, B; kwargs...)
Return the first Index
in the union of the indices of A
and B
.
See also unioninds
.
Optional keyword arguments:
- tags::String - a tag name or comma separated list of tag names that the returned indices must all have
- plev::Int - common prime level that the returned indices must all have
- inds - Index or collection of indices. Returned indices must come from this set of indices.
ITensors.hascommoninds
— Function
hascommoninds(A, B; kwargs...)
hascommoninds(B; kwargs...) -> f::Function
Check if the ITensors or sets of indices A
and B
have common indices.
If only one ITensor or set of indices B
is passed, return a function f
such that f(A) = hascommoninds(A, B; kwargs...)
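A short sketch, including the partially applied form (index names are arbitrary):
i = Index(2, "i")
j = Index(3, "j")
k = Index(4, "k")
A = random_itensor(i, j)
B = random_itensor(j, k)
hascommoninds(A, B)    # true, A and B share j
f = hascommoninds(B)   # function form
f(A)                   # true, same as hascommoninds(A, B)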
Index Manipulations
ITensors.replaceind
— Method
replaceind[!](A::ITensor, i1::Index, i2::Index) -> ITensor
Replace the Index i1
with the Index i2
in the ITensor.
The indices must have the same space (i.e. the same dimension and QNs, if applicable).
ITensors.replaceinds
— Method
replaceinds(A::ITensor, inds1, inds2) -> ITensor
replaceinds!(A::ITensor, inds1, inds2)
Replace the Index inds1[n]
with the Index inds2[n]
in the ITensor, where n
runs from 1
to length(inds1) == length(inds2)
.
The indices must have the same space (i.e. the same dimension and QNs, if applicable).
The storage of the ITensor is not modified or copied (the output ITensor is a view of the input ITensor).
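A minimal sketch of both functions (the replacement Index must have the same dimension):
i = Index(2, "i")
j = Index(3, "j")
k = Index(2, "k")                    # same dimension as i
A = random_itensor(i, j)
B = replaceind(A, i, k)              # B has indices (k, j); the storage is shared with A
C = replaceinds(A, (i, j), (k, j'))  # replace i with k and j with j'
hasinds(C, k, j')                    # true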
ITensors.swapind
— Method
swapind(A::ITensor, i1::Index, i2::Index) -> ITensor
swapind!(A::ITensor, i1::Index, i2::Index)
Swap the Index i1
with the Index i2
in the ITensor.
The indices must have the same space (i.e. the same dimension and QNs, if applicable).
ITensors.swapinds
— Method
swapinds(A::ITensor, inds1, inds2) -> ITensor
swapinds!(A::ITensor, inds1, inds2)
Swap the Index inds1[n]
with the Index inds2[n]
in the ITensor, where n
runs from 1
to length(inds1) == length(inds2)
.
The indices must have the same space (i.e. the same dimension and QNs, if applicable).
The storage of the ITensor is not modified or copied (the output ITensor is a view of the input ITensor).
Math operations
Base.:*
— Method
A::ITensor * B::ITensor
contract(A::ITensor, B::ITensor)
Contract ITensors A and B to obtain a new ITensor. This contraction *
operator finds all matching indices common to A and B and sums over them, such that the result will have only the unique indices of A and B. To prevent indices from matching, their prime level or tags can be modified such that they no longer compare equal - for more information see the documentation on Index objects.
Examples
i = Index(2,"index_i"); j = Index(4,"index_j"); k = Index(3,"index_k")
A = random_itensor(i,j)
B = random_itensor(j,k)
C = A * B # contract over Index j
A = random_itensor(i,i')
B = random_itensor(i,i'')
C = A * B # contract over Index i
A = random_itensor(i)
B = random_itensor(j)
C = A * B # outer product of A and B, no contraction
A = random_itensor(i,j,k)
B = random_itensor(k,i,j)
C = A * B # inner product of A and B, all indices contracted
ITensors.dag
— Method
dag(T::ITensor; allow_alias = true)
Complex conjugate the elements of the ITensor T
and dagger the indices.
By default, an alias of the ITensor is returned (i.e. the output ITensor may share data with the input ITensor). If allow_alias = false
, an alias is never returned.
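A minimal sketch (a complex element type is used so the conjugation is visible):
i = Index(2, "i")
A = random_itensor(ComplexF64, i', i)
Ad  = dag(A)                        # conjugated elements, daggered (reversed-arrow) indices
Ad2 = dag(A; allow_alias = false)   # guaranteed not to share data with A
Ad[i' => 1, i => 1] == conj(A[i' => 1, i => 1])   # true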
ITensors.directsum
— Method
directsum(A::Pair{ITensor}, B::Pair{ITensor}, ...; tags)
directsum(output_inds, A::Pair{ITensor}, B::Pair{ITensor}, ...; tags)
Given a list of pairs of ITensors and indices, perform a partial direct sum of the tensors over the specified indices. Indices that are not specified to be summed must match between the tensors.
(Note: Pair{ITensor}
in Julia is short for Pair{ITensor,<:Any}
which means any pair T => x
where T
is an ITensor.)
If all indices are specified then the operation is equivalent to creating a block diagonal tensor.
Returns the ITensor representing the partial direct sum as well as the new direct summed indices. The tags of the direct summed indices are specified by the keyword arguments.
Optionally, pass the new direct summed indices of the output tensor as the first argument (either a single Index or a collection), which must be proper direct sums of the input indices that are specified to be direct summed.
See Section 2.3 of https://arxiv.org/abs/1405.7786 for a definition of a partial direct sum of tensors.
Examples
x = Index(2, "x")
i1 = Index(3, "i1")
j1 = Index(4, "j1")
i2 = Index(5, "i2")
j2 = Index(6, "j2")
A1 = random_itensor(x, i1)
A2 = random_itensor(x, i2)
S, s = directsum(A1 => i1, A2 => i2)
dim(s) == dim(i1) + dim(i2)
i1i2 = directsum(i1, i2)
S = directsum(i1i2, A1 => i1, A2 => i2)
hasind(S, i1i2)
A3 = random_itensor(x, j1)
S, s = directsum(A1 => i1, A2 => i2, A3 => j1)
dim(s) == dim(i1) + dim(i2) + dim(j1)
A1 = random_itensor(i1, x, j1)
A2 = random_itensor(x, j2, i2)
S, s = directsum(A1 => (i1, j1), A2 => (i2, j2); tags = ["sum_i", "sum_j"])
length(s) == 2
dim(s[1]) == dim(i1) + dim(i2)
dim(s[2]) == dim(j1) + dim(j2)
Base.exp
— Method
exp(A::ITensor, Linds=Rinds', Rinds=inds(A,plev=0); ishermitian = false)
Compute the exponential of the tensor A
by treating it as a matrix $A_{lr}$ with the left index l
running over all indices in Linds
and r
running over all indices in Rinds
.
Only accepts index lists Linds
,Rinds
such that: (1) length(Linds) + length(Rinds) == length(inds(A))
(2) length(Linds) == length(Rinds)
(3) For each pair of indices (Linds[n],Rinds[n])
, Linds[n]
and Rinds[n]
represent the same Hilbert space (the same QN structure in the QN case, or just the same length in the dense case), and appear in A
with opposite directions.
When ishermitian=true
the exponential of Hermitian(A_{lr})
is computed internally.
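A minimal sketch, relying on the default choice Linds = Rinds' (index names are arbitrary):
i = Index(2, "i")
A = random_itensor(i', i)
expA = exp(A)                               # same as exp(A, (i',), (i,))
H = 0.5 * (A + swapprime(dag(A), 0 => 1))   # symmetrize to get a Hermitian tensor
expH = exp(H; ishermitian = true)           # uses the specialized Hermitian routine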
LinearAlgebra.nullspace
— Method
nullspace(T::ITensor, left_inds...; tags="n", atol=1E-12, kwargs...)
Viewing the ITensor T
as a matrix with the provided left_inds
viewed as the row space and remaining indices viewed as the right indices or column space, the nullspace
function computes the right null space. That is, it will return a tensor N
acting on the right indices of T
such that T*N
is zero. The returned tensor N
will also have a new index with the label "n" which indexes through the 'vectors' in the null space.
For example, if T
has the indices i,j,k
, calling N = nullspace(T,i,k)
returns N
with index j
such that
___ ___
i --| | | |
| T |--j--| N |--n ≈ 0
k --| | | |
--- ---
The index n
can be obtained by calling n = uniqueind(N, T).
Note that the implementation of this function is subject to change in the future, in which case the precise atol
value that gives a certain null space size may change in future versions of ITensor.
Keyword arguments:
- atol::Float64 = 1E-12: singular values of T†*T below this value define the null space
- tags::String = "n": choose the tags of the index selecting elements of the null space
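A minimal sketch, with dimensions chosen so that a nontrivial right null space exists:
i = Index(2, "i")
k = Index(2, "k")
j = Index(6, "j")
T = random_itensor(i, j, k)
N = nullspace(T, i, k)   # right null space over the remaining Index j
n = uniqueind(N, T)      # the new Index tagged "n"
norm(T * N)              # ≈ 0 up to roundoff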
Decompositions
LinearAlgebra.svd
— Method
svd(A::ITensor, inds::Index...; <keyword arguments>)
Singular value decomposition (SVD) of an ITensor A
, computed by treating the "left indices" provided collectively as a row index, and the remaining "right indices" as a column index (matricization of a tensor).
The first three return arguments are U
, S
, and V
, such that A ≈ U * S * V
.
Whether or not the SVD performs a truncation depends on the keyword arguments provided.
If the left or right set of indices is empty, all input indices are put on V
or U
respectively. To specify an empty set of left indices, you must explicitly use svd(A, ())
(svd(A)
is currently undefined).
Examples
Computing the SVD of an order-three ITensor, such that the indices i and k end up on U and j ends up on V
i = Index(2)
j = Index(5)
k = Index(2)
A = random_itensor(i, j, k)
U, S, V = svd(A, i, k);
@show norm(A - U * S * V) <= 10 * eps() * norm(A)
The following code will truncate the last 2 singular values, since the total number of singular values is 4. The norm of the difference with the original tensor will be the square root of the sum of the squares of the singular values that get truncated.
Utrunc, Strunc, Vtrunc = svd(A, i, k; maxdim=2);
@show norm(A - Utrunc * Strunc * Vtrunc) ≈ sqrt(S[3, 3]^2 + S[4, 4]^2)
Alternatively we can specify that we want to truncate the weights of the singular values up to a certain cutoff, so the total error will be no larger than the cutoff.
Utrunc2, Strunc2, Vtrunc2 = svd(A, i, k; cutoff=1e-10);
@show norm(A - Utrunc2 * Strunc2 * Vtrunc2) <= 1e-10
Keywords
- maxdim::Int: the maximum number of singular values to keep.
- mindim::Int: the minimum number of singular values to keep.
- cutoff::Float64: set the desired truncation error of the SVD, by default defined as the sum of the squares of the smallest singular values.
- lefttags::String = "Link,u": set the tags of the Index shared by U and S.
- righttags::String = "Link,v": set the tags of the Index shared by S and V.
- alg::String = "divide_and_conquer": options:
  - "divide_and_conquer" - a divide-and-conquer algorithm (LAPACK's gesdd). Fast, but may lead to some inaccurate singular values for very ill-conditioned matrices. Also may sometimes fail to converge, leading to errors (in which case "qr_iteration" or "recursive" can be tried).
  - "qr_iteration" - typically slower, but more accurate for very ill-conditioned matrices compared to "divide_and_conquer" (LAPACK's gesvd).
  - "recursive" - ITensor's custom svd. Very reliable, but may be slow if high precision is needed. To get an svd of a matrix A, an eigendecomposition of $A^{\dagger} A$ is used to compute U, and then a qr of $A^{\dagger} U$ is used to compute V. This is performed recursively to compute small singular values.
- use_absolute_cutoff::Bool = false: set if all probability weights below the cutoff value should be discarded, rather than the sum of discarded weights.
- use_relative_cutoff::Bool = true: set if the singular values should be normalized for the sake of truncation.
- min_blockdim::Int = 0: for SVD of block-sparse or QN ITensors, require that the number of singular values kept be greater than or equal to this value when possible.
LinearAlgebra.eigen
— Method
eigen(A::ITensor[, Linds, Rinds]; <keyword arguments>)
Eigendecomposition of an ITensor A
, computed by treating the "left indices" Linds
provided collectively as a row index, and remaining "right indices" Rinds
as a column index (matricization of a tensor).
If no indices are provided, pairs of primed and unprimed indices are searched for, with Linds
taken to be the primed indices and Rinds
taken to be the unprimed indices.
The return arguments are the eigenvalues D
and eigenvectors U
as tensors, such that A * U ∼ U * D
(more precisely they are approximately equal up to proper replacements of indices, see the example for details).
Whether or not eigen
performs a truncation depends on the keyword arguments provided. Note that truncation is only well defined for positive semidefinite matrices.
Arguments
- `maxdim::Int`: the maximum number of singular values to keep.
- `mindim::Int`: the minimum number of singular values to keep.
- `cutoff::Float64`: set the desired truncation error of the eigenvalues,
by default defined as the sum of the squares of the smallest eigenvalues.
For now truncation is only well defined for positive semi-definite
eigenspectra.
- `ishermitian::Bool = false`: specify if the matrix is Hermitian, in which
case a specialized diagonalization routine will be used and it is
guaranteed that real eigenvalues will be returned.
- `plev::Int = 0`: set the prime level of the Indices of `D`. Default prime
levels are subject to change.
- `leftplev::Int = plev`: set the prime level of the Index unique to `D`.
Default prime levels are subject to change.
- `rightplev::Int = leftplev+1`: set the prime level of the Index shared
by `D` and `U`. Default tags are subject to change.
- `tags::String = "Link,eigen"`: set the tags of the Indices of `D`.
Default tags are subject to change.
- `lefttags::String = tags`: set the tags of the Index unique to `D`.
Default tags are subject to change.
- `righttags::String = tags`: set the tags of the Index shared by `D` and `U`.
Default tags are subject to change.
- `use_absolute_cutoff::Bool = false`: set if all probability weights below
the `cutoff` value should be discarded, rather than the sum of discarded
weights.
- `use_relative_cutoff::Bool = true`: set if the singular values should
be normalized for the sake of truncation.
Examples
i, j, k, l = Index(2, "i"), Index(2, "j"), Index(2, "k"), Index(2, "l")
A = random_itensor(i, j, k, l)
Linds = (i, k)
Rinds = (j, l)
D, U = eigen(A, Linds, Rinds)
dl, dr = uniqueind(D, U), commonind(D, U)
Ul = replaceinds(U, (Rinds..., dr) => (Linds..., dl))
A * U ≈ Ul * D # true
LinearAlgebra.factorize
— Method
factorize(A::ITensor, Linds::Index...; <keyword arguments>)
Perform a factorization of A
into ITensors L
and R
such that A ≈ L * R
.
Arguments
- ortho::String = "left": choose the orthogonality properties of the factorization.
  - "left": the left factor L is an orthogonal basis such that L * dag(prime(L, commonind(L,R))) ≈ I.
  - "right": the right factor R forms an orthogonal basis.
  - "none": neither factor forms an orthogonal basis, and in general they are made as symmetric as possible (depending on the decomposition used).
- which_decomp::Union{String, Nothing} = nothing: choose what kind of decomposition is used.
  - nothing: choose the decomposition automatically based on the other arguments. For example, when nothing is chosen and ortho = "left" or "right" and a cutoff is provided, svd or eigen is used depending on the provided cutoff (eigen is only used when the cutoff is greater than 1e-12, since it has a lower precision). When no truncation is requested, qr is used for dense ITensors and svd for block-sparse ITensors (in the future qr will also be used for block-sparse ITensors in this case).
  - "svd": L = U and R = S * V for ortho = "left", L = U * S and R = V for ortho = "right", and L = U * sqrt.(S) and R = sqrt.(S) * V for ortho = "none". To control which svd algorithm is chosen, use the svd_alg keyword argument. See the documentation for svd for the supported algorithms, which are the same as those accepted by the alg keyword argument.
  - "eigen": L = U and $R = U^{\dagger} A$ where U is determined from the eigendecomposition $A A^{\dagger} = U D U^{\dagger}$ for ortho = "left" (and vice versa for ortho = "right"). "eigen" is not supported for ortho = "none".
  - "qr": L = Q and R is an upper-triangular matrix when ortho = "left", and R = Q and L is a lower-triangular matrix when ortho = "right" (currently supported for dense ITensors only). In the future, other decompositions like QR (for block-sparse ITensors), polar, Cholesky, LU, etc. are expected to be supported.
For truncation arguments, see: svd
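A minimal usage sketch for a dense ITensor (only the ortho keyword is specified; other keywords keep their defaults):
i = Index(2, "i")
j = Index(4, "j")
k = Index(3, "k")
A = random_itensor(i, j, k)
L, R = factorize(A, i, k; ortho = "left")   # L carries (i, k) plus a new link Index
l = commonind(L, R)                         # the Index connecting L and R
norm(A - L * R)                             # ≈ 0 up to roundoff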
Memory operations
ITensors.permute
— Method
permute(T::ITensor, inds...; allow_alias = false)
Return a new ITensor T
with indices permuted according to the input indices inds
. The storage of the ITensor is permuted accordingly.
If called with allow_alias = true
, it avoids copying data if possible. Therefore, it may return an alias of the input ITensor (an ITensor that shares the same data), such as if the permutation turns out to be trivial.
By default, allow_alias = false
, and it never returns an alias of the input ITensor.
Examples
i = Index(2, "index_i"); j = Index(4, "index_j"); k = Index(3, "index_k");
T = random_itensor(i, j, k)
pT_1 = permute(T, k, i, j)
pT_2 = permute(T, j, i, k)
pT_noalias_1 = permute(T, i, j, k)
pT_noalias_1[1, 1, 1] = 12
T[1, 1, 1] != pT_noalias_1[1, 1, 1]
pT_noalias_2 = permute(T, i, j, k; allow_alias = false)
pT_noalias_2[1, 1, 1] = 12
T[1, 1, 1] != pT_noalias_2[1, 1, 1]
pT_alias = permute(T, i, j, k; allow_alias = true)
pT_alias[1, 1, 1] = 12
T[1, 1, 1] == pT_alias[1, 1, 1]
NDTensors.dense
— Method
dense(T::ITensor)
Make a new ITensor where the storage is the closest Dense storage, avoiding allocating new data if possible. For example, an ITensor with Diag storage will become Dense storage, filled with zeros except for the diagonal values.
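A minimal sketch, converting Diag storage to Dense storage:
i = Index(2, "i")
D = delta(i', i)      # uniform Diag storage
T = dense(D)          # same elements, now Dense storage with explicit zeros off the diagonal
T[i' => 1, i => 2]    # 0.0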
NDTensors.denseblocks
— Method
denseblocks(T::ITensor)
Make a new ITensor where any blocks which have a sparse format, such as diagonal sparsity, are made dense while still preserving the outer block-sparse structure. This method avoids allocating new data if possible.
For example, an ITensor with DiagBlockSparse storage will have BlockSparse storage afterwards.