MPO_new
The main exported function is MPO_new, which takes an OpSum and transforms it into an MPO. The input has three constraints:
The operator must be expressed as a sum of products of single-site operators. For example, a CNOT could not appear in the sum since it is a two-site operator.
When dealing with fermionic systems, the parity of each term in the sum must be even. That is, the combined number of creation and annihilation operators in each term must be even. It should be possible to relax this constraint.
Each term in the sum-of-products representation can only have a single operator acting on any one site. For example, a term such as $\mathbf{X}^{(1)} \mathbf{X}^{(1)}$ is not allowed. However, there is a preprocessing utility that can automatically replace $\mathbf{X}^{(1)} \mathbf{X}^{(1)}$ with $\mathbf{I}^{(1)}$.
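As a sketch of an input satisfying all three constraints (the model, couplings, and helper name here are illustrative, not part of the package):

```julia
using ITensors, ITensorMPS
using ITensorMPOConstruction

# A transverse-field Ising Hamiltonian expressed as a sum of products of
# single-site operators, then converted to an MPO with MPO_new.
function ising_mpo(N::Int)
  sites = siteinds("S=1/2", N)
  os = OpSum()
  for j in 1:(N - 1)
    os += -1.0, "Sz", j, "Sz", j + 1  # product of two single-site operators
  end
  for j in 1:N
    os += -0.5, "Sx", j               # one on-site operator per term
  end
  return MPO_new(os, sites)
end
```

Each term acts on every site at most once, so no preprocessing is required.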
ITensorMPOConstruction.MPO_new — Function
MPO_new(ValType, os::OpIDSum, sites; basis_op_cache_vec=nothing,
check_for_errors=true, checkflux=true, splitblocks=true, output_level=0,
kwargs...) -> MPO

Construct an MPO from an OpIDSum.
Before construction, os can optionally be rewritten into the basis defined by basis_op_cache_vec, and basic consistency checks can be run with check_for_errors=true.
Keyword arguments:
- `basis_op_cache_vec=nothing`: A list of operators to use as a basis for each site. Products of operators on the same site are expressed as one of these basis operators. If `nothing`, a basis is inferred from the input and no basis transformation occurs.
- `check_for_errors=true`: Check the input `OpIDSum` for errors. This can be expensive for larger problems.
- `checkflux=true`: Check that the resulting MPO tensors all have a well-defined flux. This can be expensive for larger problems when `splitblocks=true`.
- `splitblocks=true`: Split the QN sectors into blocks of size one. This affects the sparsity of the resulting MPO, but can also slow down construction.
- `combine_qn_sectors=false`: When `true`, blocks of the MPO tensors corresponding to the same quantum numbers are merged into a single block. This can decrease the resulting sparsity, but can offer performance gains.
- `alg="QR"`: The decomposition algorithm to use. The options are `"QR"`, which uses the rank decomposition algorithm based on the sparse QR decomposition, and `"VC"`, which uses the vertex cover algorithm. When the vertex cover algorithm is used, all tolerance arguments are ignored.
- `tol=1`: A multiplicative modifier to the default tolerance used in SPQR's sparse QR decomposition; see the SPQR user guide, Section 2.3. The default tolerance depends on the input matrix, so a different tolerance is used for each decomposition. In the cases we have examined, the default tolerance works well for producing accurate MPOs.
- `absolute_tol=false`: If `true`, override the default adaptive tolerance scheme outlined above and use the value of `tol` as the single tolerance for each decomposition.
- `call_back`: A function that is called after constructing the MPO tensor at `cur_site`. Called as `call_back(n, offsets, block_sparse_matrices, sites, llinks, g, op_cache_vec)`. Primarily used for writing checkpoints to disk for large calculations.
- `output_level=0`: Controls progress and timing output.
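For example, a hedged sketch of a keyword-customized call (here `os` and `sites` stand for an `OpIDSum` or `OpSum` and site indices as described above):

```julia
# Real-valued MPO, skipping the potentially expensive flux check and
# printing progress information during construction.
H = MPO_new(Float64, os, sites; splitblocks=true, checkflux=false, output_level=1)
```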
MPO_new(os::OpIDSum, sites; kwargs...) -> MPO

Construct an MPO from os, automatically choosing Float64 or ComplexF64 from its coefficients and cached local operator matrices.
MPO_new(ValType, os::OpSum, sites; kwargs...) -> MPO

Convert an ITensor OpSum to OpIDSum form and construct an MPO with element type ValType. Keyword arguments are forwarded to the OpIDSum method.
MPO_new(os::OpSum, sites; kwargs...) -> MPO

Convert an ITensor OpSum to OpIDSum form and construct an MPO, inferring the numeric element type automatically.
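The four methods differ only in whether the element type is given explicitly and whether the input is already in `OpIDSum` form. A minimal sketch, assuming `os::OpSum` and `sites` as above:

```julia
H_typed = MPO_new(Float64, os, sites)  # force Float64 storage
H_auto  = MPO_new(os, sites)           # element type inferred from os
```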
ITensorMPOConstruction.resume_MPO_construction! — Function
resume_MPO_construction!(
n_init, offsets, block_sparse_matrices, sites, llinks, g, op_cache_vec;
combine_qn_sectors=false, alg="QR", tol=1, absolute_tol=false,
call_back=..., output_level=0
) -> Nothing

Continue MPO construction starting from site n_init using the current graph state g.
This is the low-level driver underlying MPO_new. At each site it factors the current graph with at_site!, stores component offsets and intermediate block-sparse local tensor data in offsets[n] and block_sparse_matrices[n], and advances to the next-site graph. The callback is invoked after each site is completed, before moving on to the next site.
offsets and block_sparse_matrices are preallocated vectors with one entry per site. llinks is a vector of length length(sites) + 1 whose first link is already initialized; this routine fills in the outgoing link for each processed site.
Keyword arguments:
- `combine_qn_sectors=false`: When `true`, blocks of the MPO tensors corresponding to the same quantum numbers are merged into a single block. This can decrease the resulting sparsity, but can offer performance gains.
- `alg="QR"`: The decomposition algorithm to use. The options are `"QR"`, which uses the rank decomposition algorithm based on the sparse QR decomposition, and `"VC"`, which uses the vertex cover algorithm. When the vertex cover algorithm is used, all tolerance arguments are ignored.
- `tol=1`: A multiplicative modifier to the default tolerance used in SPQR's sparse QR decomposition; see the SPQR user guide, Section 2.3. The default tolerance depends on the input matrix, so a different tolerance is used for each decomposition. In the cases we have examined, the default tolerance works well for producing accurate MPOs.
- `absolute_tol=false`: If `true`, override the default adaptive tolerance scheme outlined above and use the value of `tol` as the single tolerance for each decomposition.
- `call_back`: A function that is called after constructing the MPO tensor at `cur_site`. Called as `call_back(n, offsets, block_sparse_matrices, sites, llinks, g, op_cache_vec)`. Primarily used for writing checkpoints to disk for large calculations.
- `output_level=0`: Controls progress and timing output.
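A hedged sketch of a checkpointing callback (the file names and the choice of Serialization are our assumptions; only the callback signature comes from this documentation):

```julia
using Serialization

# Hypothetical checkpoint writer: after each site is completed, dump the
# state that would be needed to resume construction later.
function checkpoint_call_back(n, offsets, block_sparse_matrices, sites, llinks, g, op_cache_vec)
  serialize("mpo_checkpoint_site_$n.jls",
            (; n, offsets, block_sparse_matrices, llinks, g, op_cache_vec))
  return nothing
end

H = MPO_new(os, sites; call_back=checkpoint_call_back)
```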
ITensorMPOConstruction.instantiate_MPO — Function
instantiate_MPO(offsets, block_sparse_matrices, sites, llinks; splitblocks, checkflux) -> MPO

Convert intermediate block-sparse MPO tensor data into an ITensor MPO.
For each site n, this calls to_ITensor(offsets[n], block_sparse_matrices[n], llinks[n], llinks[n + 1], sites[n]; splitblocks) to assemble the local tensor. Afterward, the dummy left and right boundary links are contracted away. If checkflux=true, ITensors.checkflux is run on each resulting MPO tensor.
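Putting the two low-level functions together, a restart from a checkpoint might look like the following sketch (the file name and tuple layout mirror whatever the call_back wrote and are assumptions, not part of the API):

```julia
using Serialization

# Hypothetical restart: reload the state saved by a checkpointing call_back,
# finish construction from the next site, then assemble the final MPO.
state = deserialize("mpo_checkpoint_site_5.jls")
resume_MPO_construction!(
  state.n + 1, state.offsets, state.block_sparse_matrices, sites,
  state.llinks, state.g, state.op_cache_vec,
)
H = instantiate_MPO(state.offsets, state.block_sparse_matrices, sites, state.llinks;
                    splitblocks=true, checkflux=true)
```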
ITensorMPOConstruction.sparsity — Function
sparsity(mpo::MPO) -> Float64

Return the fraction of tensor entries in mpo that are structural zeros.
For dense tensors this is based on the full tensor size; for block-sparse tensors ITensors.nnz counts stored entries, so entries omitted by block sparsity are treated as structural zeros.
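For example, assuming `H` is an MPO produced by `MPO_new`:

```julia
s = sparsity(H)  # fraction of structural zeros, between 0.0 and 1.0
```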
ITensorMPOConstruction.block2_nnz — Function
block2_nnz(mpo::MPO) -> Tuple{Int,Int}

Count the link-space blocks of mpo that are structural nonzeros. When the MPO tensors are viewed as a matrix of operators, this returns the total number of operator-valued entries and the number of structural nonzeros, as (total_link_blocks, nonzero_link_blocks). This can be used to directly compare sparsities with the block2 storage format.
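For example, assuming `H` is an MPO produced by `MPO_new`:

```julia
total_blocks, nonzero_blocks = block2_nnz(H)
block_fill = nonzero_blocks / total_blocks  # fraction of nonzero operator-valued entries
```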
A note on truncation
MPO_new is designed to construct numerically exact MPOs; the tolerance parameter should only be used to adjust the definition of "numerically exact", not to perform truncation. If truncation is desired, truncate the resulting MPO with ITensorMPS.truncate!. See Haldane-Shastry and truncation for an example.
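A minimal sketch of this two-step workflow (the cutoff value is illustrative):

```julia
using ITensorMPS: truncate!

H = MPO_new(os, sites)       # numerically exact MPO
truncate!(H; cutoff=1e-12)   # explicit truncation, kept separate from construction
```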