TensorNetwork: numpy.linalg.LinAlgError: SVD did not converge
Thanks for the repo.
I want to use the TensorNetwork package to simulate the time evolution of the transverse-field Ising model with the iTEBD algorithm, but the SVD throws an error like the one below. Can anyone give answers or any hints?
- Does the split_node_full_svd used in apply_two_site_gate() have a parameter like max_iter or something similar?
- Are there demo code snippets for using tn.InfiniteMPS or tn.FiniteMPS with the TEBD algorithm?
Thanks!
$ python evolution.py
1727.9739590611944
6.306919862739928e-15
0.014427307657060863
7.142939906200455e-15
0.0015813901699260678
7.156663235322543e-15
../anaconda3/envs/tf2/lib/python3.7/site-packages/tensornetwork/backends/numpy/numpy_backend.py:90: RuntimeWarning: invalid value encountered in sqrt
return np.sqrt(tensor)
Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
Intel MKL ERROR: Parameter 4 was incorrect on entry to DLASCL.
Traceback (most recent call last):
File "evolution.py", line 83, in <module>
itebd_ising(N=10)
File "evolution.py", line 76, in itebd_ising
imps_state.canonicalize()
File "../anaconda3/envs/tf2/lib/python3.7/site-packages/tensornetwork/matrixproductstates/infinite_mps.py", line 276, in canonicalize
relative=True)
File "..anaconda3/envs/tf2/lib/python3.7/site-packages/tensornetwork/backends/numpy/numpy_backend.py", line 627, in svd
relative=relative)
File "../anaconda3/envs/tf2/lib/python3.7/site-packages/tensornetwork/backends/numpy/decompositions.py", line 36, in svd
u, s, vh = np.linalg.svd(tensor, full_matrices=False)
File "<__array_function__ internals>", line 6, in svd
File "../anaconda3/envs/tf2/lib/python3.7/site-packages/numpy/linalg/linalg.py", line 1636, in svd
u, s, vh = gufunc(a, signature=signature, extobj=extobj)
File "../anaconda3/envs/tf2/lib/python3.7/site-packages/numpy/linalg/linalg.py", line 106, in _raise_linalgerror_svd_nonconvergence
raise LinAlgError("SVD did not converge")
numpy.linalg.LinAlgError: SVD did not converge
The code script is shown below:
import tensornetwork as tn
import numpy as np
import tensorflow as tf

# tn.set_default_backend('numpy')


def itebd_ising(N=1000):
    J = 1.0
    g = 0.5
    chi = 50
    d = 2
    delta = 0.05
    # N = 3000

    # two-site Hamiltonian of the transverse-field Ising model
    H = np.array([[J, -g / 2, -g / 2, 0],
                  [-g / 2, -J, 0, -g / 2],
                  [-g / 2, 0, -J, -g / 2],
                  [0, -g / 2, -g / 2, J]])

    # imaginary-time evolution gate U = exp(-delta * H), reshaped into a two-site gate
    w, v = np.linalg.eig(H)
    U = np.reshape(np.dot(np.dot(v, np.diag(np.exp(-delta * w))), np.transpose(v)),
                   (2, 2, 2, 2))

    imps_state = tn.InfiniteMPS.random(d=[2, 2], D=[chi, chi, chi], dtype=np.complex128)
    print(imps_state.check_canonical())
    imps_state.canonicalize()
    print(imps_state.check_canonical())

    truncation = []
    for step in range(N):
        # alternate the gate over the two bonds of the unit cell
        if step % 2 == 0:
            truncation_err = imps_state.apply_two_site_gate(
                U, site1=0, site2=1, max_singular_values=chi)
        else:
            truncation_err = imps_state.apply_two_site_gate(
                np.transpose(U, (1, 0, 3, 2)), site1=0, site2=1, max_singular_values=chi)
        truncation.append(np.linalg.norm(truncation_err))
        imps_state.canonicalize()
        print(np.linalg.norm(truncation_err))
        print(imps_state.check_canonical())
        # pdb.set_trace()

    print("-" * 20)
    print(sum(truncation))


if __name__ == "__main__":
    itebd_ising(N=10)
About this issue
- State: closed
- Created 3 years ago
- Comments: 23 (1 by maintainers)
This is a precision problem. canonicalize takes the square root of the eigenvalues of the left and right reduced density matrices. Some eigenvalues of the rdm's became small (on the order of machine precision) and, due to finite-precision arithmetic, actually negative. np.sqrt of a negative float gives nan, producing the issue.
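For illustration, here is a minimal plain-numpy sketch of that mechanism (not TensorNetwork's actual code); the eigenvalue list and the clipping workaround at the end are assumptions for the example, not what the library does:

import numpy as np

# In exact arithmetic the eigenvalues of a reduced density matrix are
# non-negative, but for a nearly rank-deficient rdm np.linalg.eigh can
# return values like these in finite precision (illustrative numbers).
evals = np.array([9.99e-01, 1.0e-03, 2.2e-16, -1.1e-16])

print(np.sqrt(evals))   # RuntimeWarning: invalid value encountered in sqrt
                        # the -1.1e-16 entry becomes nan

# The nan then propagates into the tensor handed to np.linalg.svd,
# which is what produces the DLASCL errors and the non-convergence above.

# One possible workaround (an assumption, not the library's fix):
# clip tiny negative eigenvalues to zero before taking the square root.
safe = np.sqrt(np.clip(evals, 0.0, None))
print(safe)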
Thanks for the message! Could you isolate parameter values for which the code breaks, so that I don't have to wait for 30 iterations to finish before the error appears? Thanks!
It's not released yet. You can clone #913 to your local machine and install it there.
Hi @sr33dhar, I had a very quick look at your code. The truncation there will in many cases lead to non-optimal truncations, something that you definitely want to avoid. I submitted a PR that adds truncation to the MPS class via the position function; you can use that for truncating your MPS. Apart from the above, I noticed that the code you sent uses inconsistent dtypes, i.e. you are setting some dtypes to complex64, but others are float64 or complex128. To avoid any surprises there, just set the dtypes of all arrays to be the same, as in the sketch below.
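As a minimal sketch of the dtype advice, assuming the same gate construction and InfiniteMPS.random call as in the script above (the use of eigh here is just a convenient choice for a Hermitian H, not a required API):

import numpy as np
import tensornetwork as tn

dtype = np.complex128          # pick one dtype and use it for every array
J, g, delta, chi = 1.0, 0.5, 0.05, 50

# two-site Hamiltonian, created directly with the chosen dtype
H = np.array([[J, -g / 2, -g / 2, 0],
              [-g / 2, -J, 0, -g / 2],
              [-g / 2, 0, -J, -g / 2],
              [0, -g / 2, -g / 2, J]], dtype=dtype)

# H is Hermitian, so eigh is a natural choice for the gate exp(-delta * H)
w, v = np.linalg.eigh(H)
U = np.reshape(v @ np.diag(np.exp(-delta * w)) @ v.conj().T, (2, 2, 2, 2)).astype(dtype)

# the MPS is created with the same dtype as the gate
imps_state = tn.InfiniteMPS.random(d=[2, 2], D=[chi, chi, chi], dtype=dtype)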
Hi @sr33dhar, I looked at the code, and there is indeed something going on in numpy. The issue, I believe, arises because the singular values of the matrices you are creating are highly degenerate. I think that enforcing a canonical form on the MPS would fix your problem, but I haven't tried it. Instead, I submitted a PR (similar to your PR, which is still open (sorry!)) that defaults to using QR instead of SVD if no truncation is performed; the sketch below illustrates the idea. This fixes the issue. Hopefully we'll have it merged soon.
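Sketched in plain numpy (an illustration of the idea, not the library's actual code path): when nothing is truncated, a QR decomposition yields the same kind of isometry as an untruncated SVD, without relying on the SVD converging for highly degenerate singular values.

import numpy as np

chi, d = 50, 2
# a reshaped MPS tensor, as a (chi*d, chi) matrix with random entries
A = np.random.randn(chi * d, chi) + 1j * np.random.randn(chi * d, chi)

# SVD route: keep all singular values, absorb s @ vh into the neighboring tensor
u, s, vh = np.linalg.svd(A, full_matrices=False)
rest_svd = np.diag(s) @ vh

# QR route: q plays the role of u, and r is absorbed into the neighboring tensor
q, r = np.linalg.qr(A)

print(np.allclose(q.conj().T @ q, np.eye(q.shape[1])))   # q is an isometry
print(np.allclose(q @ r, u @ rest_svd))                   # both factorizations rebuild A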
@sr33dhar I can’t really tell though if this will solve your problem. Best you try it and let us know
I submitted a fix, hopefully we’ll pull it in today!
Thanks for the issue, @Papageno2. I submitted a fix for the error. I'm not sure whether the iTEBD actually converges to the right state (some quick tests indicate it always converges to a product state), so there could still be a bug somewhere. Let me know if there are any more issues.