wandb: ERROR Error while calling W&B API: Error 1062: Duplicate entry '224579-yzgdojv7' for key 'PRIMARY' ()

Description: I run multiple Python experiments in parallel with different hyper-parameters and get the error:

wandb: ERROR Error while calling W&B API: Error 1062: Duplicate entry '224579-m0paw8n2' for key 'PRIMARY' (<Response [409]>)

In every experiment, I use

wandb.init(
    project='the same project name',
    name='different name for this run',
    config=config_dict  # placeholder for this run's hyper-parameters
)
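One way to reduce the chance of colliding run ids, suggested further down the thread, is to pass an explicit, freshly generated id to wandb.init. A minimal sketch (the config dict is an illustrative placeholder; note that a later comment reports this alone did not help when stale wandb processes were the real cause):

import wandb

# generate a unique run id so parallel launches cannot reuse the same one
run_id = wandb.util.generate_id()

wandb.init(
    project='the same project name',
    name='different name for this run',
    id=run_id,
    config={'lr': 1e-3, 'hidden_dim': 512}  # placeholder hyper-parameters
)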

I notice that there are similar issues in https://github.com/ultralytics/yolov3/issues/1650 and https://github.com/ultralytics/yolov5/issues/1878.

Thanks!

About this issue

  • State: closed
  • Created 3 years ago
  • Comments: 20 (5 by maintainers)

Most upvoted comments

Hi, I get this error after deleting a run in the dashboard that had been synced before and then trying to sync it again. Any suggestions?

@karthi0804 you’ll need to sync the run to a new id. When you run wandb sync you can specify a new run_id. To get a new unique id you can run wandb.util.generate_id()
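For example, a small sketch of that workflow (the run directory path is a placeholder, not a real path):

import wandb

# mint a fresh run id to pass to `wandb sync --id`
new_id = wandb.util.generate_id()
print(new_id)

# then, from the shell:
#   wandb sync --id <new_id> <path-to-local-run-directory>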

I am also getting this issue, starting just today (it was working fine before). I am using Hugging Face transformers, and my wandb call is wandb.init(project=proj_name, name=exp_name). I tried passing in id=wandb.util.generate_id(), but it still gives Error 1062.

EDIT: @waltsims' comment below seems to have been the issue. I closed my screen sessions, ran pkill -9 wandb, and things are now working.

@karthi0804 you’ll need to sync the run to a new id. When you run wandb sync you can specify a new run_id. To get a new unique id you can run wandb.util.generate_id()

Thanks, it worked. I was able to resolve this by specifying wandb sync --id xxxxxx.

Deleting the local wandb folder that is created in the directory where the experiment is run fixed it for me:

rm -rf wandb/

It seems I had some zombie processes from wandb still running on my machine. I restarted my machine and things seem to be working once again.

Hey @karthi0804, glad that the issue is resolved. Would you like to close the ticket?

But I didn't open the ticket, so I am not sure if the original problem is resolved. @zliangak should close it if it's resolved for him.

Hey @ZhicongLiang, if I am understanding you correctly, you have written a bash script that runs main.py every 3 seconds with different hyper-parameters. I think it would be better for you to use our sweep feature, a hyper-parameter tuning tool that we provide out of the box. You specify a YAML file that holds all the configuration details about your hyper-parameters, and that is it. For more details you can check out our documentation on wandb sweep.

In terms of the ticket that has been raised, could you share the bash script that does the job? Maybe I can try to reproduce the issue and get a better understanding of what happens. 😄

Thanks for your suggestion! I will try that. My bash script is:

# launch one background run per hyper-parameter combination,
# spreading runs across 10 GPUs and staggering launches by 3 seconds
n=0
for type1 in 1 2 3 4 5
do
  for type2 in 1 2 3 4 5 6 7
  do
    for lr in 1e-4 1e-3
    do
      for t in 1800 1850 1900 1950 2000 2050
      do
        for hidden_dim in 512
        do
          n=$((n + 1))
          r=$((n % 10))  # round-robin GPU index
          python main.py --epochs 100 --lr_decay_step 20 --lr $lr --batch_size 128 --gamma 0.1 --momentum 0.9 --hidden_dim $hidden_dim --t $t --type1 $type1 --type2 $type2 --gpu $r &
          sleep 3s
        done
      done
    done
  done
done
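For reference, here is a minimal sketch of how the grid in the loop above could be expressed with the sweep feature the maintainer mentioned, assuming a train() function that wraps the training logic in main.py and reads its hyper-parameters from wandb.config (the function body here is illustrative):

import wandb

def train():
    # each agent invocation starts its own run with a fresh id
    with wandb.init() as run:
        cfg = run.config
        # ... call the training code with cfg.lr, cfg.t, cfg.type1, cfg.type2, cfg.hidden_dim

sweep_config = {
    'method': 'grid',
    'parameters': {
        'type1': {'values': [1, 2, 3, 4, 5]},
        'type2': {'values': [1, 2, 3, 4, 5, 6, 7]},
        'lr': {'values': [1e-4, 1e-3]},
        't': {'values': [1800, 1850, 1900, 1950, 2000, 2050]},
        'hidden_dim': {'values': [512]},
    },
}

sweep_id = wandb.sweep(sweep_config, project='the same project name')
wandb.agent(sweep_id, function=train)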