LearningByCheating: Cannot reproduce the Privileged Agent reported results

I’m trying to reproduce the privileged agent by training and benchmarking it on my own dataset. I generated the dataset by running CARLA 0.9.6 and data_collector.py with the default parameters. However, despite what you mention in the paper, I had to generate 200 episodes for the training set to get 179,103 frames! So my first question is: how did you generate 174k frames with only 100 episodes? For the validation set, I generated 20 episodes in the same town as training (Town01), for 18,188 frames.
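
For reference, here is how I count frames per episode to sanity-check the dataset size. This is just a sketch: it assumes each episode is saved in its own subdirectory with an `rgb/` folder of PNG frames, which may not match data_collector.py's actual output layout, so adjust the glob pattern accordingly.

```python
# Rough frame-counting sketch. Assumed layout (adjust to what data_collector.py writes):
#   dataset_root/episode_000/rgb/*.png, dataset_root/episode_001/rgb/*.png, ...
from pathlib import Path

def count_frames(dataset_root, pattern="*/rgb/*.png"):
    root = Path(dataset_root)
    episodes = [p for p in root.iterdir() if p.is_dir()]
    total = sum(1 for _ in root.glob(pattern))
    per_episode = total / max(len(episodes), 1)
    print(f"{len(episodes)} episodes, {total} frames (~{per_episode:.0f} frames/episode)")
    return total

if __name__ == "__main__":
    count_frames("data/train")
```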

Here is the train/val loss that I got so far: [image: train/validation loss curves]

As you can see, I can’t get the validation loss down to, or even close to, the 3e-5 you mention on the README page. I’ve benchmarked the agent with both the 128th and 256th checkpoints in Town02, but the results are far worse than what you’ve reported.
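
For what it’s worth, below is a minimal sketch of how one could pick the checkpoint to benchmark by lowest validation loss instead of a fixed epoch. The `model-{epoch}.th` naming and the loss numbers are illustrative placeholders, not values from this run or from the repo.

```python
def best_checkpoint(val_losses, ckpt_template="model-{epoch}.th"):
    """Return the epoch (and checkpoint name) with the lowest validation loss."""
    epoch = min(val_losses, key=val_losses.get)
    return epoch, ckpt_template.format(epoch=epoch)

# Example numbers only, to show the idea:
val_losses = {64: 1.2e-4, 128: 6.5e-5, 192: 4.3e-5, 256: 3.9e-5}
epoch, ckpt = best_checkpoint(val_losses)
print(f"Lowest validation loss {val_losses[epoch]:.1e} at epoch {epoch} -> benchmark {ckpt}")
```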

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 17 (10 by maintainers)

Most upvoted comments

Also, if you don’t mind, you can open a new issue to discuss this, or take the conversation to email.

Is it fair to say that the focal vehicle runs a different autopilot policy compared to the other vehicles in the scene?

That’s exactly what happens. In CARLA 0.9.6, the other vehicles’ controllers are implemented on the server/C++ side.

I am not sure why it would run red lights though.

How often does this happen? Last time I checked, this does not happen, except that there is one traffic light that is constantly ignored because it is mislabeled on the CARLA map… Let me know if that is not the case, though.

When you suggest tuning PID values, which agent are you referring to?

To get the most performance out of the birdview agent, one should tune the birdview agent’s PID; likewise, for the image agent, tune the image agent’s PID.
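
For context, “the PID” here is the proportional/integral/derivative controller each agent uses to turn its target speed and heading into throttle and steering commands, so tuning it means adjusting the gains. Below is a generic sketch of such a controller; the gain values are placeholders, not the values used by either agent.

```python
# Generic PID controller sketch; the K_P / K_I / K_D gains are placeholders,
# not the birdview or image agent's actual values.
class PIDController:
    def __init__(self, K_P=1.0, K_I=0.0, K_D=0.0, dt=0.05):
        self.K_P, self.K_I, self.K_D, self.dt = K_P, K_I, K_D, dt
        self._integral = 0.0
        self._prev_error = 0.0

    def step(self, error):
        self._integral += error * self.dt
        derivative = (error - self._prev_error) / self.dt
        self._prev_error = error
        return self.K_P * error + self.K_I * self._integral + self.K_D * derivative

# Example: track a target speed of 20 km/h from a current speed of 12 km/h.
speed_pid = PIDController(K_P=0.5, K_I=0.2, K_D=0.05)
throttle = max(0.0, min(1.0, speed_pid.step(20.0 - 12.0)))
```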

Also, in terms of autopilot, their PID values are set very differently

Ah yes, thanks for the catch! This is related to the PID bugfix mentioned earlier in this thread. Please refer to the NoisyAgent for the correct PID values.

Lastly, would you say your implementation of the autopilot is a better version compared to the one used in the original NoCrash paper?

I’d say this is pretty much the same as the default autopilot in the original CARLA repo, as it can navigate through the towns with no problem. The NoCrash paper uses some complicated noise injection during data collection, so I don’t think the two are directly comparable.
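
For readers unfamiliar with that scheme, the noise injection referred to is the temporally correlated steering perturbation from the CIL/NoCrash line of work: the vehicle executes a perturbed steering command for a short interval while the recorded label stays the expert’s corrective steering. The sketch below is only an illustration of that idea, not this repo’s (or the NoCrash authors’) implementation.

```python
import random

# Sketch of triangularly shaped steering noise injected during data collection,
# in the spirit of the CIL/NoCrash papers; NOT this repo's data_collector.py.
def triangular_noise(duration=20, peak=0.3, sign=None):
    """Perturbation that ramps from 0 up to `peak` and back to 0 over `duration` steps."""
    sign = sign if sign is not None else random.choice([-1.0, 1.0])
    half = duration // 2
    ramp = [sign * peak * i / half for i in range(half + 1)]
    return ramp + ramp[-2::-1]

def collect_step(expert_steer, noise):
    # The perturbed command is what the vehicle executes; the stored label
    # remains the expert's (corrective) steering signal.
    applied = max(-1.0, min(1.0, expert_steer + noise))
    label = expert_steer
    return applied, label

for noise in triangular_noise():
    applied, label = collect_step(expert_steer=0.0, noise=noise)
```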

@peiyunh This is mainly because the training has some stochasticity (we did not set random seeds), and we find that the performance is a bit sensitive to the PID values. But I’d recommend trying the default ones first and seeing how they work.
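
If you want to reduce that run-to-run variance yourself, here is a minimal seeding sketch for a PyTorch training script (an assumption about your setup, and note that exact reproducibility also depends on cuDNN determinism and data-loader workers):

```python
import random
import numpy as np
import torch

def seed_everything(seed=2020):
    # Pin the usual sources of randomness in a PyTorch training run.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)
    torch.cuda.manual_seed_all(seed)
    torch.backends.cudnn.deterministic = True
    torch.backends.cudnn.benchmark = False
```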

Great, I’m glad it worked out. If you have any questions, feel free to shoot me an email at dchen@cs.utexas.edu

Thanks for providing the details. I double-checked the code and found that the PID values in data_collector.py were incorrect. It was a mistake I made while refactoring the codebase for release. Please check the updated data_collector.py for the correct PID values (two line changes). I think that explains the dataset problem in your visualizations and loss.
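
If you want to double-check that the fix made it into your checkout, here is a small convenience sketch (not part of the repo) that prints every PID-gain line from whichever files you pass it, so data_collector.py and the NoisyAgent source can be compared side by side. It assumes the gains are named K_P / K_I / K_D, which is an assumption about the code.

```python
import re
import sys
from pathlib import Path

# Usage (paths are up to you): python check_pid.py data_collector.py <noisy_agent_file>
for path in sys.argv[1:]:
    for lineno, line in enumerate(Path(path).read_text().splitlines(), 1):
        if re.search(r"K_[PID]\b", line):
            print(f"{path}:{lineno}: {line.strip()}")
```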

I apologize for the inconvenience. The rest of the code should still be intact. Please let us know if the problem persists after the change.