armi: Unit Tests failing on GitHub Actions
As you can see, this build of our Windows unit tests failed, with the single message:
File "D:\a\armi\armi\armi\reactor\tests\test_assemblies.py", line 597, in test_duplicate
self._setup_blueprints()
File "D:\a\armi\armi\armi\reactor\tests\test_assemblies.py", line [58](https://github.com/terrapower/armi/actions/runs/3463436138/jobs/5783668579#step:6:59)9, in _setup_blueprints
y = textProcessors.resolveMarkupInclusions(
File "D:\a\armi\armi\armi\utils\textProcessors.py", line 187, in resolveMarkupInclusions
return _resolveMarkupInclusions(src, root)[0]
File "D:\a\armi\armi\armi\utils\textProcessors.py", line 247, in _resolveMarkupInclusions
_processIncludes(src, out, includes, root)
File "D:\a\armi\armi\armi\utils\textProcessors.py", line 106, in _processIncludes
raise ValueError(
ValueError: The !included file, `refSmallSfpGrid.yaml` does not exist from D:\a\armi\armi\armi\tests\detailedAxialExpansion!
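For context on what that error means, here is a minimal sketch of how an !include directive gets resolved against a root directory (a simplified illustration only, not ARMI's actual textProcessors implementation): the include target is looked up relative to whatever root is in effect, and a missing target raises the ValueError shown above.

```python
from pathlib import Path


def process_include(include_name: str, root: Path) -> Path:
    """Resolve an !include target relative to ``root`` (simplified sketch)."""
    target = root / include_name
    if not target.exists():
        # Mirrors the ValueError in the traceback: the lookup is relative to the
        # current root directory, so if the root is wrong when the blueprint is
        # loaded, a file that really exists elsewhere appears to be missing.
        raise ValueError(
            f"The !included file, `{include_name}` does not exist from {root}!"
        )
    return target
```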
I fixed it by running the test again.
It feels like this started happening in the last month. Though pretty rarely.
And (if my memory serves), it’s always with this file:
`refSmallSfpGrid.yaml` does not exist from D:\a\armi\armi\armi\tests\detailedAxialExpansion!
This has never happened on Linux, and I highly suspect it is a problem with the testing infrastructure and not the code.
I should be able to do this after the holiday. A benefit is that I can make it as light as possible and therefore much faster.
Though I think ultimately, the best/ideal solution (as John points out) is to remove `setMasterCs()` from `loadTestReactor()`. Between the global Cs and the global nuclideBases, we've got problems. But maybe that's new ticket fodder.
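To make the "remove the global" idea concrete, here is a minimal, self-contained sketch of the pattern (illustrative names only, not ARMI's real loadTestReactor or settings API): instead of a loader that installs settings into a module-level global as a side effect, the loader returns the settings object so each test owns its own copy.

```python
# Illustrative stand-in for a process-wide "master" settings object.
_MASTER_SETTINGS = None


def load_with_global(input_path: str) -> dict:
    """Old style: mutates process-wide state as a side effect (order-sensitive)."""
    global _MASTER_SETTINGS
    _MASTER_SETTINGS = {"inputPath": input_path}
    return _MASTER_SETTINGS


def load_without_global(input_path: str) -> dict:
    """Proposed style: the caller receives and owns the settings object."""
    return {"inputPath": input_path}


if __name__ == "__main__":
    a = load_without_global("caseA.yaml")
    b = load_without_global("caseB.yaml")
    # Each caller keeps its own settings, regardless of who loaded last.
    assert a["inputPath"] == "caseA.yaml"
    assert b["inputPath"] == "caseB.yaml"
```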
I think there needs to be more time in the day 😄
If it is becoming blocking for PRs and testing, I can move it up the priority list. Otherwise, it would be great if this could wait until ~mid next week?
The ORDER is really the key. That’s the fun part of a race condition: if the parallel tests are run in some orders, the tests pass, but if they are run in other orders, they fail. It all depends on whether the axial expansion blueprints are loaded into memory at the wrong time.
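To show how order alone can flip the outcome, here is a minimal, self-contained illustration (hypothetical test names and a stand-in global, not ARMI code): two tests share one module-level setting, so one execution order passes and the other fails.

```python
import unittest

# Stand-in for a process-wide settings global (illustrative only).
MASTER_CS = {"detailedAxialExpansion": False}


class TestAxialExpansion(unittest.TestCase):
    def test_expansion(self):
        # Flips the shared global as a side effect of its setup.
        MASTER_CS["detailedAxialExpansion"] = True
        self.assertTrue(MASTER_CS["detailedAxialExpansion"])


class TestAssemblies(unittest.TestCase):
    def test_duplicate(self):
        # Passes if it runs before TestAxialExpansion, fails if it runs after,
        # because it silently assumes the global still has its default value.
        self.assertFalse(MASTER_CS["detailedAxialExpansion"])


if __name__ == "__main__":
    unittest.main()
```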
@albeanth @opotowsky While I was on a run this morning, I figured it out.
We never solved this bug, and it popped up in yet another new PR. But how? Well, if we look at the stack trace:
The last line there shows that the detailed axial expansion bug we thought we fixed has popped up again. But if you look at the first line, you can see this happened in test_assemblies.py! This is your classic race condition. Somehow the axial expansion tests are affecting OTHER tests, but only randomly. How, you ask?
The setMasterCs() call inside loadTestReactor() sets the case settings object for ALL other tests that happen to be running at that moment. D’oh!
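Until the global is removed entirely, one defensive pattern is to snapshot and restore the shared settings around each test. This is only a sketch under the assumption that the settings behave like a module-level dict (ARMI's real global is managed through setMasterCs(), so the actual fix would look different):

```python
import unittest
from unittest import mock

# Stand-in for the shared settings global (illustrative only).
MASTER_CS = {"detailedAxialExpansion": False}


class TestWithIsolatedSettings(unittest.TestCase):
    def test_expansion_does_not_leak(self):
        # mock.patch.dict restores the original contents when the block exits,
        # so changes made here cannot leak into other tests in this process.
        with mock.patch.dict(MASTER_CS, {"detailedAxialExpansion": True}):
            self.assertTrue(MASTER_CS["detailedAxialExpansion"])
        self.assertFalse(MASTER_CS["detailedAxialExpansion"])


if __name__ == "__main__":
    unittest.main()
```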