cFE: Stub for CFE_SB_SendMsg does not always save MsgPtr argument value
Describe the bug If a negative return value is forced for the CFE_SB_SendMsg stub, the MsgPtr argument is not saved to the context, so the passed-in message cannot be verified when a negative result is forced.
To Reproduce Steps to reproduce the behavior:
- Write a test that gives CFE_SB_SendMsg a context and forces CFE_SB_SendMsg to return a negative value
- Check that the context contains the expected pointer - FAIL
- Change the return value to >=0
- Check that the context contains the expected pointer - PASS
Expected behavior Regardless of the forced return value, MsgPtr should be copied to the context. The check for status >= 0 appears to be leftover from the previous use of:
    if (status >= 0)
    {
        UT_Stub_CopyFromLocal(UT_KEY(CFE_SB_SendMsg), MsgPtr->Byte,
                              CFE_SB_StubMsg_GetMetaData(MsgPtr)->TotalLength);
    }
System observed on: RHEL 7.6
Reporter Info Alan Gibson NASA GSFC/587
About this issue
- State: closed
- Created 4 years ago
- Comments: 16 (16 by maintainers)
The part that concerns me is when tests are (seemingly) just confirming one specific implementation - i.e. basically writing source code that confirms that the implementation was written in one particular way.
There is certainly some value in making sure all the if/else branches have been executed to confirm that they don’t contain fundamentally broken code (uninitialized pointers, divide by zero, that type of stuff). And to achieve this certainly requires some knowledge as to what the implementation is testing in its conditionals so it can set the stage for each conditional path.
But if this gets taken to the next level and the test starts to assert on every small implementation detail, it can become a maintenance issue - where the test breaks with every nuisance change even when that change is internal only and does not affect functional requirements.
While I don’t have any actual evidence - just anecdotes from personal experience - cFE and OSAL are already at the point where getting coverage tests working can easily take 3-4x as much time as implementing the “real” (fsw) part of a change, because the tests are so sensitive to one specific implementation. A small implementation change can become a giant coverage test headache pretty easily.
There must be a balance between catching every small implementation change and catching things that are likely to be real errors.