bmcweb: redfish validation failing with "Rework Authorization flow" commit
All bmcweb bumps starting with https://gerrit.openbmc-project.xyz/c/openbmc/bmcweb/+/30994 are failing HW CI on witherspoon. Redfish validation fails, and we are unable to code update the system once this code is on it.
ERROR - SchemaURI couldn't call reference link ServiceRoot inside /redfish/v1/$metadata
ERROR - ResourceObject creation: No schema XML for #ServiceRoot.v1_5_0.ServiceRoot /redfish/v1/$metadata#ServiceRoot.ServiceRoot
WARNING - /redfish/v1/ @odata.id: Expected @odata.id to match URI link /redfish/v1
WARNING - SchemaURI /redfish/v1/schema/ServiceRoot_v1.xml was unable to be called, defaulting to local storage in ./SchemaFiles/metadata
WARNING - SchemaURI /redfish/v1/$metadata#ServiceRoot.ServiceRoot was unable to be called, defaulting to local storage in ./SchemaFiles/metadata
WARNING - Unable to find a harddrive stored $metadata at ./SchemaFiles/metadata, defaulting to ServiceRoot_v1.xml
ERROR - The following schema URIs referenced from $metadata could not be retrieved:
/redfish/v1/schema/SerialInterface_v1.xml
/redfish/v1/schema/OemComputerSystem_v1.xml
/redfish/v1/schema/OutletCollection_v1.xml
/redfish/v1/schema/OutletGroup_v1.xml
/redfish/v1/schema/Power_v1.xml
/redfish/v1/schema/PhysicalContext_v1.xml
/redfish/v1/schema/SerialInterfaceCollection_v1.xml
/redfish/v1/schema/Settings_v1.xml
/redfish/v1/schema/OemSession_v1.xml
/redfish/v1/schema/VCATEntryCollection_v1.xml
/redfish/v1/schema/PCIeFunctionCollection_v1.xml
/redfish/v1/schema/MessageRegistryFileCollection_v1.xml
...
About this issue
- State: closed
- Created 4 years ago
- Comments: 22 (22 by maintainers)
The core of this issue is that we need a chunked upload mechanism. The full BMC image (32 or 64 MB) is already too big to hold in memory in one shot, which is how bmcweb was originally designed because it was easier, and images at the time were smaller. The long-term intent was to provide a streaming interface (similar to the websocket handler) so we could stream the file directly to disk without consuming lots of memory in the meantime, and rate limit based on the BMC's capabilities by streaming the file to the filesystem.

I suspect that as BMCs move to eMMC and file uploads get bigger, that infrastructure will need to be built. We can't just keep increasing the timeouts or implementing goofy non-standard rate-tracking mechanisms, because we will always have problems with someone's use case (what if I'm on a 56k modem uploading a 5 GB file, for example?). Many CVEs have been published against web servers that try to implement bitrate-based handling; we need to not go down that path.

The MIME parser should be the hard part of the above, and it is already in review/in progress.
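To make the streaming idea concrete, here is a minimal sketch of copying an upload body to disk in fixed-size chunks, so peak memory stays at one chunk rather than the full image. This is not bmcweb code (bmcweb is C++ on Boost.Beast); `stream_to_disk` and `CHUNK_SIZE` are illustrative names, not real APIs.

```python
import io
import os
import tempfile

# Illustrative chunk size: peak memory use stays around one chunk,
# regardless of how large the uploaded image is.
CHUNK_SIZE = 64 * 1024


def stream_to_disk(body, dest_path, chunk_size=CHUNK_SIZE):
    """Copy a file-like request body to dest_path chunk by chunk.

    Returns the total number of bytes written.
    """
    total = 0
    with open(dest_path, "wb") as dst:
        while True:
            chunk = body.read(chunk_size)
            if not chunk:  # EOF: the sender has finished the upload
                break
            dst.write(chunk)
            total += len(chunk)
    return total


if __name__ == "__main__":
    # Simulate a 1 MiB "image" arriving as a stream; it is never held
    # in memory all at once by stream_to_disk itself.
    image = io.BytesIO(os.urandom(1024 * 1024))
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        dest = tmp.name
    written = stream_to_disk(image, dest)
    assert written == 1024 * 1024
    os.unlink(dest)
```

A real implementation would hook this into the HTTP body parser so chunks are written as they arrive off the socket, which is also the natural place to apply backpressure instead of bitrate tracking.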
We didn’t test with the bmcweb commit on the system, but just doing a GET of Memory_v1.xml was very fast for us using master code: