esp-idf: Got ESP_ERR_NVS_NOT_ENOUGH_SPACE when write zero length data with nvs_set_blob (IDFGH-4110)

Environment

  • Module or chip used: ESP32-WROOM-32E
  • IDF version: v4.3-dev-1197-g8bc19ba893e5
  • Build System: idf.py
  • Compiler version: xtensa-esp32-elf-gcc (crosstool-NG esp-2020r3) 8.4.0
  • Operating System: Linux
  • Power Supply: USB

Problem Description

I repeatedly write blob data using randomly chosen 512B/1KB/2KB/3KB strings: nvs_set_blob(handle, key, str, strlen(str));

To clear the blob data, I use: nvs_set_blob(handle, key, "", 0);

This works most of the time, but after repeating the blob writes enough times it is possible to hit an ESP_ERR_NVS_NOT_ENOUGH_SPACE error on the call nvs_set_blob(handle, key, "", 0). What confuses me is that when this happens, calling nvs_set_blob(handle, key, str, strlen(str)); with a short string still works.
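
For reference, here is a minimal sketch of the write/clear pattern (the namespace and key names are placeholders, not the exact ones from my application):

    #include <string.h>
    #include "nvs.h"

    #define STORAGE_NAMESPACE "storage"   /* placeholder namespace */
    #define BLOB_KEY          "my_blob"   /* placeholder key */

    /* Write the (possibly zero-length) string as a blob and commit it. */
    static esp_err_t write_blob(const char *str)
    {
        nvs_handle_t handle;
        esp_err_t err = nvs_open(STORAGE_NAMESPACE, NVS_READWRITE, &handle);
        if (err != ESP_OK) {
            return err;
        }
        err = nvs_set_blob(handle, BLOB_KEY, str, strlen(str));
        if (err == ESP_OK) {
            err = nvs_commit(handle);
        }
        nvs_close(handle);
        return err;
    }

    /* Clearing the blob is done by writing a zero-length value. */
    static esp_err_t clear_blob(void)
    {
        return write_blob("");
    }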

Note that when this issue happens, in writeMultiPageBlob():

    do {
        Page& page = getCurrentPage();
        size_t tailroom = page.getVarDataTailroom();
        size_t chunkSize = 0;
        printf("chunkCount=%u tailroom=%u dataSize=%u\r\n", (unsigned) chunkCount, (unsigned) tailroom, (unsigned) dataSize);
        if (!chunkCount && tailroom < dataSize && tailroom < Page::CHUNK_MAX_SIZE/10) {

The printf above shows "chunkCount=0 tailroom=0 dataSize=0".

I’m wondering if there is something wrong when calling nvs_set_blob with dataSize = 0.
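
One possible workaround I am considering (just an idea on my side, not something confirmed by the NVS maintainers): erase the key instead of writing a zero-length blob, and treat a missing key as an empty blob when reading. A rough sketch:

    #include "nvs.h"

    /* Hypothetical helper: clear the blob by erasing the key instead of
     * writing a zero-length value. A later nvs_get_blob() returning
     * ESP_ERR_NVS_NOT_FOUND is then treated as "empty blob". */
    static esp_err_t clear_blob_by_erase(nvs_handle_t handle, const char *key)
    {
        esp_err_t err = nvs_erase_key(handle, key);
        if (err == ESP_ERR_NVS_NOT_FOUND) {
            err = ESP_OK;   /* key already absent, nothing to clear */
        }
        if (err == ESP_OK) {
            err = nvs_commit(handle);
        }
        return err;
    }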

Expected Behavior

IMHO, clearing blob data by writing dataSize=0 should not return an ESP_ERR_NVS_NOT_ENOUGH_SPACE error, especially when writing a short string with the same key still works.

Actual Behavior

Sometimes nvs_set_blob(handle, key, "", 0) returns ESP_ERR_NVS_NOT_ENOUGH_SPACE, but writing a short (non-zero-length) string with the same key still succeeds.

Steps to reproduce

Just repeatedly read/write NVS with variable-length data (to fragment the NVS partition).
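
Something along these lines (in my real test I also read the blob back; the sizes, iteration count, and helper name here are illustrative only):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include "esp_err.h"
    #include "nvs.h"

    /* Illustrative stress loop: alternate between writing blobs of varying
     * length and clearing them with a zero-length write. */
    static void stress_nvs_blob(nvs_handle_t handle, const char *key)
    {
        static const size_t sizes[] = { 512, 1024, 2048, 3072 };
        char *buf = malloc(3072 + 1);
        if (buf == NULL) {
            return;
        }

        for (int i = 0; i < 1000; i++) {
            size_t len = sizes[rand() % 4];
            memset(buf, 'A', len);
            buf[len] = '\0';

            esp_err_t err = nvs_set_blob(handle, key, buf, strlen(buf));
            printf("write %u bytes: %s\r\n", (unsigned) len, esp_err_to_name(err));

            /* Clear with a zero-length write; this is the call that
             * occasionally returns ESP_ERR_NVS_NOT_ENOUGH_SPACE. */
            err = nvs_set_blob(handle, key, "", 0);
            printf("clear: %s\r\n", esp_err_to_name(err));

            nvs_commit(handle);
        }
        free(buf);
    }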

About this issue

  • Original URL
  • State: closed
  • Created 4 years ago
  • Comments: 15 (6 by maintainers)

Most upvoted comments

Hi @0xjakob, my test looks good so far, but I think it needs more time to verify. I will close the issue if it still does not hit the problem by next week. Thanks a lot for your support.