generative-ai: `ValueError: Content has no parts.`

Hello, I successfully ran the intro_multimodal_rag example, but when I tried my own PDF I encountered the following error:


ValueError                                Traceback (most recent call last)
Cell In[37], line 11
      5 image_description_prompt = """Explain what is going on in the image.
      6 If it's a table, extract all elements of the table.
      7 If it's a graph, explain the findings in the graph.
      8 Do not include any numbers that are not mentioned in the image:"""
     10 # Extract text and image metadata from the PDF document
---> 11 text_metadata_df, image_metadata_df = get_document_metadata(
     12     PROJECT_ID,
     13     model,
     14     pdf_path,
     15     image_save_dir="images_telys",
     16     image_description_prompt=image_description_prompt,
     17     embedding_size=1408,
     18     text_emb_text_limit=1000,  # Set text embedding input text limit to 1000 char
     19 )
     21 print("--- Completed processing. ---")

File ~/utils/intro_multimodal_rag_utils.py:572, in get_document_metadata(project_id, generative_multimodal_model, pdf_path, image_save_dir, image_description_prompt, embedding_size, text_emb_text_limit)
    566 image_for_gemini, image_name = get_image_for_gemini(
    567     doc, image, image_no, image_save_dir, file_name, page_num
    568 )
    570 print(f"Extracting image from page: {page_num + 1}, saved as: {image_name}")
--> 572 response = get_gemini_response(
    573     generative_multimodal_model,
    574     model_input=[image_description_prompt, image_for_gemini],
    575     stream=True,
    576 )
    578 image_embedding_with_description = (
    579     get_image_embedding_from_multimodal_embedding_model(
    580         project_id=project_id,
   (...)
    584     )
    585 )
    587 image_embedding = get_image_embedding_from_multimodal_embedding_model(
    588     project_id=project_id,
    589     image_uri=image_name,
    590     embedding_size=embedding_size,
    591 )

File ~/utils/intro_multimodal_rag_utils.py:413, in get_gemini_response(generative_multimodal_model, model_input, stream)
    411     response_list = []
    412     for chunk in response:
--> 413         response_list.append(chunk.text)
    414     response = "".join(response_list)
    415 else:

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1315, in GenerationResponse.text(self)
   1313 if len(self.candidates) > 1:
   1314     raise ValueError("Multiple candidates are not supported")
-> 1315 return self.candidates[0].text

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1368, in Candidate.text(self)
   1366 @property
   1367 def text(self) -> str:
-> 1368     return self.content.text

File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1425, in Content.text(self)
   1423     raise ValueError("Multiple content parts are not supported.")
   1424 if not self.parts:
-> 1425     raise ValueError("Content has no parts.")
   1426 return self.parts[0].text

ValueError: Content has no parts.

Any suggestions?

About this issue

  • Original URL
  • State: open
  • Created 6 months ago
  • Comments: 30 (8 by maintainers)

Most upvoted comments

The Vertex AI console (https://console.cloud.google.com/vertex-ai/generative/language/create/text?authuser=0) also blocks responses. The blocks are not random and seem to depend on the keywords included in the output. For example, this prompt gets blocked:

convert the company below into the official English version.

Lihit Lab Inc

Response

Response blocked for unknown reason. Try rewriting the prompt.

Hi @takeruh, I am also able to reproduce your issue here.

This is currently a bug and, as stated in the previous comment, our teams are looking into the issue.

Hi @zafercavdar, I am able to reproduce your issue here.

Thank you for bringing this to our attention. Our internal teams are looking into it. Allow us some time to respond.

Getting the same error “ValueError: Content has no parts.” on version google-cloud-aiplatform 1.40.0 today.

Same here. I am not sure why it randomly generates this error.

@nmoell Thank you for raising the issue and being patient with it. I have again escalated the issue internally, and will report back as soon as I get an update. In the meantime, would it be possible for you to share any reproducible prompts where you have been observing the issue? Or if you can print the response object and share the “finish_reason” value so that we know what is actually causing the issue.

@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with:

response.candidates[0].finish_reason
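
To share the finish_reason values the maintainer asked for, they can be collected without touching `.text` (which is what raises the ValueError). A minimal sketch; `candidate_finish_reasons` is my own helper, duck-typed against the `response.candidates[i].finish_reason` shape the vertexai SDK exposes:

```python
def candidate_finish_reasons(response):
    """Collect finish_reason for every candidate without accessing .text,
    which raises ValueError when the candidate content has no parts."""
    return [getattr(cand, "finish_reason", None)
            for cand in getattr(response, "candidates", [])]
```

Logging these values alongside the prompt makes blocked responses easy to spot before any `.text` access.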

I am experiencing the same issue. Why is this issue closed?

I added safety_settings but the problem is still there.

from typing import Optional
from vertexai.generative_models import HarmBlockThreshold, HarmCategory

safety_settings: Optional[dict] = {
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
}

Even hasattr(response, "text") runs into the ValueError; printing the response shows:

candidates {
  content {
    role: "model"
  }
  finish_reason: OTHER
}
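
Until a fix lands, the ValueError can be caught at the call site. A defensive sketch (`safe_text` is my own wrapper, not part of the SDK), assuming only that `.text` raises when the content has no parts:

```python
def safe_text(response, fallback=""):
    """Return response.text, or `fallback` when the candidate content has
    no parts (e.g. a blocked response) and the .text property raises."""
    try:
        return response.text
    except (ValueError, AttributeError, IndexError):
        return fallback
```

This keeps a batch PDF-processing loop alive instead of crashing on the first blocked image description.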

@lavinigam-gcp, there could be something else at play here because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response with:

response.candidates[0].finish_reason

The issue here is that Gemini is blocking some of the text. It can be offset by setting the safety thresholds to None, as pointed out by @TTTTao725. Working on a PR to fix this issue.

The first execution terminated because of HARM_CATEGORY_DANGEROUS_CONTENT, which is why nothing was returned: the response got blocked!

Therefore, you can set your safety configuration to BLOCK_NONE:

from vertexai import generative_models

safety_config = {
    generative_models.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HARASSMENT: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_HATE_SPEECH: generative_models.HarmBlockThreshold.BLOCK_NONE,
    generative_models.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: generative_models.HarmBlockThreshold.BLOCK_NONE,
}

answer = gemini_pro_model.generate_content(
    "Now I am going to give you a molecule in SMILES format, as well as its "
    "caption (description), I want you to rewrite the caption into five "
    "different versions. SMILES: CN(C(=O)N)N=O, Caption: The molecule is a "
    "member of the class of N-nitrosoureas that is urea in which one of the "
    "nitrogens is substituted by methyl and nitroso groups. It has a role as "
    "a carcinogenic agent, a mutagen, a teratogenic agent and an alkylating "
    "agent. Format your output as a python list, for example, you should "
    "output something like [\"caption1\", \"caption2\", \"caption3\", "
    "\"caption4\", \"caption5\",] Do not use ```python``` in your answer.",
    safety_settings=safety_config,
)

You will not have this problem anymore 😃

https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/configure-safety-attributes
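
The crash in the traceback happens inside the streaming loop of `get_gemini_response` (line 413), where `chunk.text` is read for every chunk. A hedged sketch of a guarded join that skips part-less chunks; `join_stream_text` is my own helper, duck-typed against the `chunk.candidates[i].content.parts` shape the vertexai SDK exposes:

```python
def join_stream_text(chunks):
    """Concatenate text parts from a streaming response, skipping any chunk
    whose candidate content has no parts (e.g. blocked by safety filters)."""
    pieces = []
    for chunk in chunks:
        for cand in getattr(chunk, "candidates", []):
            content = getattr(cand, "content", None)
            for part in (getattr(content, "parts", None) or []):
                text = getattr(part, "text", None)
                if text:
                    pieces.append(text)
    return "".join(pieces)
```

Swapping this in for the `response_list.append(chunk.text)` loop in intro_multimodal_rag_utils.py would avoid the ValueError, at the cost of silently dropping blocked chunks, so pair it with a finish_reason log.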
