generative-ai: `ValueError: Content has no parts.`
Hello, I successfully ran the intro_multimodal_rag example, but when I tried my own PDF I encountered the following error:
ValueError Traceback (most recent call last)
Cell In[37], line 11
5 image_description_prompt = """Explain what is going on in the image.
6 If it's a table, extract all elements of the table.
7 If it's a graph, explain the findings in the graph.
8 Do not include any numbers that are not mentioned in the image:"""
10 # Extract text and image metadata from the PDF document
---> 11 text_metadata_df, image_metadata_df = get_document_metadata(
12 PROJECT_ID,
13 model,
14 pdf_path,
15 image_save_dir="images_telys",
16 image_description_prompt=image_description_prompt,
17 embedding_size=1408,
18 text_emb_text_limit=1000, # Set text embedding input text limit to 1000 char
19 )
21 print("--- Completed processing. ---")
File ~/utils/intro_multimodal_rag_utils.py:572, in get_document_metadata(project_id, generative_multimodal_model, pdf_path, image_save_dir, image_description_prompt, embedding_size, text_emb_text_limit)
566 image_for_gemini, image_name = get_image_for_gemini(
567 doc, image, image_no, image_save_dir, file_name, page_num
568 )
570 print(f"Extracting image from page: {page_num + 1}, saved as: {image_name}")
--> 572 response = get_gemini_response(
573 generative_multimodal_model,
574 model_input=[image_description_prompt, image_for_gemini],
575 stream=True,
576 )
578 image_embedding_with_description = (
579 get_image_embedding_from_multimodal_embedding_model(
580 project_id=project_id,
(...)
584 )
585 )
587 image_embedding = get_image_embedding_from_multimodal_embedding_model(
588 project_id=project_id,
589 image_uri=image_name,
590 embedding_size=embedding_size,
591 )
File ~/utils/intro_multimodal_rag_utils.py:413, in get_gemini_response(generative_multimodal_model, model_input, stream)
411 response_list = []
412 for chunk in response:
--> 413 response_list.append(chunk.text)
414 response = "".join(response_list)
415 else:
File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1315, in GenerationResponse.text(self)
1313 if len(self.candidates) > 1:
1314 raise ValueError("Multiple candidates are not supported")
-> 1315 return self.candidates[0].text
File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1368, in Candidate.text(self)
1366 @property
1367 def text(self) -> str:
-> 1368 return self.content.text
File ~/.local/lib/python3.10/site-packages/vertexai/generative_models/_generative_models.py:1425, in Content.text(self)
1423 raise ValueError("Multiple content parts are not supported.")
1424 if not self.parts:
-> 1425 raise ValueError("Content has no parts.")
1426 return self.parts[0].text
ValueError: Content has no parts.
Any suggestions?
About this issue
- Original URL
- State: open
- Created 6 months ago
- Comments: 30 (8 by maintainers)
Commits related to this issue
- feat: Creating "Using Gemini with BigQuery through Remote Functions" (#336) Co-authored-by: Holt Skinner <holtskinner@google.com> — committed to GoogleCloudPlatform/generative-ai by shanecglass 6 months ago
Hi @takeruh, I am also able to reproduce your issue here.
This is currently a bug and, as stated in the previous comment, our teams are looking into the issue.
Hi @zafercavdar, I am able to reproduce your issues here:
Thank you for bringing this to our attention. Our internal teams are looking into it; please allow us some time to respond.
Same here. I am not sure why it generates this error seemingly at random.
@nmoell Thank you for raising the issue and for your patience. I have again escalated the issue internally and will report back as soon as I get an update. In the meantime, could you share any reproducible prompts where you have observed the issue? Alternatively, print the response object and share the “finish_reason” value so that we know what is actually causing the error.
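For anyone who wants to capture that diagnostic, here is a minimal sketch of a defensive variant of the streaming loop from `get_gemini_response` in `intro_multimodal_rag_utils.py`. Instead of letting `chunk.text` raise `ValueError: Content has no parts.`, it catches the error and records the chunk's `finish_reason` so the blocking cause can be reported. The helper name `safe_collect_text` is hypothetical, not part of the repo utils:

```python
# Hypothetical defensive rewrite of the chunk-joining loop in
# get_gemini_response (intro_multimodal_rag_utils.py, line 413).
# chunk.text raises ValueError when the candidate has no content parts
# (e.g. the response was blocked by a safety filter); instead of
# crashing, we collect the finish_reason for diagnosis.

def safe_collect_text(stream):
    """Join chunk texts; on a blocked chunk, record why instead of raising."""
    pieces, problems = [], []
    for chunk in stream:
        try:
            pieces.append(chunk.text)  # raises ValueError on empty parts
        except ValueError:
            reason = getattr(chunk.candidates[0], "finish_reason", "UNKNOWN")
            problems.append(str(reason))
    return "".join(pieces), problems
```

Passing the stream returned by `generate_content(..., stream=True)` through this helper would let you print `problems` and see values like `FinishReason.SAFETY` or the `FinishReason.OTHER` mentioned later in this thread.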
I am experiencing the same issue. Why is this issue closed?
I added safety_settings but the problem is still there.
@lavinigam-gcp, there could be something else at play here, because even with safety_settings set to BLOCK_NONE, I get FinishReason.OTHER as a response.

The issue here is that Gemini is blocking some of the text. It can be offset by setting the safety thresholds to BLOCK_NONE, as pointed out by @TTTTao725. Working on a PR to fix this issue.
The reason the first execution terminated is HARM_CATEGORY_DANGEROUS_CONTENT, so that is why nothing was returned: the response got blocked! Therefore, set your safety configuration to BLOCK_NONE and you will not have this problem anymore 😃
https://cloud.google.com/vertex-ai/docs/generative-ai/multimodal/configure-safety-attributes
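For reference, the workaround the thread converges on can be sketched like this with the `vertexai.generative_models` SDK. This is a configuration sketch, not a tested fix: the model name is a placeholder, `image_description_prompt` and `image_for_gemini` are the variables from the traceback above, and running it requires a GCP project with Vertex AI enabled:

```python
from vertexai.generative_models import (
    GenerativeModel,
    HarmBlockThreshold,
    HarmCategory,
)

# Disable blocking for every harm category, including
# HARM_CATEGORY_DANGEROUS_CONTENT, which terminated the run above.
safety_settings = {
    HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HARASSMENT: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_HATE_SPEECH: HarmBlockThreshold.BLOCK_NONE,
    HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT: HarmBlockThreshold.BLOCK_NONE,
}

model = GenerativeModel("gemini-1.0-pro-vision")  # placeholder model name
response = model.generate_content(
    [image_description_prompt, image_for_gemini],
    safety_settings=safety_settings,
    stream=True,
)
```

Note that per the thread, even BLOCK_NONE does not eliminate every empty response (FinishReason.OTHER can still occur), so pairing this with a defensive check on `response.candidates[0].content.parts` before reading `.text` is advisable.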
Getting the same error “ValueError: Content has no parts.” on version google-cloud-aiplatform 1.40.0 today.