generative-ai-python: "1" is blocked for safety reasons? Seriously?
Description of the bug:
When I send "1" to Gemini Pro, it raises an exception:

ValueError: The response.parts quick accessor only works for a single candidate, but none were returned. Check the response.prompt_feedback to see if the prompt was blocked.
Then I printed response.prompt_feedback and got this:

block_reason: SAFETY
safety_ratings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  probability: NEGLIGIBLE
}
safety_ratings {
  category: HARM_CATEGORY_HATE_SPEECH
  probability: LOW
}
safety_ratings {
  category: HARM_CATEGORY_HARASSMENT
  probability: MEDIUM
}
safety_ratings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  probability: NEGLIGIBLE
}
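Reading response.parts or response.text is what raises when no candidates come back, so the block reason has to be checked first. A minimal sketch of a guarded read, assuming the google-generativeai SDK (the API key is a placeholder):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

model = genai.GenerativeModel("gemini-pro")
response = model.generate_content("1")

# response.text raises ValueError when the prompt was blocked and no
# candidates were returned, so inspect prompt_feedback before reading it.
if response.prompt_feedback.block_reason:
    print(response.prompt_feedback)
else:
    print(response.text)
```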
Then I went to AI Studio:

So can anybody tell me what happened?

Actual vs expected behavior:

No response
Any other information you’d like to share?
No response
About this issue
- Original URL
- State: closed
- Created 7 months ago
- Reactions: 3
- Comments: 17
This is my setup:
Try setting the safety thresholds to BLOCK_NONE. I ran it successfully with that.
safety_settings {
  category: HARM_CATEGORY_SEXUALLY_EXPLICIT
  threshold: BLOCK_NONE
}
safety_settings {
  category: HARM_CATEGORY_HATE_SPEECH
  threshold: BLOCK_NONE
}
safety_settings {
  category: HARM_CATEGORY_HARASSMENT
  threshold: BLOCK_NONE
}
safety_settings {
  category: HARM_CATEGORY_DANGEROUS_CONTENT
  threshold: BLOCK_NONE
}
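BLOCK_NONE is a threshold value in safety_settings, not a probability. A minimal sketch of passing those settings, again assuming the google-generativeai SDK with a placeholder key:

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key

# BLOCK_NONE disables blocking per category; these are the four
# categories that appear in the prompt_feedback dump above.
safety_settings = [
    {"category": "HARM_CATEGORY_SEXUALLY_EXPLICIT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HATE_SPEECH", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_NONE"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

model = genai.GenerativeModel("gemini-pro", safety_settings=safety_settings)
print(model.generate_content("1").text)
```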
Same here. I'm running the API over an eval dataset and it won't finish the full run. If this is rate limiting, then at least say so clearly.
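If the aborts come from blocked prompts rather than rate limits, recording the block and continuing lets the full run finish. A sketch under the same SDK assumption (eval_prompts is a hypothetical list standing in for the dataset):

```python
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-pro")

eval_prompts = ["1", "2 + 2", "hello"]  # hypothetical stand-in dataset

results = []
for prompt in eval_prompts:
    response = model.generate_content(prompt)
    if response.prompt_feedback.block_reason:
        # Record the block instead of crashing on response.text.
        results.append({"prompt": prompt, "blocked": str(response.prompt_feedback)})
    else:
        results.append({"prompt": prompt, "text": response.text})
```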
The same here. lmfao.
API-level censorship is stupid.