pydantic: Partial update of nested model via dump -> copy -> parse doesn't work on v2
Initial Checks
- I confirm that I’m using Pydantic V2
Description
With v1, a partial update of a model instance from another instance could be done via:

data = input_model.dict(exclude_unset=True)
updated_model = model.copy(update=data)

(this is even a recommended approach in the FastAPI docs).

There's a caveat, though. If those models have nested attributes/values (i.e. sub-models), the resulting `updated_model` will have a plain dict as the value instead of the sub-model instance.

To fix/work around this problem in v1, one could use this trick:

updated_model = updated_model.parse_obj(updated_model)

In v2 this trick no longer works. Is there any way to parse/validate nested values/objects like this?
Example Code
from pydantic import BaseModel


class SubModel(BaseModel):
    foo: int


class Model(BaseModel):
    bar: str
    sub: SubModel


model = Model(bar="bar", sub={"foo": 0})

# check that foo on sub is 0
assert model.sub.foo == 0


class InputModel(BaseModel):
    sub: SubModel


input_ = InputModel(sub={"foo": 1})

input_data = input_.dict(exclude_unset=True)
updated_m = model.copy(update=input_data)

# now check that foo on sub is 1
# this will raise AttributeError
try:
    assert updated_m.sub.foo == 1
except AttributeError as ex:
    assert str(ex) == "'dict' object has no attribute 'foo'"
else:
    assert False, "were expecting AttributeError"

# trick in v1 to fix this
updated_m = updated_m.parse_obj(updated_m)
assert updated_m.sub.foo == 1
With v2, the above code still raises:

AttributeError: 'dict' object has no attribute 'foo'

This is the case even after replacing the deprecated v1 method names:
--- /tmp/model_update.py 2023-09-10 13:16:14
+++ /tmp/model_update_v2.py 2023-09-10 13:16:24
@@ -22,8 +22,8 @@
input_ = InputModel(sub={"foo": 1})
-input_data = input_.dict(exclude_unset=True)
-updated_m = model.copy(update=input_data)
+input_data = input_.model_dump(exclude_unset=True)
+updated_m = model.model_copy(update=input_data)
# now check that foo on sub is 1
# this will raise AttributeError
@@ -35,5 +35,5 @@
assert False, "were expecting AttributeError"
# trick in v1 to fix this
-updated_m = updated_m.parse_obj(updated_m)
+updated_m = updated_m.model_validate(updated_m)
assert updated_m.sub.foo == 1
Interestingly, if `input_data` contained invalid data (e.g. `"x"` instead of `1`), calling `model_validate` would still pass.
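For reference, one way to make `model_validate` actually re-check an existing model instance is pydantic v2's `revalidate_instances` model config setting. The sketch below is illustrative (it is not the original reporter's code, and assumes the minimal models from the example above):

```python
from pydantic import BaseModel, ConfigDict


class SubModel(BaseModel):
    foo: int


class Model(BaseModel):
    # opt in to re-validating Model instances passed back into model_validate
    model_config = ConfigDict(revalidate_instances="always")

    bar: str
    sub: SubModel


model = Model(bar="bar", sub={"foo": 0})

# model_copy(update=...) performs no validation, so `sub` stays a plain dict
updated = model.model_copy(update={"sub": {"foo": 1}})
assert isinstance(updated.sub, dict)

# with revalidate_instances="always", model_validate re-runs validation
# and coerces the nested dict back into a SubModel
revalidated = Model.model_validate(updated)
assert isinstance(revalidated.sub, SubModel)
assert revalidated.sub.foo == 1
```

Note that `revalidate_instances` defaults to `"never"`, which is why `model_validate` on an instance is a no-op by default.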
Python, Pydantic & OS Version
pydantic version: 2.3.0
pydantic-core version: 2.6.3
pydantic-core build: profile=release pgo=false
install path: /Users/slafs/.pyenv/versions/3.10.9/envs/tapper-core-pydantic-v2-py310/lib/python3.10/site-packages/pydantic
python version: 3.10.9 (main, Dec 21 2022, 11:47:15) [Clang 12.0.5 (clang-1205.0.22.9)]
platform: macOS-13.5.2-x86_64-i386-64bit
optional deps. installed: ['email-validator', 'typing-extensions']
About this issue
- Original URL
- State: closed
- Created 10 months ago
- Comments: 17 (15 by maintainers)
@slafs
Sure thing. Thanks for following up with MREs and detailed explanations. Makes things much easier on our end 😄.
Ah! Perfect! Yeah, that seems to be the missing bit 👍. Thank you! 🙇
Hey @sydney-runkle. Thanks for the summary. I agree, except that in 1. notice that the serialisation warning/error happens on `updated_m.model_dump()`, hence my suggestion about "copy w. update" being the problem. Apart from that 👌.

@slafs, ah, gotcha! Happy to reopen and look into this further. Thanks for clarifying!
I’m not sure what is supposed to happen, but yeah, I’d expect it to either error or validate the object.
I ran into this same issue. My workaround to force validation to run:
Right. I know this is documented, but I find it weird that one cannot tell in any way whether that model copy is valid in general or not (in v1 this was possible with `parse_obj`).

No, I mean `model_validate` 😃

Not when you’ve used `.model_copy(update=...)`

To narrow down the discussion, let’s get rid of `InputModel` and some assertion checks:

Yep. That’s what I ended up with for now in my project (not using copy, and merging dicts from the original object’s dump and the input’s dump). This workaround obviously works, but it’s not why I’ve opened this issue 😃.
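For completeness, the merge-dicts approach mentioned above can be sketched like this (a minimal illustration using the models from the original example, not the project's actual code; note the merge is shallow, so a nested sub-model is replaced wholesale rather than deep-merged):

```python
from pydantic import BaseModel


class SubModel(BaseModel):
    foo: int


class Model(BaseModel):
    bar: str
    sub: SubModel


class InputModel(BaseModel):
    sub: SubModel


model = Model(bar="bar", sub={"foo": 0})
input_ = InputModel(sub={"foo": 1})

# merge plain dicts instead of using model_copy(update=...),
# then validate the merged data from scratch
merged = {**model.model_dump(), **input_.model_dump(exclude_unset=True)}
updated = Model.model_validate(merged)

assert isinstance(updated.sub, SubModel)  # a real sub-model, not a dict
assert updated.sub.foo == 1
```

Because `model_validate` runs on the merged plain data, invalid input fails loudly here instead of slipping through as it does with `model_copy(update=...)`.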