Exporting models

As well as accessing model attributes directly via their names (e.g. model.foobar), models can be converted and exported in a number of ways:

model.dict(...)

The primary way of converting a model to a dictionary. Sub-models will be recursively converted to dictionaries.

Arguments:

  • include: fields to include in the returned dictionary, see below
  • exclude: fields to exclude from the returned dictionary, see below
  • by_alias: whether field aliases should be used as keys in the returned dictionary, default False
  • skip_defaults: whether fields which were not set when creating the model and which have their default values should be excluded from the returned dictionary, default False

Example:

from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(m.dict())
# (returns a dictionary)
#> {'banana': 3.14, 'foo': 'hello', 'bar': {'whatever': 123}}

print(m.dict(include={'foo', 'bar'}))
#> {'foo': 'hello', 'bar': {'whatever': 123}}

print(m.dict(exclude={'foo', 'bar'}))
#> {'banana': 3.14}

(This script is complete, it should run "as is")
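The by_alias argument works together with field aliases. As a minimal sketch (the userId alias here is hypothetical, defined via Field purely for illustration):

```python
from pydantic import BaseModel, Field

class User(BaseModel):
    # 'userId' is a hypothetical alias used only for this illustration
    user_id: int = Field(..., alias='userId')

u = User(userId=1)

print(u.dict())
#> {'user_id': 1}

print(u.dict(by_alias=True))
#> {'userId': 1}
```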

dict(model) and iteration

pydantic models can also be converted to dictionaries using dict(model), and you can iterate over a model's fields using for field_name, value in model:. In this case the raw field values are returned, e.g. sub-models will not be converted to dictionaries.

Example:

from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(dict(m))
#> {'banana': 3.14, 'foo': 'hello', 'bar': <BarModel whatever=123>}

for name, value in m:
    print(f'{name}: {value}')

#> banana: 3.14
#> foo: hello
#> bar: BarModel whatever=123

(This script is complete, it should run "as is")

model.copy(...)

copy() allows models to be duplicated, which is particularly useful for immutable models.

Arguments:

  • include: fields to include in the new model, see below
  • exclude: fields to exclude from the new model, see below
  • update: a dictionary of values to change when creating the new model
  • deep: whether to make a deep copy of the new model, default False

Example:

from pydantic import BaseModel

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    banana: float
    foo: str
    bar: BarModel

m = FooBarModel(banana=3.14, foo='hello', bar={'whatever': 123})

print(m.copy(include={'foo', 'bar'}))
#> FooBarModel foo='hello' bar=<BarModel whatever=123>

print(m.copy(exclude={'foo', 'bar'}))
#> FooBarModel banana=3.14

print(m.copy(update={'banana': 0}))
#> FooBarModel banana=0 foo='hello' bar=<BarModel whatever=123>

print(id(m.bar), id(m.copy().bar))
# normal copy gives the same object reference for `bar`
#> 140494497582280 140494497582280

print(id(m.bar), id(m.copy(deep=True).bar))
# deep copy gives a new object reference for `bar`
#> 140494497582280 140494497582856

(This script is complete, it should run "as is")

model.json(...)

The .json() method will serialise a model to JSON. Typically, .json() in turn calls .dict() and serialises its result. (For models with a custom root type, after calling .dict(), only the value for the __root__ key is serialised.)

Serialisation can be customised on a model using the json_encoders config property; the keys should be types and the values should be functions which serialise those types, see the example below.

Arguments:

  • include: fields to include in the returned dictionary, see below
  • exclude: fields to exclude from the returned dictionary, see below
  • by_alias: whether field aliases should be used as keys in the returned dictionary, default False
  • skip_defaults: whether fields which were not set when creating the model and which have their default values should be excluded from the returned dictionary, default False
  • encoder: a custom encoder function passed to the default argument of json.dumps(), defaults to a custom encoder designed to take care of all common types
  • **dumps_kwargs: any other keyword arguments are passed to json.dumps(), e.g. indent.

Example:

from datetime import datetime, timedelta
from pydantic import BaseModel
from pydantic.json import timedelta_isoformat

class BarModel(BaseModel):
    whatever: int

class FooBarModel(BaseModel):
    foo: datetime
    bar: BarModel

m = FooBarModel(foo=datetime(2032, 6, 1, 12, 13, 14), bar={'whatever': 123})
print(m.json())
# (returns a str)
#> {"foo": "2032-06-01T12:13:14", "bar": {"whatever": 123}}

class WithCustomEncoders(BaseModel):
    dt: datetime
    diff: timedelta

    class Config:
        json_encoders = {
            datetime: lambda v: v.timestamp(),
            timedelta: timedelta_isoformat,
        }

m = WithCustomEncoders(dt=datetime(2032, 6, 1), diff=timedelta(hours=100))
print(m.json())
#> {"dt": 1969660800.0, "diff": "P4DT4H0M0.000000S"}

(This script is complete, it should run "as is")

By default, timedeltas are encoded as a simple float of the total seconds. timedelta_isoformat is provided as an optional alternative which implements ISO 8601 time diff encoding.

See below for details on how to use other libraries for more performant JSON encoding and decoding.

pickle.dumps(model)

Using the same plumbing as copy(), pydantic models support efficient pickling and unpickling.

import pickle
from pydantic import BaseModel

class FooBarModel(BaseModel):
    a: str
    b: int

m = FooBarModel(a='hello', b=123)
print(m)
#> FooBarModel a='hello' b=123

data = pickle.dumps(m)
print(data)
#> b'\x80\x03c...'

m2 = pickle.loads(data)
print(m2)
#> FooBarModel a='hello' b=123

(This script is complete, it should run "as is")

Advanced include and exclude

The dict, json and copy methods support include and exclude arguments which can either be sets or dictionaries, allowing nested selection of which fields to export:

from pydantic import BaseModel, SecretStr

class User(BaseModel):
    id: int
    username: str
    password: SecretStr

class Transaction(BaseModel):
    id: str
    user: User
    value: int

transaction = Transaction(
    id="1234567890",
    user=User(
        id=42,
        username="JohnDoe",
        password="hashedpassword"
    ),
    value=9876543210
)

# using a set:
print(transaction.dict(exclude={'user', 'value'}))
#> {'id': '1234567890'}

# using a dict:
print(transaction.dict(exclude={'user': {'username', 'password'}, 'value': ...}))
#> {'id': '1234567890', 'user': {'id': 42}}

print(transaction.dict(include={'id': ..., 'user': {'id'}}))
#> {'id': '1234567890', 'user': {'id': 42}}

The ellipsis (...) indicates that we want to exclude or include an entire key, just as if we had included it in a set.
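For example, with a simple two-field model (hypothetical, used only for this illustration), a plain set and a dict mapping keys to ... select exactly the same fields:

```python
from pydantic import BaseModel

class Item(BaseModel):
    a: int
    b: int

item = Item(a=1, b=2)

# a plain set and a dict whose values are all ... are equivalent:
print(item.dict(include={'a'}))
#> {'a': 1}

print(item.dict(include={'a': ...}))
#> {'a': 1}
```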

Of course, the same can be done at any depth:

import datetime
from typing import List

from pydantic import BaseModel, SecretStr

class Country(BaseModel):
    name: str
    phone_code: int

class Address(BaseModel):
    post_code: int
    country: Country

class CardDetails(BaseModel):
    number: SecretStr
    expires: datetime.date

class Hobby(BaseModel):
    name: str
    info: str

class User(BaseModel):
    first_name: str
    second_name: str
    address: Address
    card_details: CardDetails
    hobbies: List[Hobby]

user = User(
    first_name='John',
    second_name='Doe',
    address=Address(
        post_code=123456,
        country=Country(
            name='USA',
            phone_code=1
        )
    ),
    card_details=CardDetails(
        number=4212934504460000,
        expires=datetime.date(2020, 5, 1)
    ),
    hobbies=[
        Hobby(name='Programming', info='Writing code and stuff'),
        Hobby(name='Gaming', info='Hell Yeah!!!')
    ]

)

exclude_keys = {
    'second_name': ...,
    'address': {'post_code': ..., 'country': {'phone_code'}},
    'card_details': ...,
    'hobbies': {-1: {'info'}},  # values in tuples and lists can be excluded by index
}

include_keys = {
    'first_name': ...,
    'address': {'country': {'name'}},
    'hobbies': {0: ..., -1: {'name'}}
}

print(
    user.dict(include=include_keys) == user.dict(exclude=exclude_keys) == {
        'first_name': 'John',
        'address': {'country': {'name': 'USA'}},
        'hobbies': [
            {'name': 'Programming', 'info': 'Writing code and stuff'},
            {'name': 'Gaming'}
        ]
    }
)
#> True

The same goes for the json and copy methods.
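As a quick sketch (the model here is hypothetical), the same exclude argument can be passed to json and copy:

```python
import json
from pydantic import BaseModel

class User(BaseModel):
    id: int
    password: str

user = User(id=1, password='secret')

# the same exclude argument works for json() ...
print(user.json(exclude={'password'}))

# ... and for copy()
print(user.copy(exclude={'password'}))
```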

Custom JSON (de)serialisation

To improve the performance of encoding and decoding JSON, alternative JSON implementations can be used via the json_loads and json_dumps properties of Config, e.g. ujson.

from datetime import datetime
import ujson
from pydantic import BaseModel

class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = ujson.loads

user = User.parse_raw('{"id": 123, "signup_ts": 1234567890, "name": "John Doe"}')
print(user)
#> User id=123 signup_ts=datetime.datetime(2009, 2, 13, 23, 31, 30, tzinfo=datetime.timezone.utc) name='John Doe'

(This script is complete, it should run "as is")

ujson generally cannot be used to dump JSON, since it doesn't support encoding of objects like datetimes and does not accept a default fallback function argument. For this you may wish to use another library such as orjson.

from datetime import datetime
import orjson
from pydantic import BaseModel

def orjson_dumps(v, *, default):
    # orjson.dumps returns bytes; decode to match the str returned by standard json.dumps
    return orjson.dumps(v, default=default).decode()

class User(BaseModel):
    id: int
    name = 'John Doe'
    signup_ts: datetime = None

    class Config:
        json_loads = orjson.loads
        json_dumps = orjson_dumps


user = User.parse_raw('{"id": 123, "signup_ts": 1234567890, "name": "John Doe"}')
print(user.json())
#> {"id":123,"signup_ts":"2009-02-13T23:31:30+00:00","name":"John Doe"}

(This script is complete, it should run "as is")

Note that orjson takes care of datetime encoding natively, making it faster than json.dumps but meaning you cannot always customise encoding using Config.json_encoders.