List data sets
- async AsyncCogniteClient.data_sets.list(
      metadata: dict[str, str] | None = None,
      created_time: dict[str, Any] | TimestampRange | None = None,
      last_updated_time: dict[str, Any] | TimestampRange | None = None,
      external_id_prefix: str | None = None,
      write_protected: bool | None = None,
      limit: int | None = 25,
  )
- Parameters:
metadata (dict[str, str] | None) – Custom, application-specific metadata. String key -> String value.
created_time (dict[str, Any] | TimestampRange | None) – Range between two timestamps.
last_updated_time (dict[str, Any] | TimestampRange | None) – Range between two timestamps.
external_id_prefix (str | None) – Filter by this (case-sensitive) prefix for the external ID.
write_protected (bool | None) – Filter by whether the data sets are write-protected. Set to True to list only write-protected data sets.
limit (int | None) – Maximum number of data sets to return. Defaults to 25. Set to -1, float("inf") or None to return all items.
- Returns:
List of requested data sets
- Return type:
DataSetList
Examples
List data sets and filter on write_protected:
>>> from cognite.client import CogniteClient, AsyncCogniteClient
>>> client = CogniteClient()
>>> # async_client = AsyncCogniteClient() # another option
>>> data_sets_list = client.data_sets.list(limit=5, write_protected=False)
Iterate over data sets, one-by-one:
>>> for data_set in client.data_sets():
...     data_set # do something with the data set
Iterate over chunks of data sets to reduce memory load:
>>> for data_set_list in client.data_sets(chunk_size=2500):
...     data_set_list # do something with the list
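The created_time and last_updated_time filters also accept a plain dict range. A minimal, SDK-free sketch of building such a range, assuming the dict uses "min" and "max" keys holding milliseconds since the Unix epoch (the convention used by TimestampRange):

```python
from datetime import datetime, timezone

def to_ms(dt: datetime) -> int:
    """Convert a timezone-aware datetime to milliseconds since the Unix epoch."""
    return int(dt.timestamp() * 1000)

# A created_time filter covering all of 2023 (UTC), as a plain dict.
created_time = {
    "min": to_ms(datetime(2023, 1, 1, tzinfo=timezone.utc)),
    "max": to_ms(datetime(2024, 1, 1, tzinfo=timezone.utc)),
}
print(created_time)  # {'min': 1672531200000, 'max': 1704067200000}
```

The resulting dict could then be passed directly, e.g. `client.data_sets.list(created_time=created_time)`.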