Time Series
Metadata
- Count of time series matching the specified filters and search.
- Upsert time series, i.e., update if it exists, and create if it does not exist.
Time Series Data classes
- class cognite.client.data_classes.time_series.SortableTimeSeriesProperty(value)
Bases: EnumProperty
An enumeration.
- class cognite.client.data_classes.time_series.TimeSeries(
- id: int,
- created_time: int,
- last_updated_time: int,
- is_step: bool,
- is_string: bool,
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- name: str | None = None,
- metadata: dict[str, str] | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- asset_id: int | None = None,
- description: str | None = None,
- security_categories: Sequence[int] | None = None,
- data_set_id: int | None = None,
- )
Bases: WriteableCogniteResourceWithClientRef[TimeSeriesWrite]
This represents a sequence of data points. The TimeSeries object is the metadata about the datapoints, and the Datapoint object is the actual data points. This is the read version of TimeSeries, which is used when retrieving from CDF.
- Parameters:
id (int) – A server-generated ID for the object.
created_time (int) – The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.
last_updated_time (int) – The number of milliseconds since 00:00:00 Thursday, 1 January 1970, Coordinated Universal Time (UTC), minus leap seconds.
is_step (bool) – Whether the time series is a step series or not.
is_string (bool) – Whether the time series is string valued or not.
external_id (str | None) – The externally supplied ID for the time series.
instance_id (NodeId | None) – The Instance ID for the time series. (Only applicable for time series created in DMS)
name (str | None) – The display short name of the time series.
metadata (dict[str, str] | None) – Custom, application-specific metadata. String key -> String value. Limits: Maximum length of key is 32 bytes, value 512 bytes, up to 16 key-value pairs.
unit (str | None) – The physical unit of the time series.
unit_external_id (str | None) – The physical unit of the time series (reference to unit catalog). Only available for numeric time series.
asset_id (int | None) – Asset ID of equipment linked to this time series.
description (str | None) – Description of the time series.
security_categories (Sequence[int] | None) – The required security categories to access this time series.
data_set_id (int | None) – The dataSet ID for the item.
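The created_time and last_updated_time fields above are plain integers of epoch milliseconds. A stdlib-only sketch (not part of the SDK) of converting between that representation and datetime objects:

```python
from datetime import datetime, timezone

def ms_to_datetime(ms: int) -> datetime:
    """Convert epoch milliseconds (as used by created_time / last_updated_time)
    to a timezone-aware UTC datetime."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc)

def datetime_to_ms(dt: datetime) -> int:
    """Convert an aware datetime back to epoch milliseconds."""
    return int(dt.timestamp() * 1000)

created_time = 1_700_000_000_000  # example value: ms since 1970-01-01 UTC
dt = ms_to_datetime(created_time)
```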
- as_write() TimeSeriesWrite
Convert the time series to a writeable version.
- Returns:
A writeable version of this time series.
- Return type:
TimeSeriesWrite
- Raises:
ValueError – If the time series has an instance_id as these must be created via the Data Modeling API.
- asset() Asset
Returns the asset this time series belongs to.
- Returns:
The asset given by its asset_id.
- Return type:
Asset
- Raises:
ValueError – If asset_id is missing.
- async asset_async() Asset
Returns the asset this time series belongs to.
- Returns:
The asset given by its asset_id.
- Return type:
Asset
- Raises:
ValueError – If asset_id is missing.
- count() int
Returns the number of datapoints in this time series.
This result may not be completely accurate, as it is based on aggregates which may be occasionally out of date.
- Returns:
The number of datapoints in this time series.
- Return type:
int
- Raises:
RuntimeError – If the time series is string, as the count aggregate is only supported for numeric data.
- async count_async() int
Returns the number of datapoints in this time series.
This result may not be completely accurate, as it is based on aggregates which may be occasionally out of date.
- Returns:
The number of datapoints in this time series.
- Return type:
int
- Raises:
RuntimeError – If the time series is string, as the count aggregate is only supported for numeric data.
- dump(camel_case: bool = True) dict[str, Any]
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
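dump(camel_case=True) renders snake_case attribute names as camelCase JSON keys (e.g. external_id becomes externalId). A stdlib sketch of that key conversion, for illustration only; the SDK's own implementation may differ:

```python
def to_camel_case(snake: str) -> str:
    """Convert a snake_case attribute name to a camelCase JSON key."""
    first, *rest = snake.split("_")
    return first + "".join(word.capitalize() for word in rest)

attrs = {"external_id": "ts1", "is_step": False, "data_set_id": 42}
dumped = {to_camel_case(k): v for k, v in attrs.items()}
# dumped == {"externalId": "ts1", "isStep": False, "dataSetId": 42}
```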
- first() Datapoint | None
Returns the first datapoint in this time series. If empty, returns None.
- Returns:
A datapoint object containing the value and timestamp of the first datapoint.
- Return type:
Datapoint | None
- async first_async() Datapoint | None
Returns the first datapoint in this time series. If empty, returns None.
- Returns:
A datapoint object containing the value and timestamp of the first datapoint.
- Return type:
Datapoint | None
- latest(
- before: int | str | datetime | None = None,
- )
Returns the latest datapoint in this time series.
- Parameters:
before (int | str | datetime | None) – Get latest datapoint before this time.
- Returns:
A datapoint object containing the value and timestamp of the latest datapoint.
- Return type:
Datapoint
- async latest_async(
- before: int | str | datetime | None = None,
- )
Returns the latest datapoint in this time series.
- Parameters:
before (int | str | datetime | None) – Get latest datapoint before this time.
- Returns:
A datapoint object containing the value and timestamp of the latest datapoint.
- Return type:
Datapoint
- class cognite.client.data_classes.time_series.TimeSeriesFilter(
- name: str | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- unit_quantity: str | None = None,
- is_string: bool | None = None,
- is_step: bool | None = None,
- metadata: dict[str, str] | None = None,
- asset_ids: Sequence[int] | None = None,
- asset_external_ids: SequenceNotStr[str] | None = None,
- asset_subtree_ids: Sequence[dict[str, Any]] | None = None,
- data_set_ids: Sequence[dict[str, Any]] | None = None,
- external_id_prefix: str | None = None,
- created_time: dict[str, Any] | TimestampRange | None = None,
- last_updated_time: dict[str, Any] | TimestampRange | None = None,
- )
Bases: CogniteFilter
No description.
- Parameters:
name (str | None) – Filter on name.
unit (str | None) – Filter on unit.
unit_external_id (str | None) – Filter on unit external ID.
unit_quantity (str | None) – Filter on unit quantity.
is_string (bool | None) – Filter on isString.
is_step (bool | None) – Filter on isStep.
metadata (dict[str, str] | None) – Custom, application specific metadata. String key -> String value. Limits: Maximum length of key is 32 bytes, value 512 bytes, up to 16 key-value pairs.
asset_ids (Sequence[int] | None) – Only include time series that reference these specific asset IDs.
asset_external_ids (SequenceNotStr[str] | None) – Asset External IDs of related equipment that this time series relates to.
asset_subtree_ids (Sequence[dict[str, Any]] | None) – Only include time series that are related to an asset in a subtree rooted at any of these asset IDs or external IDs. If the total size of the given subtrees exceeds 100,000 assets, an error will be returned.
data_set_ids (Sequence[dict[str, Any]] | None) – No description.
external_id_prefix (str | None) – Filter by this (case-sensitive) prefix for the external ID.
created_time (dict[str, Any] | TimestampRange | None) – Range between two timestamps.
last_updated_time (dict[str, Any] | TimestampRange | None) – Range between two timestamps.
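created_time and last_updated_time accept a TimestampRange or a plain dict. As a sketch of the dict form, assuming the CDF "min"/"max" keys with bounds in epoch milliseconds (verify the key names against the API reference before relying on them):

```python
from datetime import datetime, timezone

def epoch_ms(dt: datetime) -> int:
    """Epoch milliseconds for an aware datetime."""
    return int(dt.timestamp() * 1000)

# Filter for time series created during 2023 (UTC); "min"/"max" keys
# assumed to follow the CDF TimestampRange convention.
created_time_filter = {
    "min": epoch_ms(datetime(2023, 1, 1, tzinfo=timezone.utc)),
    "max": epoch_ms(datetime(2024, 1, 1, tzinfo=timezone.utc)),
}
```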
- class cognite.client.data_classes.time_series.TimeSeriesList(
- resources: Sequence[T_CogniteResource],
- )
Bases: WriteableCogniteResourceList[TimeSeriesWrite, TimeSeries], IdTransformerMixin
- class cognite.client.data_classes.time_series.TimeSeriesProperty(value)
Bases: EnumProperty
An enumeration.
- class cognite.client.data_classes.time_series.TimeSeriesUpdate(
- id: int | None = None,
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- )
Bases: CogniteUpdate
Changes will be applied to the time series.
- Parameters:
id (int | None) – A server-generated ID for the object.
external_id (str | None) – The external ID provided by the client. Must be unique for the resource type.
instance_id (NodeId | None) – The ID of the instance this time series belongs to.
- dump(
- camel_case: Literal[True] = True,
- )
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (Literal[True]) – Update objects are always dumped with camelCase attribute names; only True is accepted.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- class cognite.client.data_classes.time_series.TimeSeriesWrite(
- external_id: str | None = None,
- name: str | None = None,
- is_string: bool | None = None,
- metadata: dict[str, str] | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- asset_id: int | None = None,
- is_step: bool | None = None,
- description: str | None = None,
- security_categories: Sequence[int] | None = None,
- data_set_id: int | None = None,
- )
Bases: WriteableCogniteResource[TimeSeriesWrite]
This is the write version of TimeSeries, which is used when writing to CDF.
- Parameters:
external_id (str | None) – The externally supplied ID for the time series.
name (str | None) – The display short name of the time series.
is_string (bool | None) – Whether the time series is string valued or not.
metadata (dict[str, str] | None) – Custom, application-specific metadata. String key -> String value. Limits: Maximum length of key is 32 bytes, value 512 bytes, up to 16 key-value pairs.
unit (str | None) – The physical unit of the time series.
unit_external_id (str | None) – The physical unit of the time series (reference to unit catalog). Only available for numeric time series.
asset_id (int | None) – Asset ID of equipment linked to this time series.
is_step (bool | None) – Whether the time series is a step series or not.
description (str | None) – Description of the time series.
security_categories (Sequence[int] | None) – The required security categories to access this time series.
data_set_id (int | None) – The dataSet ID for the item.
- as_write() TimeSeriesWrite
Returns this TimeSeriesWrite object.
- class cognite.client.data_classes.time_series.TimeSeriesWriteList(
- resources: Sequence[T_CogniteResource],
- )
Bases: CogniteResourceList[TimeSeriesWrite], ExternalIDTransformerMixin
Synthetic time series
Datapoints
- Delete a range of datapoints from a time series.
- Insert datapoints into a time series.
- Insert a dataframe containing datapoints to one or more time series.
- Get datapoints directly in a pandas dataframe.
Datapoints Data classes
- class cognite.client.data_classes.datapoints.Datapoint(
- timestamp: int,
- value: str | float | None = None,
- average: float | None = None,
- max: float | None = None,
- max_datapoint: MaxDatapoint | MaxDatapointWithStatus | None = None,
- min: float | None = None,
- min_datapoint: MinDatapoint | MinDatapointWithStatus | None = None,
- count: int | None = None,
- sum: float | None = None,
- interpolation: float | None = None,
- step_interpolation: float | None = None,
- continuous_variance: float | None = None,
- discrete_variance: float | None = None,
- total_variation: float | None = None,
- count_bad: int | None = None,
- count_good: int | None = None,
- count_uncertain: int | None = None,
- duration_bad: int | None = None,
- duration_good: int | None = None,
- duration_uncertain: int | None = None,
- status_code: int | None = None,
- status_symbol: str | None = None,
- timezone: timezone | ZoneInfo | None = None,
- )
Bases: CogniteResource
An object representing a datapoint.
- Parameters:
timestamp (int) – The data timestamp in milliseconds since the epoch (Jan 1, 1970). Can be negative to define a date before 1970. Minimum timestamp is 1900.01.01 00:00:00 UTC
value (str | float | None) – The raw data value. Can be string or numeric.
average (float | None) – The time-weighted average value in the aggregate interval.
max (float | None) – The maximum value in the aggregate interval.
max_datapoint (MaxDatapoint | MaxDatapointWithStatus | None) – Objects with the maximum values and their timestamps in the aggregate intervals, optionally including status codes and symbols.
min (float | None) – The minimum value in the aggregate interval.
min_datapoint (MinDatapoint | MinDatapointWithStatus | None) – Objects with the minimum values and their timestamps in the aggregate intervals, optionally including status codes and symbols.
count (int | None) – The number of raw datapoints in the aggregate interval.
sum (float | None) – The sum of the raw datapoints in the aggregate interval.
interpolation (float | None) – The interpolated value at the beginning of the aggregate interval.
step_interpolation (float | None) – The interpolated value at the beginning of the aggregate interval using stepwise interpretation.
continuous_variance (float | None) – The variance of the interpolated underlying function.
discrete_variance (float | None) – The variance of the datapoint values.
total_variation (float | None) – The total variation of the interpolated underlying function.
count_bad (int | None) – The number of raw datapoints with a bad status code, in the aggregate interval.
count_good (int | None) – The number of raw datapoints with a good status code, in the aggregate interval.
count_uncertain (int | None) – The number of raw datapoints with an uncertain status code, in the aggregate interval.
duration_bad (int | None) – The duration the aggregate is defined and marked as bad (measured in milliseconds).
duration_good (int | None) – The duration the aggregate is defined and marked as good (measured in milliseconds).
duration_uncertain (int | None) – The duration the aggregate is defined and marked as uncertain (measured in milliseconds).
status_code (int | None) – The status code for the raw datapoint.
status_symbol (str | None) – The status symbol for the raw datapoint.
timezone (datetime.timezone | ZoneInfo | None) – The timezone to use when displaying the datapoint.
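The average aggregate above is time-weighted rather than a plain mean of the raw values. A minimal stdlib sketch of the idea for a step series, where each value is held until the next timestamp (an illustration, not the SDK's aggregation code):

```python
def time_weighted_average(timestamps_ms: list[int], values: list[float], end_ms: int) -> float:
    """Average of a stepwise series over [timestamps_ms[0], end_ms),
    weighting each value by how long it was held."""
    total = 0.0
    for (t0, v), t1 in zip(zip(timestamps_ms, values), timestamps_ms[1:] + [end_ms]):
        total += v * (t1 - t0)
    return total / (end_ms - timestamps_ms[0])

# Value 10.0 held for 1 s, then 20.0 held for 3 s -> weighted average 17.5,
# whereas the plain mean of the raw values would be 15.0.
avg = time_weighted_average([0, 1000], [10.0, 20.0], end_ms=4000)
```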
- dump(
- camel_case: bool = True,
- include_timezone: bool = True,
- )
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- to_pandas(camel_case: bool = False) pandas.DataFrame
Convert the datapoint into a pandas DataFrame.
- Parameters:
camel_case (bool) – Convert column names to camel case (e.g. stepInterpolation instead of step_interpolation)
- Returns:
The DataFrame representation of the datapoint.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.Datapoints(
- id: int,
- is_string: bool,
- is_step: bool,
- type: Literal['numeric', 'string', 'state'],
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- granularity: str | None = None,
- timestamp: list[int] | None = None,
- value: list[str] | list[float] | None = None,
- average: list[float] | None = None,
- max: list[float] | None = None,
- max_datapoint: list[MaxDatapoint] | list[MaxDatapointWithStatus] | None = None,
- min: list[float] | None = None,
- min_datapoint: list[MinDatapoint] | list[MinDatapointWithStatus] | None = None,
- count: list[int] | None = None,
- sum: list[float] | None = None,
- interpolation: list[float] | None = None,
- step_interpolation: list[float] | None = None,
- continuous_variance: list[float] | None = None,
- discrete_variance: list[float] | None = None,
- total_variation: list[float] | None = None,
- count_bad: list[int] | None = None,
- count_good: list[int] | None = None,
- count_uncertain: list[int] | None = None,
- duration_bad: list[int] | None = None,
- duration_good: list[int] | None = None,
- duration_uncertain: list[int] | None = None,
- status_code: list[int] | None = None,
- status_symbol: list[str] | None = None,
- timezone: timezone | ZoneInfo | None = None,
- )
Bases: CogniteResource
An object representing a list of datapoints.
- Parameters:
id (int) – Id of the time series the datapoints belong to
is_string (bool) – Whether the time series contains numerical or string data.
is_step (bool) – Whether the time series is stepwise or continuous.
type (Literal['numeric', 'string', 'state']) – The type of the time series.
external_id (str | None) – External id of the time series the datapoints belong to
instance_id (NodeId | None) – The instance id of the time series the datapoints belong to
unit (str | None) – The physical unit of the time series (free-text field). Omitted if the datapoints were converted to another unit.
unit_external_id (str | None) – The unit_external_id (as defined in the unit catalog) of the returned data points. If the datapoints were converted to a compatible unit, this will equal the converted unit, not the one defined on the time series.
granularity (str | None) – The granularity of the aggregate datapoints (does not apply to raw data)
timestamp (list[int] | None) – The data timestamps in milliseconds since the epoch (Jan 1, 1970). Can be negative to define a date before 1970. Minimum timestamp is 1900.01.01 00:00:00 UTC
value (list[str] | list[float] | None) – The raw data values. Can be string or numeric.
average (list[float] | None) – The time-weighted average values per aggregate interval.
max (list[float] | None) – The maximum values per aggregate interval.
max_datapoint (list[MaxDatapoint] | list[MaxDatapointWithStatus] | None) – Objects with the maximum values and their timestamps in the aggregate intervals, optionally including status codes and symbols.
min (list[float] | None) – The minimum values per aggregate interval.
min_datapoint (list[MinDatapoint] | list[MinDatapointWithStatus] | None) – Objects with the minimum values and their timestamps in the aggregate intervals, optionally including status codes and symbols.
count (list[int] | None) – The number of raw datapoints per aggregate interval.
sum (list[float] | None) – The sum of the raw datapoints per aggregate interval.
interpolation (list[float] | None) – The interpolated values at the beginning of each aggregate interval.
step_interpolation (list[float] | None) – The interpolated values at the beginning of each aggregate interval, using stepwise interpretation.
continuous_variance (list[float] | None) – The variance of the interpolated underlying function.
discrete_variance (list[float] | None) – The variance of the datapoint values.
total_variation (list[float] | None) – The total variation of the interpolated underlying function.
count_bad (list[int] | None) – The number of raw datapoints with a bad status code, per aggregate interval.
count_good (list[int] | None) – The number of raw datapoints with a good status code, per aggregate interval.
count_uncertain (list[int] | None) – The number of raw datapoints with an uncertain status code, per aggregate interval.
duration_bad (list[int] | None) – The duration the aggregate is defined and marked as bad (measured in milliseconds).
duration_good (list[int] | None) – The duration the aggregate is defined and marked as good (measured in milliseconds).
duration_uncertain (list[int] | None) – The duration the aggregate is defined and marked as uncertain (measured in milliseconds).
status_code (list[int] | None) – The status codes for the raw datapoints.
status_symbol (list[str] | None) – The status symbols for the raw datapoints.
timezone (datetime.timezone | ZoneInfo | None) – The timezone to use when displaying the datapoints.
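The interpolation and step_interpolation aggregates differ in how the value at an interval boundary is estimated: linear interpolation between neighbouring datapoints versus holding the previous value. A stdlib sketch of the two interpretations (illustrative only):

```python
def linear_interp(t0: int, v0: float, t1: int, v1: float, t: int) -> float:
    """Linearly interpolated value at time t between two datapoints."""
    return v0 + (v1 - v0) * (t - t0) / (t1 - t0)

def step_interp(t0: int, v0: float, t1: int, v1: float, t: int) -> float:
    """Stepwise interpretation: the previous value is held until the next datapoint."""
    return v0 if t < t1 else v1

# Datapoints (0 ms, 10.0) and (1000 ms, 20.0); estimate the value at 250 ms:
lin = linear_interp(0, 10.0, 1000, 20.0, 250)   # 12.5
step = step_interp(0, 10.0, 1000, 20.0, 250)    # 10.0 (previous value held)
```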
- dump(camel_case: bool = True) dict[str, Any]
Dump the datapoints into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representing the instance.
- Return type:
dict[str, Any]
- to_pandas(
- include_aggregate_name: bool = True,
- include_granularity_name: bool = False,
- include_unit: bool = True,
- include_status: bool = True,
- )
Convert the datapoints into a pandas DataFrame.
- Parameters:
include_aggregate_name (bool) – Include aggregate in the dataframe columns, if present (separate MultiIndex level)
include_granularity_name (bool) – Include granularity in the dataframe columns, if present (separate MultiIndex level)
include_unit (bool) – Include the unit_external_id in the dataframe columns, if present (separate MultiIndex level)
include_status (bool) – Include status code and status symbol as separate columns, if available. Also adds the status info as a separate level in the columns (MultiIndex).
- Returns:
The dataframe.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.DatapointsArray(
- id: int,
- is_string: bool,
- is_step: bool,
- type: Literal['numeric', 'string', 'state'],
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- granularity: str | None = None,
- timestamp: NumpyDatetime64NSArray | None = None,
- value: NumpyFloat64Array | NumpyObjArray | None = None,
- average: NumpyFloat64Array | None = None,
- max: NumpyFloat64Array | None = None,
- max_datapoint: NumpyObjArray | None = None,
- min: NumpyFloat64Array | None = None,
- min_datapoint: NumpyObjArray | None = None,
- count: NumpyInt64Array | None = None,
- sum: NumpyFloat64Array | None = None,
- interpolation: NumpyFloat64Array | None = None,
- step_interpolation: NumpyFloat64Array | None = None,
- continuous_variance: NumpyFloat64Array | None = None,
- discrete_variance: NumpyFloat64Array | None = None,
- total_variation: NumpyFloat64Array | None = None,
- count_bad: NumpyInt64Array | None = None,
- count_good: NumpyInt64Array | None = None,
- count_uncertain: NumpyInt64Array | None = None,
- duration_bad: NumpyInt64Array | None = None,
- duration_good: NumpyInt64Array | None = None,
- duration_uncertain: NumpyInt64Array | None = None,
- status_code: NumpyUInt32Array | None = None,
- status_symbol: NumpyObjArray | None = None,
- null_timestamps: set[int] | None = None,
- timezone: datetime.timezone | ZoneInfo | None = None,
- )
Bases: CogniteResource
An object representing datapoints using numpy arrays.
- dump(
- camel_case: bool = True,
- convert_timestamps: bool = False,
- )
Dump the DatapointsArray into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
convert_timestamps (bool) – Convert timestamps to ISO 8601 formatted strings. Default: False (returns as integer, milliseconds since epoch)
- Returns:
A dictionary representing the instance.
- Return type:
dict[str, Any]
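convert_timestamps=True replaces the integer epoch-millisecond timestamps with ISO 8601 strings. A stdlib sketch of that conversion; the SDK's exact output format (e.g. the timezone suffix) may differ:

```python
from datetime import datetime, timezone

def ms_to_iso8601(ms: int) -> str:
    """Format epoch milliseconds as an ISO 8601 UTC string."""
    return datetime.fromtimestamp(ms / 1000, tz=timezone.utc).isoformat()

iso = ms_to_iso8601(0)  # "1970-01-01T00:00:00+00:00"
```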
- to_pandas(
- include_aggregate_name: bool = True,
- include_granularity_name: bool = False,
- include_unit: bool = True,
- include_status: bool = True,
- )
Convert the DatapointsArray into a pandas DataFrame.
- Parameters:
include_aggregate_name (bool) – Include aggregate in the dataframe columns, if present (separate MultiIndex level)
include_granularity_name (bool) – Include granularity in the dataframe columns, if present (separate MultiIndex level)
include_unit (bool) – Include the unit_external_id in the dataframe columns, if present (separate MultiIndex level)
include_status (bool) – Include status code and status symbol as separate columns, if available. Also adds the status info as a separate level in the columns (MultiIndex).
- Returns:
The datapoints as a pandas DataFrame.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.DatapointsArrayList(
- resources: Sequence[T_CogniteResource],
- )
Bases: CogniteResourceListWithClientRef[DatapointsArray]
- dump(
- camel_case: bool = True,
- convert_timestamps: bool = False,
- )
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
convert_timestamps (bool) – Convert timestamps to ISO 8601 formatted strings. Default: False (returns as integer, milliseconds since epoch)
- Returns:
A list of dicts representing the instance.
- Return type:
list[dict[str, Any]]
- get(
- id: int | None = None,
- external_id: str | None = None,
- instance_id: NodeId | tuple[str, str] | None = None,
- )
Get a specific DatapointsArray from this list by id or external_id.
Note
For duplicated time series, returns a list of DatapointsArray.
- Parameters:
id (int | None) – The id of the item(s) to get.
external_id (str | None) – The external_id of the item(s) to get.
instance_id (NodeId | tuple[str, str] | None) – The instance_id of the item(s) to get.
- Returns:
The requested item(s)
- Return type:
DatapointsArray | list[DatapointsArray] | None
- to_pandas(
- include_aggregate_name: bool = True,
- include_granularity_name: bool = False,
- include_unit: bool = True,
- include_status: bool = True,
- )
Convert the DatapointsArrayList into a pandas DataFrame.
- Parameters:
include_aggregate_name (bool) – Include aggregate in the dataframe columns, if present (separate MultiIndex level)
include_granularity_name (bool) – Include granularity in the dataframe columns, if present (separate MultiIndex level)
include_unit (bool) – Include the unit_external_id in the dataframe columns, if present (separate MultiIndex level)
include_status (bool) – Include status code and status symbol as separate columns, if available. Also adds the status info as a separate level in the columns (MultiIndex).
- Returns:
The datapoints as a pandas DataFrame.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.DatapointsList(
- resources: Sequence[T_CogniteResource],
- )
Bases: CogniteResourceListWithClientRef[Datapoints]
- get(
- id: int | None = None,
- external_id: str | None = None,
- instance_id: InstanceId | tuple[str, str] | None = None,
- )
Get a specific Datapoints from this list by id, external_id or instance_id.
Note
For duplicated time series, returns a list of Datapoints.
- Parameters:
id (int | None) – The id of the item(s) to get.
external_id (str | None) – The external_id of the item(s) to get.
instance_id (InstanceId | tuple[str, str] | None) – The instance_id of the item(s) to get.
- Returns:
The requested item(s)
- Return type:
Datapoints | list[Datapoints] | None
- to_pandas(
- include_aggregate_name: bool = True,
- include_granularity_name: bool = False,
- include_unit: bool = True,
- include_status: bool = True,
- )
Convert the datapoints list into a pandas DataFrame.
- Parameters:
include_aggregate_name (bool) – Include aggregate in the dataframe columns, if present (separate MultiIndex level)
include_granularity_name (bool) – Include granularity in the dataframe columns, if present (separate MultiIndex level)
include_unit (bool) – Include the unit_external_id in the dataframe columns, if present (separate MultiIndex level)
include_status (bool) – Include status code and status symbol as separate columns, if available. Also adds the status info as a separate level in the columns (MultiIndex).
- Returns:
The datapoints list as a pandas DataFrame.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.DatapointsQuery(
- id: InitVar[int | None] = None,
- external_id: InitVar[str | None] = None,
- instance_id: InitVar[NodeId | tuple[str, str] | None] = None,
- start: int | str | datetime.datetime = <object object>,
- end: int | str | datetime.datetime = <object object>,
- aggregates: Aggregate | list[Aggregate] | None = <object object>,
- granularity: str | None = <object object>,
- timezone: str | datetime.timezone | ZoneInfo | None = <object object>,
- target_unit: str | None = <object object>,
- target_unit_system: str | None = <object object>,
- limit: int | None = <object object>,
- include_outside_points: bool = <object object>,
- ignore_unknown_ids: bool = <object object>,
- include_status: bool = <object object>,
- ignore_bad_datapoints: bool = <object object>,
- treat_uncertain_as_bad: bool = <object object>,
- )
Bases: object
Represents a user request for datapoints for a single time series.
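The <object object> defaults in the signature above are sentinel values, which let the query distinguish "not provided" from an explicit None. A generic sketch of the sentinel pattern (not the SDK's actual implementation; build_query and its keys are illustrative):

```python
_NOT_SET = object()  # unique sentinel: distinguishes "omitted" from None

def build_query(limit=_NOT_SET, include_outside_points=_NOT_SET) -> dict:
    """Collect only the options the caller actually passed."""
    query = {}
    if limit is not _NOT_SET:
        query["limit"] = limit
    if include_outside_points is not _NOT_SET:
        query["includeOutsidePoints"] = include_outside_points
    return query

q1 = build_query(limit=None)  # {"limit": None} - explicitly unlimited
q2 = build_query()            # {} - nothing overridden
```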
- class cognite.client.data_classes.datapoints.LatestDatapoint(
- id: int,
- timestamp: datetime | None,
- value: str | float | None,
- is_string: bool,
- type: Literal['numeric', 'string', 'state'],
- before: datetime | None,
- is_step: bool | None = None,
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- unit: str | None = None,
- unit_external_id: str | None = None,
- status_code: int | None = None,
- status_symbol: str | None = None,
- )
Bases: CogniteResource
An object representing the latest datapoint for a time series.
This class combines time series metadata with at most one datapoint, optimized for the retrieve_latest method response.
- Parameters:
id (int) – Id of the time series the datapoint belongs to
timestamp (datetime.datetime | None) – The data timestamp. None if no datapoint exists.
value (str | float | None) – The data value. Can be string or numeric, or None if no datapoint exists or value is missing.
is_string (bool) – Whether the time series contains numerical or string data.
type (Literal['numeric', 'string', 'state']) – The type of the time series.
before (datetime.datetime | None) – The timestamp used as the ‘before’ parameter in the query that retrieved this datapoint.
is_step (bool | None) – Whether the time series is stepwise or continuous.
external_id (str | None) – External id of the time series the datapoint belongs to
instance_id (NodeId | None) – The instance id of the time series the datapoint belongs to
unit (str | None) – The physical unit of the time series (free-text field).
unit_external_id (str | None) – The unit_external_id of the returned data points.
status_code (int | None) – The status code for the datapoint.
status_symbol (str | None) – The status symbol for the datapoint.
- dump(camel_case: bool = True) dict[str, Any]
Dump the latest datapoint into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representing the instance.
- Return type:
dict[str, Any]
- property has_datapoint: bool
Whether a datapoint exists for this time series.
- to_pandas(camel_case: bool = False) pandas.DataFrame
Convert the latest datapoint into a pandas DataFrame.
- Parameters:
camel_case (bool) – Convert column names to camel case. Defaults to False.
- Returns:
The DataFrame representation of the latest datapoint.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.LatestDatapointList(
- resources: Sequence[T_CogniteResource],
- )
Bases: CogniteResourceListWithClientRef[LatestDatapoint], IdTransformerMixin
A list of LatestDatapoint objects.
This list is optimized for the retrieve_latest method, providing a to_pandas() method that creates a DataFrame with time series identifiers as index and timestamp/value as columns, avoiding sparse DataFrames when timestamps differ across time series.
- get(
- id: int | None = None,
- external_id: str | None = None,
- instance_id: InstanceId | tuple[str, str] | None = None,
- )
Get a specific LatestDatapoint from this list by id, external_id or instance_id.
Note
For duplicated time series, returns a list of LatestDatapoint.
- Parameters:
id (int | None) – The id of the item(s) to get.
external_id (str | None) – The external_id of the item(s) to get.
instance_id (InstanceId | tuple[str, str] | None) – The instance_id of the item(s) to get.
- Returns:
The requested item(s)
- Return type:
LatestDatapoint | list[LatestDatapoint] | None
- to_pandas(include_status: bool = True) pandas.DataFrame
Convert the latest datapoints list into a pandas DataFrame.
Creates a DataFrame with time series identifiers (preferring external_id, then id) as the index, and timestamp/value as columns. This format avoids sparse DataFrames when timestamps differ across time series.
- Parameters:
include_status (bool) – Include status_code and status_symbol columns if available. Default: True
- Returns:
- A DataFrame with columns ‘timestamp’, ‘value’ (and optionally
’status_code’, ‘status_symbol’) with time series identifiers as the index.
- Return type:
pandas.DataFrame
Examples
Get the latest datapoint for multiple time series and convert to DataFrame:
>>> from cognite.client import CogniteClient
>>> client = CogniteClient()
>>> latest = client.time_series.data.retrieve_latest(external_id=["ts1", "ts2", "ts3"])
>>> df = latest.to_pandas()
- class cognite.client.data_classes.datapoints.LatestDatapointQuery(
- id: InitVar[int | None] = None,
- external_id: InitVar[str | None] = None,
- instance_id: InitVar[NodeId | None] = None,
- before: None | int | str | datetime.datetime = None,
- target_unit: str | None = None,
- target_unit_system: str | None = None,
- include_status: bool | None = None,
- ignore_bad_datapoints: bool | None = None,
- treat_uncertain_as_bad: bool | None = None,
Bases:
object
Parameters describing a query for the latest datapoint from a time series.
Note
Pass either ID, external ID or instance ID.
- Parameters:
id (InitVar[int | None]) – The internal ID of the time series to query.
external_id (InitVar[str | None]) – The external ID of the time series to query.
instance_id (InitVar[NodeId | None]) – The instance ID of the time series to query.
before (None | int | str | datetime.datetime) – Get latest datapoint before this time. None means ‘now’.
target_unit (str | None) – The unit_external_id of the data points returned. If the time series does not have a unit_external_id that can be converted to the target_unit, an error will be returned. Cannot be used with target_unit_system.
target_unit_system (str | None) – The unit system of the data points returned. Cannot be used with target_unit.
include_status (bool | None) – Also return the status code, an integer, for each datapoint in the response.
ignore_bad_datapoints (bool | None) – Prevent data points with a bad status code from being returned. Default: True.
treat_uncertain_as_bad (bool | None) – Treat uncertain status codes as bad. If false, treat uncertain as good. Default: True.
- class cognite.client.data_classes.datapoints.MaxDatapoint(timestamp: 'int', value: 'float')
Bases:
MaxOrMinDatapoint
- class cognite.client.data_classes.datapoints.MaxDatapointWithStatus(
- timestamp: 'int',
- value: 'float',
- status_code: 'int',
- status_symbol: 'str',
Bases:
MaxDatapoint
- class cognite.client.data_classes.datapoints.MaxOrMinDatapoint
Bases:
object
- class cognite.client.data_classes.datapoints.MinDatapoint(timestamp: 'int', value: 'float')
Bases:
MaxOrMinDatapoint
- class cognite.client.data_classes.datapoints.MinDatapointWithStatus(
- timestamp: 'int',
- value: 'float',
- status_code: 'int',
- status_symbol: 'str',
Bases:
MinDatapoint
- class cognite.client.data_classes.datapoints.StatusCode(value)
Bases:
IntEnum
The three main categories of status codes
- class cognite.client.data_classes.datapoints.SyntheticDatapoints(
- expression: str,
- timestamp: list[int],
- value: list[float | None],
- error: list[str | None],
- is_string: bool,
- timezone: timezone | ZoneInfo | None = None,
Bases:
CogniteResource
- dump(camel_case: bool = True) dict[str, Any]
Dump the synthetic datapoints into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representing the instance.
- Return type:
dict[str, Any]
- to_pandas(include_errors: bool = True) pandas.DataFrame
Convert the synthetic datapoints into a pandas DataFrame.
Note
The error column is only included if
include_errors=True AND at least one error exists.
- Parameters:
include_errors (bool) – Whether to include the error column. Defaults to True, but will be skipped if there are no errors.
- Returns:
A DataFrame with timestamp as index and columns for the expression value and optionally error.
- Return type:
pandas.DataFrame
- class cognite.client.data_classes.datapoints.SyntheticDatapointsList(
- resources: Sequence[T_CogniteResource],
Bases:
CogniteResourceList[SyntheticDatapoints]
A list of SyntheticDatapoints objects representing multiple expressions.
Each SyntheticDatapoints in the list represents the result of evaluating one expression.
- dump(camel_case: bool = True) NoReturn
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A list of dicts representing the instance.
- Return type:
list[dict[str, Any]]
- get(
- *args: Any,
- **kwargs: Any,
Get an item from this list by id, external_id or instance_id.
- Parameters:
id (int | None) – The id of the item to get.
external_id (str | None) – The external_id of the item to get.
instance_id (InstanceId | tuple[str, str] | None) – The instance_id of the item to get.
- Returns:
The requested item if present, otherwise None.
- Return type:
T_CogniteResource | None
- to_pandas(
- include_errors: bool = True,
Convert the list of synthetic datapoints into a single pandas DataFrame.
Each expression becomes a column in the resulting DataFrame, with timestamps as the index. Error columns are only included for expressions that have at least one error.
- Parameters:
include_errors (bool) – Whether to include error columns. Defaults to True.
- Returns:
A DataFrame with timestamp as index and columns for each expression and optionally errors.
- Return type:
pandas.DataFrame
Datapoint Subscriptions
|
|
|
|
Datapoint Subscription classes
- class cognite.client.data_classes.datapoints_subscriptions.DataDeletion(inclusive_begin: 'int', exclusive_end: 'int | None')
Bases:
object
- class cognite.client.data_classes.datapoints_subscriptions.DataPointSubscriptionUpdate(external_id: str)
Bases:
CogniteUpdate
Changes applied to a datapoint subscription.
- Parameters:
external_id (str) – The external ID provided by the client. Must be unique for the resource type.
- class cognite.client.data_classes.datapoints_subscriptions.DataPointSubscriptionWrite(
- external_id: str,
- partition_count: int,
- time_series_ids: list[str] | None = None,
- instance_ids: list[NodeId] | None = None,
- filter: Filter | None = None,
- name: str | None = None,
- description: str | None = None,
- data_set_id: int | None = None,
Bases:
DatapointSubscriptionCore
- A data point subscription is a way to listen to changes to time series data points, in ingestion order.
This is the write version of a subscription, used to create new subscriptions.
A subscription can either be defined directly by a list of time series ids or indirectly by a filter.
- Parameters:
external_id (str) – Externally provided ID for the subscription. Must be unique.
partition_count (int) – The maximum effective parallelism of this subscription (the number of clients that can read from it concurrently) will be limited to this number, but a higher partition count will cause a higher time overhead. The partition count must be between 1 and 100. CAVEAT: This cannot change after the subscription has been created.
time_series_ids (list[str] | None) – List of (external) ids of time series that this subscription will listen to. Not compatible with filter.
instance_ids (list[NodeId] | None) – List of instance ids of time series that this subscription will listen to. Not compatible with filter.
filter (Filter | None) – A filter DSL (Domain Specific Language) to define advanced filter queries. Not compatible with time_series_ids.
name (str | None) – A human-readable name for the subscription.
description (str | None) – A summary explanation for the subscription.
data_set_id (int | None) – The id of the dataset this subscription belongs to.
- as_write() DataPointSubscriptionWrite
Returns this DataPointSubscriptionWrite instance
- dump(
- camel_case: bool = True,
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscription(
- external_id: str,
- partition_count: int,
- created_time: int,
- last_updated_time: int,
- time_series_count: int | None = None,
- filter: Filter | None = None,
- name: str | None = None,
- description: str | None = None,
- data_set_id: int | None = None,
Bases:
DatapointSubscriptionCore
- A data point subscription is a way to listen to changes to time series data points, in ingestion order.
This is the read version of a subscription, used when reading subscriptions from CDF.
- Parameters:
external_id (str) – Externally provided ID for the subscription. Must be unique.
partition_count (int) – The maximum effective parallelism of this subscription (the number of clients that can read from it concurrently) will be limited to this number, but a higher partition count will cause a higher time overhead.
created_time (int) – Time when the subscription was created in CDF in milliseconds since Jan 1, 1970.
last_updated_time (int) – Time when the subscription was last updated in CDF in milliseconds since Jan 1, 1970.
time_series_count (int | None) – The number of time series in the subscription. None if no timeseries.
filter (Filter | None) – If present, the subscription is defined by this filter.
name (str | None) – A human-readable name for the subscription.
description (str | None) – A summary explanation for the subscription.
data_set_id (int | None) – The id of the dataset this subscription belongs to.
- as_write() DataPointSubscriptionWrite
Returns this DatapointSubscription as a DataPointSubscriptionWrite
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionBatch(
- updates: 'list[DatapointsUpdate]',
- subscription_changes: 'SubscriptionTimeSeriesUpdate',
- has_next: 'bool',
- cursor: 'str',
Bases:
object
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionCore(
- external_id: str,
- partition_count: int,
- filter: Filter | None,
- name: str | None,
- description: str | None,
- data_set_id: int | None,
Bases:
WriteableCogniteResource[DataPointSubscriptionWrite], ABC
- dump(
- camel_case: bool = True,
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionFilterProperties
alias of
DatapointSubscriptionProperty
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionList(
- resources: Sequence[T_CogniteResource],
Bases:
WriteableCogniteResourceList[DataPointSubscriptionWrite, DatapointSubscription], ExternalIDTransformerMixin
- as_write() DatapointSubscriptionWriteList
Returns this DatapointSubscriptionList as a DatapointSubscriptionWriteList
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionPartition(index: 'int', cursor: 'str | None' = None)
Bases:
object
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionProperty(value)
Bases:
EnumProperty
An enumeration.
- class cognite.client.data_classes.datapoints_subscriptions.DatapointSubscriptionWriteList(
- resources: Sequence[T_CogniteResource],
Bases:
CogniteResourceList[DataPointSubscriptionWrite], ExternalIDTransformerMixin
- class cognite.client.data_classes.datapoints_subscriptions.DatapointsUpdate(
- time_series: 'TimeSeriesID',
- upserts: 'SubscriptionDatapoints',
- deletes: 'list[DataDeletion]',
Bases:
object
- class cognite.client.data_classes.datapoints_subscriptions.SubscriptionDatapoints(
- id: int,
- is_string: bool,
- type: Literal['numeric', 'string', 'state'],
- timestamp: list[int],
- value: list[str] | list[float],
- external_id: str | None = None,
- instance_id: NodeId | None = None,
- status_code: list[int] | None = None,
- status_symbol: list[str] | None = None,
Bases:
CogniteResource
Datapoints from a subscription update, flattened from the nested API response.
The API returns time series metadata (id, isString, type, etc.) separately from the datapoints array. This class combines them into a single object for easier consumption.
- dump(camel_case: bool = True) dict[str, Any]
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- to_pandas(include_status: bool = True) pd.DataFrame
Convert the datapoints into a pandas DataFrame.
- Parameters:
include_status (bool) – Include status code and status symbol as separate columns, if available. Also adds the status info as a separate level in the columns (MultiIndex).
- Returns:
The dataframe.
- Return type:
pd.DataFrame
- class cognite.client.data_classes.datapoints_subscriptions.SubscriptionTimeSeriesUpdate(
- added: 'list[TimeSeriesID]',
- removed: 'list[TimeSeriesID]',
Bases:
object
- class cognite.client.data_classes.datapoints_subscriptions.TimeSeriesID(
- id: int | None = None,
- external_id: str | None = None,
- instance_id: NodeId | None = None,
Bases:
CogniteResource
A TimeSeries identifier to uniquely identify a time series.
- Parameters:
id (int | None) – A server-generated ID for the object. May be None if the time series reference is broken (e.g., the time series was deleted or its external_id was changed).
external_id (str | None) – The external ID provided by the client. Must be unique for the resource type.
instance_id (NodeId | None) – The ID of an instance in Cognite Data Models.
- dump(camel_case: bool = True) dict[str, Any]
Dump the instance into a json serializable Python data type.
- Parameters:
camel_case (bool) – Use camelCase for attribute names. Defaults to True.
- Returns:
A dictionary representation of the instance.
- Return type:
dict[str, Any]
- property is_resolved: bool
Returns True if this reference points to an existing time series (i.e., has an id).
- class cognite.client.data_classes.datapoints_subscriptions.TimeSeriesIDList(
- resources: Sequence[T_CogniteResource],
Bases:
CogniteResourceList[TimeSeriesID], IdTransformerMixin