Describe the bug
The telemetryapireceiver stamps all FaaS metric data points (faas.invocations, faas.errors, faas.coldstarts, faas.timeouts, faas.init_duration, faas.invoke_duration, faas.mem_usage) with time.Now() at flush time rather than using the event's time field from the Telemetry API payload.
In Lambda's freeze-thaw execution model, time.Now() at flush time reflects when the runtime was thawed, not when the event occurred. As a result, metric timestamps are shifted away from the actual event time, especially after a period of inactivity during which the environment was frozen.
The log pipeline (createLogs) already correctly parses the event's time field and uses it as the log record timestamp. The metric pipeline does not — flushMetricsLocked creates a single time.Now() timestamp and applies it to all data points, and recordMetrics does not pass event timestamps to the metric builder Add/Record calls.
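For illustration, here is a minimal sketch of the two timestamping approaches described above. This is not the receiver's actual code; the telemetryEvent struct and its Time field are hypothetical stand-ins for a decoded Telemetry API record.

```go
package main

import (
	"fmt"
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
)

// telemetryEvent is a hypothetical stand-in for a decoded Telemetry API
// record; the payload's "time" field arrives as an RFC3339 string.
type telemetryEvent struct {
	Time string // e.g. "2024-01-02T15:04:05.000Z"
}

func main() {
	ev := telemetryEvent{Time: "2024-01-02T15:04:05.000Z"}

	// Flush-time stamping: one time.Now() timestamp applied to every
	// data point, regardless of when the underlying event occurred.
	flushTS := pcommon.NewTimestampFromTime(time.Now())

	// Event-time stamping: derive the timestamp from the event's own
	// time field, as the log pipeline already does for log records.
	t, err := time.Parse(time.RFC3339, ev.Time)
	if err != nil {
		panic(err)
	}
	eventTS := pcommon.NewTimestampFromTime(t)

	fmt.Println("flush-time timestamp:", flushTS.AsTime())
	fmt.Println("event-time timestamp:", eventTS.AsTime())
}
```

After a freeze-thaw cycle these two timestamps can differ by however long the environment sat frozen, which is the skew described above.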
Steps to reproduce
Deploy a Lambda function with the OpenTelemetry collector layer and the telemetryapi receiver enabled in a metrics pipeline (see the example configuration after these steps)
Configure a metrics backend that displays data point timestamps (e.g., Prometheus, Grafana)
Invoke the function, then wait several minutes for the Lambda environment to freeze
Invoke the function again and compare the metric data point timestamps with the log record timestamps for the same events
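A minimal, illustrative collector configuration for the setup above (component names and the exporter choice are assumptions; adjust them to your layer version and backend):

```yaml
# Illustrative only: receiver/exporter names may differ across layer versions.
receivers:
  telemetryapi:

exporters:
  # Replace with the exporter for your metrics backend
  # (e.g. prometheusremotewrite); debug is used here to inspect timestamps.
  debug:
    verbosity: detailed

service:
  pipelines:
    metrics:
      receivers: [telemetryapi]
      exporters: [debug]
    logs:
      receivers: [telemetryapi]
      exporters: [debug]
```

Running both a metrics and a logs pipeline from the same receiver makes it easy to compare the metric data point timestamps against the log record timestamps for the same invocations.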
What did you expect to see?
Metric data point timestamps matching the event occurrence time (the time field from the Telemetry API payload), consistent with how log record timestamps are already set.
What did you see instead?
Metric data point timestamps reflecting the flush/thaw time rather than the event time. After a freeze-thaw cycle, metric timestamps can be significantly later than the actual event time, while log timestamps from the same events are correct.
What version of collector/language SDK version did you use?
Version: 0_21_0 of the Lambda layer
What language layer did you use?
Just the collector layer, configured with the telemetryapi receiver