The RELEX Monitoring API provides a way to access monitoring data for RELEX services. The API allows customers to retrieve information about events that have occurred in their environments, such as file processing events and job execution events.
RELEX customers can use the Monitoring API to:

- retrieve file events, which describe the processing of files uploaded to or exported from RELEX
- retrieve job events, which describe the execution of scheduled jobs in RELEX Plan
See the Get File Events endpoint for an example file event payload.
File events are grouped by the file name originally used when uploading the file to RELEX. The file events endpoint returns a list of file names along with the current status of the data contained in each file. The status of a file can be one of the following:
| Status | Description |
|---|---|
| Received | The file has been received by RELEX, but no further processing has yet happened. |
| Processing | The data contained in the file is being validated and transferred via intermediate storage to RELEX supply chain planning. |
| Processed | The data is fully validated and ready to be used in supply chain planning calculations. |
| Exported | A customer-facing file has been created by RELEX, but it has not yet been delivered to customer-facing storage. |
| Delivering | A customer-facing file is currently being delivered to customer-facing storage. |
| Delivered | A customer-facing file has been delivered to customer-facing storage. |
| Warning | A non-critical issue was encountered during processing of the file. Usually caused by some rows being skipped during file validation. |
| Error | A critical error occurred during processing of the file. |
Along with the status, the file events endpoint returns a list of events that have occurred.
| Event | Description |
|---|---|
| File received | The file has been received in an upload location by RELEX. |
| File preparation started | The file is being read in and validated by the data integration application known as RELEX Connect. |
| File preparation completed | The file has been fully validated by RELEX Connect and the data is now uploaded to intermediate storage for use in supply chain planning. |
| File import started | The data is now being read from intermediate storage to RELEX Plan, the core RELEX supply chain planning application. |
| File import completed | The data has been fully read and is ready for use in supply chain planning calculations. |
| File is ready for processing | The data is ready for pricing and promotion strategy optimization. |
| Export file created | A customer-facing export file has been created by RELEX. |
| Export file ready for preparation | The export file is being prepared for delivery to customer-facing storage. |
| Export file uploaded to customer-facing storage | The export file has been uploaded to customer-facing storage and is available for download. |
The API provides information about the direction of data integration. Two types of integration are supported: inbound and outbound.
The integration direction is indicated in the file event payload via the direction field. It can also be specified as an optional direction query parameter in the file events endpoint.

Possible values for the direction field are:
| Direction | Description | File path field name |
|---|---|---|
| inbound | Data is being uploaded to RELEX. | input_url |
| outbound | Data is being downloaded from RELEX. | output_url |
⚠️ This functionality is available in Plan 10.6 and higher.
Events with the titles 'File preparation started', 'File preparation completed', and 'File import completed' also include information about the total and skipped rows in the file. This information is provided in the 'file_rows' object, which contains the following fields:
| Field | Description |
|---|---|
| total | Total number of data rows present in the uploaded file. |
| skipped | Number of rows skipped while processing the uploaded file. |
For some events, the API provides information on the reasons for skipping rows, in which case the event will also include a 'warnings' object.
The overall status of the file will be 'Warning' if any rows were skipped during processing, unless there were also critical errors.
Example:
{
"file": "transactions_20251204_110001.csv",
"input_url": "https://example-storage.blob.core.windows.net/input",
"status": "Warning",
"direction": "inbound",
"file_size": 694713,
"timestamp": "2025-12-04T14:12:10Z",
"events": [
{
"title": "File received",
"timestamp": "2025-12-04T11:36:01Z"
},
{
"title": "File preparation started",
"timestamp": "2025-12-04T14:11:48Z",
"file_rows": {
"total": 5973,
"skipped": 1339
},
"warnings": [
{
"details": "Null value in non-null column relex_transaction_type",
"count": 1326,
"example_rows": [
"2025-12-03,LOC01,LOC02,,-6.0,CS,-6.0,CS,REF001,Invoice,TYPE1,CAT1,ORD001,ID001,SO",
"2025-12-03,LOC01,LOC02,,-1.0,CS,-1.0,CS,REF002,Invoice,TYPE1,CAT1,ORD002,ID002,SO",
"2025-12-03,LOC01,LOC02,,-35.0,CS,-35.0,CS,REF003,Invoice,TYPE1,CAT1,ORD003,ID003,SO",
"2025-12-03,LOC01,LOC02,,-8.0,CS,-8.0,CS,REF004,Invoice,TYPE1,CAT1,ORD004,ID004,SO",
"2025-12-03,LOC01,LOC02,,-36.0,CS,-36.0,CS,REF005,Invoice,TYPE1,CAT1,ORD005,ID005,SO"
]
},
{
"details": "Null value in non-null column primary_qty",
"count": 13,
"example_rows": [
"2025-12-03,LOC03,SKU01,DELIVERY,,LB,40.0,CS,REF006,IR Description,TYPE2,CAT2,ORD006,ID006,OT",
"2025-12-03,LOC03,SKU02,DELIVERY,,LB,60.0,CS,REF007,IR Description,TYPE2,CAT2,ORD007,ID007,OT",
"2025-12-03,LOC03,SKU03,DELIVERY,,LB,60.0,CS,REF008,IR Description,TYPE2,CAT2,ORD008,ID008,OT",
"2025-12-03,LOC03,SKU04,DELIVERY,,LB,55.0,CS,REF009,IR Description,TYPE2,CAT2,ORD009,ID009,OT",
"2025-12-03,LOC01,SKU05,DELIVERY,,LB,192.0,CS,REF010,IR Description,TYPE2,CAT2,ORD010,ID010,OP"
]
}
]
},
{
"title": "File preparation completed",
"timestamp": "2025-12-04T14:12:10Z",
"file_rows": {
"total": 4634,
"skipped": 0
}
}
]
}
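A payload like the one above can be summarized programmatically. The following is a minimal Python sketch (the helper name `summarize_file_event` is ours, not part of the API) that tallies row counts and warning reasons from a single file event entry:

```python
def summarize_file_event(entry: dict) -> dict:
    """Tally row counts and warning details from one file event entry."""
    summary = {
        "file": entry.get("file"),
        "status": entry.get("status"),
        "total": 0,
        "skipped": 0,
        "warnings": [],  # list of (details, count) pairs
    }
    for event in entry.get("events", []):
        rows = event.get("file_rows") or {}
        # Keep the largest counts seen: "File preparation started" reports
        # pre-validation totals, later events report post-validation totals.
        summary["total"] = max(summary["total"], rows.get("total", 0))
        summary["skipped"] = max(summary["skipped"], rows.get("skipped", 0))
        for warning in event.get("warnings", []):
            summary["warnings"].append(
                (warning.get("details"), warning.get("count", 0)))
    return summary
```

For the example payload above, this reports 5,973 total rows, 1,339 of them skipped, with two distinct warning reasons.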
The following table lists the most common warning messages:
| Warning Details | Reason |
|---|---|
| Missing required non-null fields, fields="comma-separated list of fields" | The input file specification contains fields marked as 'Not null', but the provided file is missing these fields. |
| Could not parse value of type "type-name" | The input file specification defines a field of a specific type, but the provided file contains a value that does not match the specified type. |
| Number of columns on row does not match the header or linebreaks are inconsistent | The number of fields in the provided file does not match the input file specification, or inconsistent linebreaking characters are used in the data. |
⚠️ Functionality is available for SFTP file uploads.

The size of the file in bytes is included in the file_size field of the file event object.
If a critical error occurs during file processing, the endpoint will return a list of errors and the overall status of the file is "Error". Each error consists of a title that describes at what stage the error occurred and a details field that provides more information about the error.
Example:
{
"file": "Locations_2025-01-23.csv",
"status": "Error",
"timestamp": "2025-01-23T07:38:22Z",
"file_size": 15,
"events": [
{
"title": "File preparation started",
"timestamp": "2025-01-23T07:38:16Z"
}
],
"errors": [
{
"title": "File preparation started",
"details": "The file you are looking for may have been moved or deleted"
}
]
}
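Failed files can be picked out of a file events response by checking the overall status. A sketch in Python (the function name is ours):

```python
def collect_file_errors(file_events: list) -> list:
    """Return (file, error title, error details) tuples for files in Error state."""
    failed = []
    for entry in file_events:
        if entry.get("status") != "Error":
            continue
        for error in entry.get("errors", []):
            failed.append((entry.get("file"),
                           error.get("title"),
                           error.get("details")))
    return failed
```
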
See the Get Job Events endpoint for an example job event payload.
The RELEX Monitoring API provides access to events linked to RELEX Plan job executions. By querying the job events endpoint, RELEX users can find out about the current state of scheduled jobs in RELEX Plan and whether or not they have completed successfully.
Jobs can have the following statuses:
| Status | Description |
|---|---|
| Running | The job is in progress. |
| Canceled | The job was canceled when the Cancel button was selected. The job is also marked as Canceled if the continue criteria set in WaitFor are not met and WaitFor is set to not continue the execution of the job. |
| Completed | The job has completed successfully. |
| Error | The job has failed or has been aborted by the system (for example, due to a system restart). |
| Queued | The job is lined up and waiting to be processed because more jobs have been started than can be executed simultaneously. The number of concurrent jobs is configurable; the default is 5. The job remains in the Queued status until earlier jobs have completed. |
| Warning | The job has completed, but some of the child runs executed within the job have failed with an error. |
| Pending | The job is awaiting initiation. |
| Delayed | The job start has been postponed. |
Jobs can be filtered by a number of fields, including:
- Job name (job_name) - returns jobs whose names contain the specified string
- Parent job name (job_parent_name) - returns jobs whose parent job names contain the specified string

To start using the Monitoring API, ensure you have received your Client ID and Client Secret.
A simple client code snippet in Python for reference is maintained in this repository:
https://github.com/relex/monitoring-api-demo
This section covers the data resources and technical details of the RELEX Monitoring API.
⚠️ All API requests must be made over HTTPS.
Calls made over plain HTTP will fail and return an HTTP Status Code 400 Bad Request.
The RELEX Monitoring API uses JSON as the format for data serialization. The JSON payload of the API has a root-level object that contains a meta object and a data array.
All resources have a defined schema, and the details of each resource can be examined in the resource specifications in this document.
Date and time values are represented with a single field type that follows the ISO 8601 standard, for example 2023-12-05T13:45:10Z.

The base URL for API requests depends on the region where the client is located. The following URLs should be used depending on the region:
| Region | Base URL |
|---|---|
| EU | https://eu.monitor.relexsolutions.com |
| US | https://us.monitor.relexsolutions.com |
Ensure that all API requests use the appropriate base URL for your region. For example, an API request for a customer in the EU would be:
GET https://eu.monitor.relexsolutions.com/api/v1/{customer_id}/events/file?env=test
For more examples, see the operations dropdown menu on the right side of the page.
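Such a request can be sketched in Python using only the standard library (the helper names below are ours; bearer token handling is covered in the Authentication section of this document):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Regional base URLs from the table above.
BASE_URLS = {
    "eu": "https://eu.monitor.relexsolutions.com",
    "us": "https://us.monitor.relexsolutions.com",
}

def file_events_url(region: str, customer_id: str, **query) -> str:
    """Build a file events URL for the given region, e.g. env="test"."""
    url = f"{BASE_URLS[region]}/api/v1/{customer_id}/events/file"
    return f"{url}?{urlencode(query)}" if query else url

def get_file_events(region: str, customer_id: str, token: str, **query) -> dict:
    """Fetch one page of file events using a JWT bearer token."""
    request = Request(file_events_url(region, customer_id, **query),
                      headers={"Authorization": f"Bearer {token}"})
    with urlopen(request, timeout=30) as response:
        return json.load(response)
```
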
Monitoring API resources respond to requests synchronously.
The RELEX Monitoring API uses standard HTTP response codes to indicate the various failures an API request can return. Codes in the 4xx range indicate an error on the client side, for example, the resource name was incorrect. Codes in the 5xx range indicate an error within RELEX services.
To convey details about errors to clients in JSON responses, the RELEX Monitoring API uses the RFC 7807 standard.
| Error code | Description |
|---|---|
| 400 Bad Request | The request was unacceptable, for example, due to a missing parameter |
| 401 Unauthorized | The client must log in. This often means "Unauthenticated" |
| 403 Forbidden | Authentication/authorization failed. The bearer token provided might be invalid, expired, or revoked, or the customer_id path parameter is incorrect |
| 404 Not Found | Resource was not found |
| 405 Method not allowed | The request method is not supported by the target resource |
| 429 Too Many Requests | Request limit has been reached |
| 500 Internal Server Error | Generic server error indicating an unexpected problem. A client may retry |
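Because error bodies follow RFC 7807, a client can surface the title and detail fields when a request fails. A minimal sketch (the function name is ours, not part of the API):

```python
import json

def describe_problem(status_code: int, body: bytes) -> str:
    """Render an RFC 7807 problem detail body as a one-line message."""
    try:
        problem = json.loads(body)
    except ValueError:
        return f"HTTP {status_code}"  # body was not valid JSON
    message = f"HTTP {status_code}: {problem.get('title', 'Unknown error')}"
    if problem.get("detail"):
        message += f" ({problem['detail']})"
    return message
```
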
The RELEX Monitoring API imposes limits on incoming web requests to ensure performance, reliability, and efficiency. The API sets the following limits:
- The overall limit is 100,000 requests per 5 minutes from a single customer IP address. This applies to all endpoints and requests to the API.
- The limit for the /api/v{N}/{customer_id}/events/job endpoint is 100 requests per minute from a single customer IP address.
- The limit for the /api/v{N}/{customer_id}/events/file endpoint is 100 requests per minute from a single customer IP address.
- The limit for the /api/v{N}/health endpoint is 12 requests per minute from a single customer IP address.

If the limits are exceeded, an HTTP status code 429 is returned.
ℹ️ API limits are subject to change and appropriate values are determined during the implementation project.
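A client that may hit these limits can back off and retry on 429. A minimal sketch (the exception class and helper function are ours, not part of any RELEX SDK):

```python
import time

class RateLimited(Exception):
    """Raised by the caller when the API responds with HTTP 429."""

def call_with_backoff(call, max_attempts=5, base_delay=1.0, sleep=time.sleep):
    """Invoke `call`, retrying with exponential backoff on RateLimited."""
    for attempt in range(max_attempts):
        try:
            return call()
        except RateLimited:
            if attempt == max_attempts - 1:
                raise
            sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

The `sleep` parameter is injected so the delay policy can be replaced or tested without real waiting.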
The RELEX Monitoring API imposes no limits on return payloads. The payload size can be arbitrarily large.
This section describes the authentication methods and security-related requirements for the Monitoring API.
The Monitoring API utilizes OAuth 2.0 and OpenID Connect (OIDC) for authentication and authorization. OIDC is an authentication layer on top of OAuth 2.0 and JSON Web Tokens (JWT).
The API is secured via a JWT access token, which must be present in each request. The JWT is fetched from a token endpoint of RELEX Identity (an Identity Provider) using a Client ID and a Client Secret.
The token is obtained using the OAuth 2.0 client credentials grant. The token endpoint URL depends on the region:

- EU: https://identity.prod-eu.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token
- US: https://identity.prod-us.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token

The Monitoring API implements authorization using the Client Credentials Grant. It is an authorization flow defined in the OAuth 2.0 specification, typically used for allowing access to resources in an automated context; that is, in a context with no human input.
The general steps involved are as follows:

1. The client requests an access token from the RELEX Identity token endpoint, authenticating with its Client ID and Client Secret.
2. RELEX Identity validates the credentials and returns a JWT access token.
3. The client includes the access token as a Bearer token in each Monitoring API request.
To authenticate with RELEX Identity, the client needs:

- a Client ID
- a Client Secret

These are provided by RELEX during the implementation process. After this information has been provided, it needs to be configured into the client application according to the application's specific needs.
The token endpoint URL for obtaining the JWT access token depends on the region. The following URLs should be used depending on the region where the client operates:
| Region | Token endpoint URL |
|---|---|
| EU | https://identity.prod-eu.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token |
| US | https://identity.prod-us.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token |
The token request is an HTTP POST with the content type application/x-www-form-urlencoded. The Client ID, Client Secret, and the client_credentials grant type must be included in the POST body as URL-encoded key-value pairs. An example body:
client_id=my_client&client_secret=ZWFzdGVyZWdnCg&grant_type=client_credentials
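The token request above can be issued from Python with the standard library alone. A sketch (helper names are ours; the credentials shown are placeholders from the example body):

```python
import json
from urllib.parse import urlencode
from urllib.request import Request, urlopen

# Regional token endpoints from the table above.
TOKEN_ENDPOINTS = {
    "eu": "https://identity.prod-eu.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token",
    "us": "https://identity.prod-us.prod.cc.relexsolutions.com/monitoring_api_prod/connect/token",
}

def token_request_body(client_id: str, client_secret: str) -> bytes:
    """URL-encode the client credentials grant parameters for the POST body."""
    return urlencode({
        "client_id": client_id,
        "client_secret": client_secret,
        "grant_type": "client_credentials",
    }).encode("ascii")

def fetch_token(region: str, client_id: str, client_secret: str) -> dict:
    """POST the credentials to the regional token endpoint and parse the response."""
    request = Request(TOKEN_ENDPOINTS[region],
                      data=token_request_body(client_id, client_secret),
                      headers={"Content-Type": "application/x-www-form-urlencoded"})
    with urlopen(request, timeout=30) as response:
        return json.load(response)  # contains access_token, expires_in, ...
```
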
A successful token response contains the following fields:

- "access_token": the JWT token that needs to be used for accessing the Monitoring API
- "expires_in": the access token's time to live in seconds
- "token_type": information on how the token needs to be used; in this case, always set to "Bearer"
- "scope": the scope that the token grants access to

Whenever the client accesses any Monitoring API endpoint, the client must provide the following:
- the access token in the Authorization header, preceded by Bearer. An example header is Authorization: Bearer eyJ[...]jw.

When an access token expires, Monitoring API calls respond with 401 Unauthorized.
The client application has two options to keep itself authenticated:

- proactively request a new access token shortly before the previous one expires, using the "expires_in" value from the token response
- reactively request a new access token whenever an API call fails with 401 Unauthorized
Note: the Client Credentials grant type does not support refresh tokens, as noted in the protocol specification.
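Proactive renewal based on "expires_in" can be sketched as a small cache (the class name is ours; the safety margin guards against clock skew and in-flight requests):

```python
import time

class TokenCache:
    """Caches an access token and re-fetches it shortly before expiry."""

    def __init__(self, fetch_token, margin_seconds=60, clock=time.monotonic):
        self._fetch_token = fetch_token  # callable returning the token response dict
        self._margin = margin_seconds
        self._clock = clock
        self._token = None
        self._expires_at = float("-inf")

    def get(self) -> str:
        """Return a valid access token, fetching a fresh one when needed."""
        now = self._clock()
        if self._token is None or now >= self._expires_at:
            response = self._fetch_token()
            self._token = response["access_token"]
            self._expires_at = now + response["expires_in"] - self._margin
        return self._token
```
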
As an additional layer of security, the Monitoring API can be configured to allow requests from only certain IP addresses or ranges. The recommendation is to always have an IP allowlist in place. The API does not support an explicit blocklist.
Returns a list of environments available to the customer.
| Parameter | Type | Description |
|---|---|---|
| N (required) | integer, value: 1 | API version, MAJOR format. Example: 1 |
| customer_id (required) | string | ID of the customer who the environments belong to. Example: example-customer |
Example response:

{
  "environments": [
    "dev"
  ]
}

Returns file events owned by the requesting customer. Supports filtering via query parameters. The response includes one entry per file where at least one event occurred within the requested timeframe.
| Parameter | Type | Description |
|---|---|---|
| N (required) | integer, value: 1 | API version, MAJOR format. Example: 1 |
| customer_id (required) | string | ID of the customer who the events belong to. Example: example-customer |
| env (required) | string, 1..20 characters | Customer environment to fetch the events from. Example: env=dev |
| file_name | string, 1..512 characters | File name filter; returns files whose names contain this string. Example: file_name=somefile.json |
| start_timestamp | string (date-time) | Start timestamp (inclusive). Must not be later than end_timestamp. Example: start_timestamp=2026-01-01T05:10:06+00:00 |
| end_timestamp | string (date-time) | End timestamp (exclusive). Must not be earlier than start_timestamp. Example: end_timestamp=2026-03-01T20:10:06+00:00 |
| direction | string, enum: inbound, outbound | Integration direction (case-insensitive). Example: direction=inbound |
| job_id | string, pattern ^[0-9]+$ | The ID of the RELEX Plan job to filter file events by. Example: job_id=12345 |
| status | array of strings, enum: Error, Warning, Exported, Delivering, Delivered, Received, Processing, Processed | Current event flow status (case-insensitive); supports exploded array syntax, i.e. repeating the parameter. Example: status=warning |
| page | integer, 1..9223372036854776000, default: 1 | Page number, 1-based. Example: page=1 |
| per_page | integer, 1..1000, default: 50 | Number of file event flows per page. Example: per_page=50 |
Example response:

{
  "meta": {
    "customer": "example-customer",
    "version": "v1"
  },
  "data": [
    {
      "file": "E-10003.BSAPPRODUCTLDSRELEX_6700_20231130224506.csv",
      "input_url": "sftp://sftpexample-customerfim@sftp-example.relexsolutions.com/data/input/E-10003.BSAPPRODUCTLDSRELEX_6700_20231130224506.csv",
      "direction": "inbound",
      "status": "Received",
      "timestamp": "2023-12-01T02:55:39Z",
      "file_size": 359,
      "events": [
        {
          "title": "File received",
          "timestamp": "2023-12-01T02:55:39Z"
        }
      ]
    },
    {
      "file": "E-10003.BSAPCAMPAIGNLRELEX20231130224800.csv",
      "input_url": "sftp://sftpexample-customerfim@sftp-example.relexsolutions.com/data/input/E-10003.BSAPCAMPAIGNLRELEX20231130224800.csv",
      "direction": "inbound",
      "status": "Processing",
      "timestamp": "2023-12-01T04:00:10Z",
      "file_size": 359,
      "events": [
        {
          "title": "File received",
          "timestamp": "2023-12-01T02:55:39Z"
        },
        {
          "title": "File preparation started",
          "timestamp": "2023-12-01T04:00:10Z"
        }
      ]
    },
    {
      "file": "E-10003.BSAPPRODUCTLDSRELEX_6400_20231130224503.csv",
      "input_url": "sftp://sftpexample-customerfim@sftp-example.relexsolutions.com/data/input/E-10003.BSAPPRODUCTLDSRELEX_6400_20231130224503.csv",
      "application": "Plan",
      "job_id": "12345",
      "direction": "inbound",
      "status": "Processed",
      "timestamp": "2023-12-04T05:07:33Z",
      "file_size": 359,
      "events": [
        {
          "title": "File received",
          "timestamp": "2023-12-01T02:55:39Z"
        },
        {
          "title": "File preparation started",
          "timestamp": "2023-12-01T04:00:10Z"
        },
        {
          "title": "File preparation completed",
          "timestamp": "2023-12-01T04:00:11Z"
        },
        {
          "title": "File import started",
          "timestamp": "2023-12-01T04:01:46Z",
          "file_rows": {
            "total": 100,
            "skipped": 0
          }
        },
        {
          "title": "File import completed",
          "timestamp": "2023-12-04T05:07:33Z"
        }
      ]
    },
    {
      "file": "transactions_20251204_110001.csv",
      "status": "Warning",
      "direction": "inbound",
      "file_size": 694713,
      "timestamp": "2025-12-04T14:12:10Z",
      "events": [
        {
          "title": "File received",
          "timestamp": "2025-12-04T11:36:01Z"
        },
        {
          "title": "File preparation started",
          "timestamp": "2025-12-04T14:11:48Z",
          "file_rows": {
            "total": 5973,
            "skipped": 1339
          },
          "warnings": [
            {
              "details": "Null value in non-null column relex_transaction_type",
              "count": 1326,
              "example_rows": [
                "2025-12-03,LOC01,LOC02,,-6.0,CS,-6.0,CS,REF001,Invoice,TYPE1,CAT1,ORD001,ID001,SO",
                "2025-12-03,LOC01,LOC02,,-1.0,CS,-1.0,CS,REF002,Invoice,TYPE1,CAT1,ORD002,ID002,SO",
                "2025-12-03,LOC01,LOC02,,-35.0,CS,-35.0,CS,REF003,Invoice,TYPE1,CAT1,ORD003,ID003,SO",
                "2025-12-03,LOC01,LOC02,,-8.0,CS,-8.0,CS,REF004,Invoice,TYPE1,CAT1,ORD004,ID004,SO",
                "2025-12-03,LOC01,LOC02,,-36.0,CS,-36.0,CS,REF005,Invoice,TYPE1,CAT1,ORD005,ID005,SO"
              ]
            },
            {
              "details": "Null value in non-null column primary_qty",
              "count": 13,
              "example_rows": [
                "2025-12-03,LOC03,SKU01,DELIVERY,,LB,40.0,CS,REF006,IR Description,TYPE2,CAT2,ORD006,ID006,OT",
                "2025-12-03,LOC03,SKU02,DELIVERY,,LB,60.0,CS,REF007,IR Description,TYPE2,CAT2,ORD007,ID007,OT",
                "2025-12-03,LOC03,SKU03,DELIVERY,,LB,60.0,CS,REF008,IR Description,TYPE2,CAT2,ORD008,ID008,OT",
                "2025-12-03,LOC03,SKU04,DELIVERY,,LB,55.0,CS,REF009,IR Description,TYPE2,CAT2,ORD009,ID009,OT",
                "2025-12-03,LOC01,SKU05,DELIVERY,,LB,192.0,CS,REF010,IR Description,TYPE2,CAT2,ORD010,ID010,OP"
              ]
            }
          ]
        },
        {
          "title": "File preparation completed",
          "timestamp": "2025-12-04T14:12:10Z",
          "file_rows": {
            "total": 4634,
            "skipped": 0
          }
        }
      ]
    },
    {
      "file": "E-10003.BSAPINVTRANSRELEX_6500_20231201023001.csv",
      "input_url": "sftp://sftpexample-customerfim@sftp-example.relexsolutions.com/data/input/E-10003.BSAPINVTRANSRELEX_6500_20231201023001.csv",
      "direction": "inbound",
      "status": "Error",
      "timestamp": "2023-12-01T04:00:10Z",
      "file_size": 359,
      "events": [
        {
          "title": "File received",
          "timestamp": "2023-12-01T02:55:39Z"
        },
        {
          "title": "File preparation started",
          "timestamp": "2023-12-01T04:00:10Z"
        }
      ],
      "errors": [
        {
          "title": "File validation failed",
          "details": "Validation failed: invalid value \"QTY\" for column \"batch_size\""
        }
      ]
    },
    {
      "file": "Transaction_aaTYk_2025-06-06_05:05:50.csv",
      "application": "Promo",
      "status": "Delivered",
      "direction": "outbound",
      "timestamp": "2025-06-06T05:06:50Z",
      "events": [
        {
          "title": "Export file created",
          "timestamp": "2025-06-06T05:05:50Z"
        },
        {
          "title": "Export file uploaded to customer-facing storage",
          "timestamp": "2025-06-06T05:06:50Z"
        }
      ]
    }
  ],
  "_links": {
    "next": {
      "href": "/api/v1/example-customer/events/file?page=2"
    },
    "self": {
      "href": "/api/v1/example-customer/events/file"
    }
  }
}

Returns job events executed on a customer environment. Supports filtering via query parameters.
| Parameter | Type | Description |
|---|---|---|
| N (required) | integer, value: 1 | API version, MAJOR format. Example: 1 |
| customer_id (required) | string | ID of the customer who the events belong to. Example: example-customer |
| env (required) | string, 1..20 characters | Customer environment to fetch the events from. Example: env=dev |
| job_id | integer, 1..9223372036854776000 | Uniquely identifies a specific execution of the job. Example: job_id=1234 |
| job_parent_id | integer, 1..9223372036854776000 | The job_id of the parent job. Example: job_parent_id=1234 |
| run_id | string (uuid) | The ID of the process which created the scheduled job, which doesn't change between job executions. Example: run_id=00000000-0000-0000-0000-000000000000 |
| run_parent_id | string (uuid) | The run_id of the parent job. Example: run_parent_id=00000000-0000-0000-0000-000000000000 |
| job_name | string, 1..512 characters | Name of the scheduled job. The API returns all jobs whose names include the specified string. Example: job_name=Scheduled job |
| job_parent_name | string, 1..512 characters | Name of the parent job. The API returns all jobs whose parent names include the specified string. Example: job_parent_name=Cleanup run |
| status | array of strings, enum: Pending, Delayed, Queued, Running, Canceled, Completed, Warning, Error | Current job status (case-insensitive); supports exploded array syntax, i.e. repeating the parameter. Example: status=warning |
| start_timestamp | string (date-time) | Start timestamp (inclusive). Must not be later than end_timestamp. Example: start_timestamp=2026-01-01T05:10:06+00:00 |
| end_timestamp | string (date-time) | End timestamp (exclusive). Must not be earlier than start_timestamp. Example: end_timestamp=2026-03-01T20:10:06+00:00 |
| page | integer, 1..9223372036854776000, default: 1 | Page number, 1-based. Example: page=1 |
| per_page | integer, 1..1000, default: 50 | Number of job execution results per page. Example: per_page=50 |
Example response:

{
  "meta": {
    "customer": "example-customer",
    "version": "v1"
  },
  "data": [
    {
      "name": "Scheduled job - Example Workflow",
      "run_id": "123e4567-e89b-12d3-a456-426614174000",
      "job_id": 1123456789900,
      "application": "plan-example",
      "status": "Completed",
      "status_message": "Externally triggered workflow 01234567890",
      "progress": {
        "rows_total": 30,
        "rows_ignored": 0,
        "rows_completed": 30,
        "percentage_completed": 100
      },
      "last_update": "2025-03-28T04:45:28Z",
      "start_time": "2025-03-28T04:45:25Z",
      "end_time": "2025-03-28T04:45:28Z",
      "events": [
        {
          "status": "Pending",
          "timestamp": "2025-03-28T04:45:25Z"
        },
        {
          "status": "Running",
          "timestamp": "2025-03-28T04:45:25Z"
        },
        {
          "status": "Completed",
          "timestamp": "2025-03-28T04:45:28Z"
        }
      ]
    },
    {
      "name": "Scheduled job - Example Workflow",
      "run_id": "a1b2c3d4-e5f6-4789-a012-3456789abcde",
      "job_id": 1123456789900,
      "job_parent_id": 1123456789899,
      "job_parent_name": "Scheduled job - Parent Workflow",
      "application": "plan-example",
      "status": "Error",
      "status_message": "Externally triggered workflow 01234567890 - Error running scheduled job",
      "progress": {
        "rows_total": 4,
        "rows_ignored": 0,
        "rows_completed": 0,
        "percentage_completed": 100
      },
      "last_update": "2025-03-28T04:45:28Z",
      "start_time": "2025-03-28T04:45:25Z",
      "end_time": "2025-03-28T04:45:28Z",
      "events": [
        {
          "status": "Pending",
          "timestamp": "2025-03-28T04:45:25Z"
        },
        {
          "status": "Running",
          "timestamp": "2025-03-28T04:45:25Z"
        },
        {
          "status": "Error",
          "timestamp": "2025-03-28T04:45:28Z"
        }
      ]
    }
  ],
  "_links": {
    "next": {
      "href": "/api/v1/example-customer/events/job?page=2"
    },
    "self": {
      "href": "/api/v1/example-customer/events/job"
    }
  }
}

⚠️ This endpoint is not yet available. The section below describes the planned metrics endpoint.
The Monitoring API exposes a Prometheus-compatible metrics endpoint at /api/v1/{customer_id}/metrics.
The metrics available over this endpoint give quantitative insight into the throughput,
latency, and reliability of data flowing into RELEX services.
When applicable, metrics are labelled with a type dimension that identifies the data type
(e.g. sales, product_locations). The type values are normalised to use underscores.
These metrics describe the end-to-end data ingestion pipeline from the point data is received by RELEX until it is consumed by RELEX Plan, the core supply chain planning application. The metrics are technology-agnostic and apply regardless of whether data is ingested via batch file uploads or via the Data API.
| Metric | Type | Unit | Description |
|---|---|---|---|
| plan_data_ingestion_delay_seconds | Gauge | seconds | End-to-end latency of RELEX Plan data ingestion, measured as the delay between data being sent to RELEX and data being consumed by RELEX Plan. |
| plan_data_ingestion_messages_total | Counter | messages | Total number of messages (files or records) ingested by RELEX Plan. Tracks overall ingestion throughput. |
| plan_data_ingestion_rows_total | Counter | rows | Total number of data rows processed by RELEX Plan during ingestion. Provides a fine-grained view of data volume beyond message counts. |
Labels used with plan_data_ingestion_* metrics:

| Label | Meaning |
|---|---|
| app | Which customer application the metric belongs to. |
| env | Which customer environment the application belongs to. |
| type | Which kind of integration data stream the metric describes. |
These metrics follow OpenTelemetry semantic conventions and describe the HTTP layer of the RELEX Data API, for customers using that integration method.
| Metric | Type | Unit | Description |
|---|---|---|---|
| http_server_request_duration_ms | Histogram | milliseconds | Duration of HTTP requests received by the Data API. |
| http_server_request_body_size_bytes | Counter | bytes | Cumulative size of HTTP request bodies received by the Data API. Indicates the volume of data being sent to RELEX. |
| http_server_response_body_size_bytes | Counter | bytes | Cumulative size of HTTP response bodies returned by the Data API. |
| http_server_request_count_by_status | Counter | requests | Total number of HTTP requests received by the Data API, broken down by HTTP status code (http_status label). |
Labels used with http_server_request_duration_ms* metrics:

| Label | Meaning |
|---|---|
| type | Which kind of integration data stream the metric describes. |
| http_path | Which API endpoint received the data in RELEX. |
| le | For bucket metrics, the upper time limit of the bucket in milliseconds. This label is not present on the related _sum and _count metrics. |
Labels used with http_server_request_count_by_status:

| Label | Meaning |
|---|---|
| type | Which kind of integration data stream the metric describes. |
| http_path | Which API endpoint received the data in RELEX. |
| http_status | Which HTTP response status RELEX returned, for example 200 or 400. |
Labels used with http_server_request_body_size_bytes* and http_server_response_body_size_bytes* metrics:

| Label | Meaning |
|---|---|
| type | Which kind of integration data stream the metric describes. |
| http_path | Which API endpoint received the data in RELEX for request size metrics, or which API endpoint RELEX sent the data from for response size metrics. |
Checks whether the service is alive and can be queried. The endpoint signals availability via the response body: if the status field differs from "ok", a service component is unavailable.
| Parameter | Type | Description |
|---|---|---|
| N (required) | integer, value: 1 | API version, MAJOR format. Example: 1 |
Example response:

{
  "message": "Service unhealthy",
  "status": "fail"
}
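A caller can gate on the status field of the health response. A minimal Python sketch (the function name is ours):

```python
import json

def is_healthy(body: bytes) -> bool:
    """True only when the health endpoint body reports status "ok"."""
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return payload.get("status") == "ok"
```
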