From 341f309866f2ca90674ddd847294e9e4047954e7 Mon Sep 17 00:00:00 2001 From: Scott Parillo Date: Fri, 5 Dec 2025 18:11:58 -0800 Subject: [PATCH 01/18] REL-1207540: The environment size was updated and now defines X-Large. Collapsed multiple tables into a single Elastic Stack infrastructure table to describe the infrastructure recommendations using the Server 2025 GA EW production certification results. --- ...elasticsearch_pre_installation_overview.md | 164 +++--------------- 1 file changed, 21 insertions(+), 143 deletions(-) diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md index 8d5ca380..c8973e2b 100644 --- a/docs/elasticsearch_pre_installation_overview.md +++ b/docs/elasticsearch_pre_installation_overview.md @@ -118,149 +118,27 @@ The number of servers and hardware specifications needed to host the Elastic com **Environment Size** -The environment size is defined by the number of Web, Agent, and Worker servers within the instance. - -| Environment Size| Web Servers | Agent Servers | Workers | -| --------------- | ------------- | ------------- | ------- | -| Development | 1 | 1 | 1 | -| Small | 1 | 4 | 1 | -| Medium | 2-4 | 5-9 | 2-9 | -| Large | 5+ | 10+ | 10+ | - -> Each Elasticsearch server should have at least 4 vCPU and 32 GB RAM. - -#### Environment Size – Development - -> [!NOTE] -> For a development environment, all Elasticsearch components are installed within a single server to minimize complexity and get up and running as quickly as possible. There are no data upgrades performed for this environment. -
- -| Elastic Stack Component | Server Count | -| ----------------------------------------- | ------------ | -| **Environment Watch Only** | | -| Elasticsearch/Kibana/APM Server | 1 | -| **Data Grid Audit Only** | | -| Elasticsearch/Kibana (optional) | 1 | -| **Environment Watch and Data Grid Audit** | | -| Elasticsearch/Kibana/APM Server | 1 | - -| Elastic Stack Component | Server Count | Disk (TB) | -| ----------------------------------------- | ------------ | ---------- | -| **Environment Watch Only** | | | -| Elasticsearch/Kibana/APM Server | 1 | 1 | -| **Data Grid Audit Only** | | | -| Elasticsearch/Kibana (optional) | 1 | 1 | -| **Environment Watch and Data Grid Audit** | | | -| Elasticsearch/Kibana/APM Server | 1 | 1 | - -#### Environment Size – Small - -> [!NOTE] -> For a small environment, we recommend dedicated Kibana and APM Server server, but can consider installing Kibana and/or APM Server on a single server or even on the same server being used as an Elasticsearch node. -
- -| Elastic Stack Component | Server Count | -| ----------------------------------------- | ------------ | -| **Environment Watch Only** | | -| Elasticsearch nodes | 2 | -| Kibana | 1 | -| APM Server | 1 | -| **Data Grid Audit Only** | | -| Elasticsearch nodes | 2 | -| Kibana (optional) | 1 | -| APM Server | N/A | -| **Environment Watch and Data Grid Audit** | | -| Elasticsearch nodes | 3 | -| Kibana | 1 | -| APM Server | 1 | - -| Elastic Stack Component | Server Count | Disk (TB) | -| ----------------------------------------- | ------------ | ---------- | -| **Environment Watch Only** | | | -| Elasticsearch nodes | 2 | 1 | -| Kibana | 1 | 1 | -| APM Server | 1 | 1 | -| **Data Grid Audit Only** | | | -| Elasticsearch nodes | 2 | 1 | -| Kibana (optional) | 1 | 1 | -| APM Server | N/A | - | -| **Environment Watch and Data Grid Audit** | | | -| Elasticsearch nodes | 3 | 1 | -| Kibana | 1 | 1 | -| APM Server | 1 | 1 | - -#### Environment Size – Medium - -> [!NOTE] -> For a medium environment, a few additional nodes are added to the Elasticsearch cluster(s). -
- -| Elastic Stack Component | Server Count | -| ----------------------------------------- | ------------ | -| **Environment Watch Only** | | -| Elasticsearch nodes | 3 | -| Kibana | 1 | -| APM Server | 1 | -| **Data Grid Audit Only** | | -| Elasticsearch nodes | 3 | -| Kibana (optional) | 1 | -| APM Server | N/A | -| **Environment Watch and Data Grid Audit** | | -| Elasticsearch nodes | 6 | -| Kibana | 1 | -| APM Server | 1 | - -| Elastic Stack Component | Server Count | Disk (TB) | -| ----------------------------------------- | ------------ | ---------- | -| **Environment Watch Only** | | | -| Elasticsearch nodes | 3 | 2 | -| Kibana | 1 | 2 | -| APM Server | 1 | 2 | -| **Data Grid Audit Only** | | | -| Elasticsearch nodes | 3 | 2 | -| Kibana (optional) | 1 | 2 | -| APM Server | N/A | - | -| **Environment Watch and Data Grid Audit** | | | -| Elasticsearch nodes | 6 | 2 | -| Kibana | 1 | 2 | -| APM Server | 1 | 2 | - - -#### Environment Size – Large - -> [!NOTE] -> For a large environment, Elasticsearch is scaled horizontally by adding more nodes to the cluster(s). 
-
-| Elastic Stack Component | Server Count |
-| ----------------------------------------- | ---------------------- |
-| **Environment Watch Only** | |
-| Elasticsearch nodes | 4 |
-| Kibana | 1 |
-| APM Server | 1 |
-| **Data Grid Audit Only** | |
-| Elasticsearch nodes | 1-15 (scale on demand) |
-| Kibana (optional) | 1 |
-| APM Server | N/A |
-| **Environment Watch and Data Grid Audit** | |
-| Elasticsearch nodes | 4-18 (scale on demand) |
-| Kibana | 1 |
-| APM Server | 1 |
-
-| Elastic Stack Component | Server Count | Disk (TB) |
-| ----------------------------------------- | ---------------------- | ---------- |
-| **Environment Watch Only** | | |
-| Elasticsearch nodes | 4 | 4 |
-| Kibana | 1 | 4 |
-| APM Server | 1 | 4 |
-| **Data Grid Audit Only** | | |
-| Elasticsearch nodes | 1-15 (scale on demand) | 4 |
-| Kibana (optional) | 1 | 4 |
-| APM Server | N/A | - |
-| **Environment Watch and Data Grid Audit** | | |
-| Elasticsearch nodes | 4-18 (scale on demand) | 4 |
-| Kibana | 1 | 4 |
-| APM Server | 1 | 4 |
-
+| Environment Size | Web Servers | Agent Servers | Worker Servers | SQL Distributed Servers |
+| ----------------------------- | ----------- | ------------- | -------------- | ----------------------- |
+| Development | 1 | 2 | 1 | 1 |
+| Small | 2 | 10 | 2 | 2 |
+| Medium | 8 | 20 | 6 | 6 |
+| Large | 12 | 40 | 10 | 12 |
+| X-Large | 24 | 80 | 10 | 16 |
+
+#### Elastic Stack Infrastructure Recommendations
+
+| Environment Size | DG/Audit Data Nodes (Count / Disk) | Environment Watch Data Nodes (Count / Disk) | APM Servers | Kibana Servers |
+| ----------------------------- | ---------------------------------- | ------------------------------------------- | ----------- | -------------- |
+| Development | 1 / 500 GB | 1 / 1 TB | 1 | 1 |
+| Small | 1 / 1 TB | 1 / 2 TB | 1 | 1 |
+| Medium | 2 / 2 TB | 2 / 3 TB | 1 | 1 |
+| Large | 5 / 16 TB | 3 / 8 TB | 2 | 2 |
+| X-Large | 10 / 32 TB | 5 / 16 TB | 3 | 3 |
+
+- Separate Elastic clusters are supported when using both Audit and Environment Watch, but they are not required
+- 
APM/Kibana servers can be load balanced +- Each Elasticsearch node should have at least 4 vCPU and 32 GB RAM. ### Licensing From 62efbacc22c5f8c65bcfde153ebfb36084f15a33 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Wed, 10 Dec 2025 17:09:15 +0530 Subject: [PATCH 02/18] REL-1224050: Basic Retension Content --- docs/elasticsearch_retension_setup.md | 39 +++++++++++++++++++++++++++ 1 file changed, 39 insertions(+) create mode 100644 docs/elasticsearch_retension_setup.md diff --git a/docs/elasticsearch_retension_setup.md b/docs/elasticsearch_retension_setup.md new file mode 100644 index 00000000..ad1611ae --- /dev/null +++ b/docs/elasticsearch_retension_setup.md @@ -0,0 +1,39 @@ +# Elasticsearch Retention Policy - Standard Operating Procedure + +## Introduction + +### Purpose + +This Standard Operating Procedure (SOP) defines retention policies for logs, metrics, and traces collected in Elasticsearch and viewed through Kibana. Proper retention management is critical for: + +- **Storage Optimization** – Prevents excessive disk usage by automatically removing outdated data +- **Performance Maintenance** – Keeps query response times fast by limiting the volume of searchable data +- **Compliance Adherence** – Ensures data is retained long enough to meet regulatory and audit requirements +- **Cost Control** – Reduces infrastructure costs associated with storage expansion + +### Impact of Improper Retention + +Failing to configure appropriate retention policies can lead to: + +- **Excessive Storage Usage** – Uncontrolled data growth consuming available disk space +- **Degraded Query Performance** – Large data volumes slow down search and aggregation operations +- **Risk of Data Loss** – Critical audit data may be prematurely deleted if retention is too short +- **Compliance Violations** – Insufficient retention periods may fail to meet legal or regulatory requirements +- **System Instability** – Disk space exhaustion can cause Elasticsearch cluster failures + 
+--- + +## Retention Strategy + +### Recommended Retention Periods + +The following table provides baseline retention recommendations for different data types: + +| Data Type | Recommended Retention | Rationale | +|-----------|----------------------|-----------| +| **Logs** | 90 days | Optimal balance between troubleshooting capabilities, storage costs, and operational needs | +| **Metrics** | 90 days | Sufficient period for trend analysis, capacity planning, and performance baseline establishment | +| **Traces** | 30 days | Adequate for performance troubleshooting while managing high-volume trace data storage | + +> [!NOTE] +> These recommendations represent industry best practices for Relativity environments. The 90-day retention for logs and metrics provides sufficient historical data for troubleshooting and trend analysis, while the 30-day retention for traces balances performance monitoring needs with storage efficiency. Adjust these periods based on your organization's specific requirements, compliance obligations, and available storage capacity. From e70d829793e9d99adf29e63d4bfa4c43a1de0ee8 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Wed, 10 Dec 2025 17:14:22 +0530 Subject: [PATCH 03/18] REL-1207540: Feedback fix --- docs/elasticsearch_pre_installation_overview.md | 5 ----- 1 file changed, 5 deletions(-) diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md index c8973e2b..2105e658 100644 --- a/docs/elasticsearch_pre_installation_overview.md +++ b/docs/elasticsearch_pre_installation_overview.md @@ -110,11 +110,6 @@ The number of servers and hardware specifications needed to host the Elastic com **A few other key notes and reminders:** - **Tuning for speed** – Review Elastic’s guidance on how to tune the environment for speed [here](https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-search-speed.html). 
-- **Hosting Elastic** – While the guidance below recommends installing the Elastic components on many dedicated servers, there are no hard requirements to isolate Elasticsearch, Kibana, or APM Server on dedicated hosts. As evident with the Development environment specifications, the full Elastic stack can be deployed on a single host if that server can meet the storage needs. - - **Kibana and APM Server hosting:** - - For Small environments, we recommend dedicated servers for Kibana and APM Server, but can consider installing Kibana and/or APM Server on a single server or even on the same server being used as an Elasticsearch node for development and very small environments. - - For Medium environments and above, we strongly recommend installing Kibana and APM Server each on dedicated servers. -- **Nodes in a shared Environment Watch/Data Grid cluster** – In a cluster being used for both Environment Watch and Data Grid Audit, data nodes are not required to be designated for one or the other. Any node in the cluster can support operations for either product, though dedicated node assignments may be needed for certain workloads. 
**Environment Size** From 7323447fa14c463fd0c325e0caf17057848538bc Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Fri, 12 Dec 2025 19:25:26 +0530 Subject: [PATCH 04/18] REL-1224050: More section added --- docs/elasticsearch_retension_setup.md | 39 --- ...asticsearch_retention_policy_guidelines.md | 261 ++++++++++++++++++ 2 files changed, 261 insertions(+), 39 deletions(-) delete mode 100644 docs/elasticsearch_retension_setup.md create mode 100644 docs/elasticsearch_retention_policy_guidelines.md diff --git a/docs/elasticsearch_retension_setup.md b/docs/elasticsearch_retension_setup.md deleted file mode 100644 index ad1611ae..00000000 --- a/docs/elasticsearch_retension_setup.md +++ /dev/null @@ -1,39 +0,0 @@ -# Elasticsearch Retention Policy - Standard Operating Procedure - -## Introduction - -### Purpose - -This Standard Operating Procedure (SOP) defines retention policies for logs, metrics, and traces collected in Elasticsearch and viewed through Kibana. Proper retention management is critical for: - -- **Storage Optimization** – Prevents excessive disk usage by automatically removing outdated data -- **Performance Maintenance** – Keeps query response times fast by limiting the volume of searchable data -- **Compliance Adherence** – Ensures data is retained long enough to meet regulatory and audit requirements -- **Cost Control** – Reduces infrastructure costs associated with storage expansion - -### Impact of Improper Retention - -Failing to configure appropriate retention policies can lead to: - -- **Excessive Storage Usage** – Uncontrolled data growth consuming available disk space -- **Degraded Query Performance** – Large data volumes slow down search and aggregation operations -- **Risk of Data Loss** – Critical audit data may be prematurely deleted if retention is too short -- **Compliance Violations** – Insufficient retention periods may fail to meet legal or regulatory requirements -- **System Instability** – Disk space exhaustion can cause 
Elasticsearch cluster failures
-
----
-
-## Retention Strategy
-
-### Recommended Retention Periods
-
-The following table provides baseline retention recommendations for different data types:
-
-| Data Type | Recommended Retention | Rationale |
-|-----------|----------------------|-----------|
-| **Logs** | 90 days | Optimal balance between troubleshooting capabilities, storage costs, and operational needs |
-| **Metrics** | 90 days | Sufficient period for trend analysis, capacity planning, and performance baseline establishment |
-| **Traces** | 30 days | Adequate for performance troubleshooting while managing high-volume trace data storage |
-
-> [!NOTE]
-> These recommendations represent industry best practices for Relativity environments. The 90-day retention for logs and metrics provides sufficient historical data for troubleshooting and trend analysis, while the 30-day retention for traces balances performance monitoring needs with storage efficiency. Adjust these periods based on your organization's specific requirements, compliance obligations, and available storage capacity.
diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md
new file mode 100644
index 00000000..669bbb09
--- /dev/null
+++ b/docs/elasticsearch_retention_policy_guidelines.md
@@ -0,0 +1,261 @@
+# Elasticsearch Retention Policy - Guidelines
+
+## Introduction
+
+### Purpose
+
+These guidelines define retention policies for logs, metrics, and traces collected in Elasticsearch and viewed through Kibana. 
Proper retention management is critical for: + +- **Storage Optimization** – Prevents excessive disk usage by automatically removing outdated data +- **Performance Maintenance** – Keeps query response times fast by limiting the volume of searchable data +- **Compliance Adherence** – Ensures data is retained long enough to meet regulatory and audit requirements +- **Cost Control** – Reduces infrastructure costs associated with storage expansion + +### Impact of Improper Retention + +Failing to configure appropriate retention policies can lead to: + +- **Excessive Storage Usage** – Uncontrolled data growth consuming available disk space +- **Degraded Query Performance** – Large data volumes slow down search and aggregation operations +- **Risk of Data Loss** – Critical audit data may be prematurely deleted if retention is too short +- **Compliance Violations** – Insufficient retention periods may fail to meet legal or regulatory requirements +- **System Instability** – Disk space exhaustion can cause Elasticsearch cluster failures + +--- + +## Retention Strategy + +### Recommended Retention Periods + +The following table provides baseline retention recommendations for different data types: + +| Data Type | Default Retention | Recommended Retention | Rationale | +|-----------|-------------------|----------------------|-----------| +| **Logs** | 10 days | 90 days | Optimal balance between troubleshooting capabilities, storage costs, and operational needs | +| **Metrics** | 90 days | 90 days | Sufficient period for trend analysis, capacity planning, and performance baseline establishment | +| **Traces** | 10 days | 30 days | Adequate for performance troubleshooting while managing high-volume trace data storage | + +> [!NOTE] +> The default retention values are configured out-of-the-box to minimize storage usage in new installations. 
The recommended retention periods represent industry best practices for Relativity environments, providing sufficient historical data for troubleshooting and trend analysis. Consider upgrading from default to recommended retention based on your organization's specific requirements, compliance obligations, and available storage capacity. + +### Calculating Storage Requirements + +Use the following formula to estimate storage requirements based on your Relativity environment size and desired retention period: + +**Formula:** + +``` +Docs/Day (Daily Documents) = 6M + (Web_Server_Count × 2M) + (Agent_Server_Count × 2M) + (Worker_Server_Count × 400k) + (SQL_Distributed_Server_Count × 500k) + +GiB/Day (Daily Storage) = Docs/Day × 380 / 1024³ + +Total Storage with Retention = GiB/Day × R (where R is retention in days) +``` + +**Example Calculation:** + +For an environment with 1 Web Server, 4 Agent Servers, 1 Worker, and 0 SQL Distributed Servers: + +``` +Docs/Day = 6M + (1 × 2M) + (4 × 2M) + (1 × 400k) + (0 × 500k) + = 16.4M documents/day + +GiB/Day = 16,400,000 × 380 / 1,073,741,824 + ≈ 5.8 GiB/day + +Total Storage (90-day retention) = 5.8 × 90 ≈ 522 GiB (~0.5 TB) +Total Storage (10-day retention) = 5.8 × 10 ≈ 58 GiB +``` + +This calculation helps you understand the storage impact of different retention periods and plan your infrastructure accordingly. + +### Factors Influencing Retention + +When determining the appropriate retention period for your environment, consider: + +- **Environment Size** – Development environments typically use default retention to minimize storage, while Small through X-Large environments benefit from recommended retention (90 days for logs/metrics, 30 days for traces) for better operational visibility and troubleshooting capabilities. + +- **Storage Capacity and Cost** – Evaluate available disk space using the storage calculation formula above. 
Longer retention requires more storage investment, so balance retention needs against available capacity and infrastructure costs. + +- **Regulatory Compliance** – Consult with legal and compliance teams to ensure retention settings meet your organization's regulatory obligations. Some industries and frameworks (HIPAA, SOX, PCI DSS) mandate specific retention periods for audit and logging data. + +--- + +## Configuration Steps + +### Step 1: Create Component Template with Required Retention Policy + +Elastic APM provides the `apm-90d@lifecycle` component template by default for 90-day retention. For 30-day retention (recommended for traces), create a custom component template using the Dev Tools Console in Kibana: + +**Sample Request:** + +```json +PUT _component_template/apm-30d@lifecycle +{ + "template": { + "lifecycle": { + "enabled": true, + "data_retention": "30d" + } + }, + "_meta": { + "managed": true, + "description": "Data stream lifecycle for 30 days of retention" + } +} +``` + +**Sample Output:** + +```json +{ + "acknowledged": true +} +``` + +### Step 2: Update Index Templates + +Update the following index templates to use the appropriate component template based on your retention requirements: + +| Index Template | Data Type | Default Component | Recommended Component | +|---------------|-----------|-------------------|----------------------| +| `logs-apm.app@template` | Logs | `apm-10d@lifecycle` | `apm-90d@lifecycle` | +| `metrics-apm.app@template` | Metrics | `apm-90d@lifecycle` | `apm-90d@lifecycle` | +| `traces-apm@template` | Traces | `apm-10d@lifecycle` | `apm-30d@lifecycle` | + +#### a. 
Get Current Index Template Configuration + +Use the Dev Tools Console in Kibana to retrieve the existing index template settings: + +```json +GET _index_template/logs-apm.app@template +``` + +**Sample Output:** + +```json +{ + "index_templates": [ + { + "name": "logs-apm.app@template", + "index_template": { + "index_patterns": [ + "logs-apm.app.*-*" + ], + "template": { + "settings": { + "index": { + "mode": "standard", + "default_pipeline": "logs-apm.app@default-pipeline", + "final_pipeline": "logs-apm@pipeline" + } + } + }, + "composed_of": [ + "logs@mappings", + "apm@mappings", + "apm@settings", + "logs-apm@settings", + "logs-apm.app-fallback@ilm", + "ecs@mappings", + "logs@custom", + "logs-apm.app@custom", + "apm-10d@lifecycle" + ], + "priority": 210, + "version": 101, + "_meta": { + "managed": true, + "description": "Index template for logs-apm.app.*-*" + }, + "data_stream": { + "hidden": false, + "allow_custom_routing": false + }, + "allow_auto_create": true, + "ignore_missing_component_templates": [ + "logs@custom", + "logs-apm.app@custom", + "logs-apm.app-fallback@ilm" + ] + } + } + ] +} +``` + +#### b. Update the Index Template + +Modify the `composed_of` array to replace the existing lifecycle component template with the desired retention policy. 
In this example, we replace `apm-10d@lifecycle` with `apm-90d@lifecycle` for 90-day retention: + +```json +PUT _index_template/logs-apm.app@template +{ + "index_patterns": [ + "logs-apm.app.*-*" + ], + "template": { + "settings": { + "index": { + "mode": "standard", + "default_pipeline": "logs-apm.app@default-pipeline", + "final_pipeline": "logs-apm@pipeline" + } + } + }, + "composed_of": [ + "logs@mappings", + "apm@mappings", + "apm@settings", + "logs-apm@settings", + "logs-apm.app-fallback@ilm", + "ecs@mappings", + "logs@custom", + "logs-apm.app@custom", + "apm-90d@lifecycle" + ], + "priority": 210, + "version": 101, + "_meta": { + "managed": true, + "description": "Index template for logs-apm.app.*-*" + }, + "data_stream": { + "hidden": false, + "allow_custom_routing": false + }, + "allow_auto_create": true, + "ignore_missing_component_templates": [ + "logs@custom", + "logs-apm.app@custom", + "logs-apm.app-fallback@ilm" + ] +} +``` + +**Sample Output:** + +```json +{ + "acknowledged": true +} +``` + +#### c. Repeat for Other Templates + +Repeat the above steps for `metrics-apm.app@template` and `traces-apm@template`, updating each with the appropriate lifecycle component template based on your retention requirements. + +> [!IMPORTANT] +> Changes to index templates only affect **new data streams** created after the update. Existing data streams will continue using their original retention policies until they are manually updated or recreated. 
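+
+For capacity planning, the sizing formula from the Calculating Storage Requirements section above can be expressed as a short script. This is a minimal sketch under the guide's stated constants (a 6M-document daily baseline, per-server daily document rates, and roughly 380 bytes per document, as implied by the GiB conversion); the function and parameter names are illustrative.

```python
def estimate_daily_volume(web=0, agent=0, worker=0, sql_distributed=0):
    """Estimate daily document count and storage (GiB) for Relativity telemetry.

    Constants come from this guide's sizing formula: a 6M-document daily
    baseline, per-server daily document rates, and ~380 bytes per document.
    """
    docs_per_day = (
        6_000_000
        + web * 2_000_000
        + agent * 2_000_000
        + worker * 400_000
        + sql_distributed * 500_000
    )
    gib_per_day = docs_per_day * 380 / 1024**3
    return docs_per_day, gib_per_day


def total_storage_gib(gib_per_day, retention_days):
    """Total storage needed to hold `retention_days` of data."""
    return gib_per_day * retention_days


# Worked example from this guide: 1 Web, 4 Agent, 1 Worker, 0 SQL Distributed.
docs, daily = estimate_daily_volume(web=1, agent=4, worker=1, sql_distributed=0)
print(f"{docs:,} docs/day, {daily:.1f} GiB/day")          # 16,400,000 docs/day, 5.8 GiB/day
print(f"90-day retention: {total_storage_gib(daily, 90):.0f} GiB")  # 90-day retention: 522 GiB
```

Running the same helper with 10-day retention reproduces the guide's ~58 GiB figure, which makes it easy to compare the storage cost of default versus recommended retention before updating the templates above.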
+
+---
+
+## Advanced Configuration
+
+For more advanced retention management using Index Lifecycle Management (ILM) policies with customizable phases (hot, warm, cold, delete), refer to the official Elasticsearch documentation:
+
+- [Index Lifecycle Management (ILM) Overview](https://www.elastic.co/guide/en/elasticsearch/reference/current/index-lifecycle-management.html)
+- [Configure ILM Policies](https://www.elastic.co/guide/en/elasticsearch/reference/current/set-up-lifecycle-policy.html)
+- [Data Stream Lifecycle vs ILM](https://www.elastic.co/guide/en/elasticsearch/reference/current/data-stream-lifecycle.html)
+
+ILM provides more granular control over data lifecycle phases and allows for tiered storage architectures in large-scale environments.
\ No newline at end of file

From cd4b49d444a88dfb2d397365da649980f5986d23 Mon Sep 17 00:00:00 2001
From: Dinesh Sundhararasu
Date: Mon, 15 Dec 2025 12:33:24 +0530
Subject: [PATCH 05/18] REL-1207540: performance impact md file added

---
 docs/elasticsearch_pre_installation_overview.md | 1 +
 docs/environment_watch_installation.md          | 1 +
 docs/environment_watch_performance_impact.md    | 43 +++++++++++++++++++
 3 files changed, 45 insertions(+)
 create mode 100644 docs/environment_watch_performance_impact.md

diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md
index 2105e658..0cb52735 100644
--- a/docs/elasticsearch_pre_installation_overview.md
+++ b/docs/elasticsearch_pre_installation_overview.md
@@ -134,6 +134,7 @@ The number of servers and hardware specifications needed to host the Elastic com
 - Separate Elastic clusters are supported when using both Audit and Environment Watch, but they are not required
 - APM/Kibana servers can be load balanced
 - Each Elasticsearch node should have at least 4 vCPU and 32 GB RAM.
+- A single Data node can be used for both Audit and Environment Watch in Development environments.
### Licensing diff --git a/docs/environment_watch_installation.md b/docs/environment_watch_installation.md index 2fe71dee..ccc4dbfa 100644 --- a/docs/environment_watch_installation.md +++ b/docs/environment_watch_installation.md @@ -7,6 +7,7 @@ Environment Watch and Data Grid Audit require installation and configuration of The Relativity applications and components that are referenced in this installation guide are packaged together in the Server bundle release. You can find the latest bundle on GitHub [here](https://github.com/relativitydev/server-bundle-release/releases). Environment Watch and Data Grid Audit also require Relativity applications that are available in the Relativity Application Library and not packaged in the bundle or covered in this installation guide (e.g. Pagebase and Telemetry for Environment Watch, Audit for Data Grid Audit, and InfraWatch Services for both). These applications are identified as pre-requisites in relevant sections of this installation guide. +For information about Environment Watch's performance impact on Relativity workloads, see [Environment Watch Performance Impact](./environment_watch_performance_impact.md). The Server bundle is generally released quarterly, with hotfixes provided for critical issues as needed. diff --git a/docs/environment_watch_performance_impact.md b/docs/environment_watch_performance_impact.md new file mode 100644 index 00000000..35bb6b16 --- /dev/null +++ b/docs/environment_watch_performance_impact.md @@ -0,0 +1,43 @@ +# Environment Watch Performance Impact + +## Overview + +This document provides transparent information about the performance overhead Environment Watch introduces to standard Relativity workloads, based on comprehensive testing in a production-like environment. + +## Performance Impact on Relativity Workloads + +Environment Watch has been rigorously tested to ensure minimal impact on your Relativity operations. 
Here's what you can expect: + +### Performance Results Summary + +| Workload Category | Impact | Summary | +|------------------|--------|------------------| +| **Processing** | **+450% faster** | Processing performance has improved dramatically, delivering a 450% speed increase that will noticeably accelerate end-to-end workflows. | +| **Review (Conversion)** | **+5% faster** | Review operations saw a modest 5% improvement, providing slightly faster document conversion without any workflow disruption. | +| **Imaging & Production** | **Stable (±4%)** | Imaging and production performance remained stable, with changes within a ±4% range, resulting in no meaningful impact to customer workflows. | +| **Data Transfer** | **Mixed results** | Native file operations improved by 4–38%, offering smoother import/export performance. Image-based workflows saw some declines—most notably a 157% slowdown in RIP image export—which may impact image-heavy projects. | + +## Test Environment Specifications + +### Server Configuration Summary + +| Server Role | Quantity | Specs | +|-------------|----------|-------| +| Web Servers | 3 | Standard D8s v5 (8 vCPUs, 32 GiB RAM) | +| Core Agent Servers | 6 | Standard D8s v5 (8 vCPUs, 32 GiB RAM) | +| Processing Workers | 4 | Standard D16ls v6 (16 vCPUs, 32 GiB RAM) | +| Data Grid Servers | 3 | Standard D8s v4 (8 vCPUs, 32 GiB RAM) | +| SQL Primary | 1 | Standard D8as v5 (8 vcpus, 32 GiB RAM) | +| SQL Invariant | 1 | Standard DS13 v2 (8 vcpus, 56 GiB RAM) | +| SQL Distributed | 1 | Standard DS14-8 v2 (8 vcpus, 112 GiB RAM) | +| Analytics Server | 1 | Standard D8s v4 (8 vCPUs, 32 GiB RAM) | +| Conversion Agent | 1 | Standard D8s v5 (8 vCPUs, 32 GiB RAM) | +| DtSearch Agent | 1 | Standard D8ls v5 (8 vCPUs, 16 GiB RAM) | +| RabbitMQ Server | 1 | Standard D8ls v5 (8 vCPUs, 16 GiB RAM) | +| PDF Server | 1 | Standard D8s v5 (8 vCPUs, 32 GiB RAM) | + +This comprehensive test environment, ranging from Small to Medium scale, mirrors typical production 
Relativity deployments and ensures our performance results are representative of real-world customer workloads. + +## Conclusion + +Environment Watch delivers significant performance improvements for processing workloads while maintaining stable performance for most other Relativity operations. Organizations with heavy image-based data transfer workflows should evaluate their specific use cases to ensure alignment with their performance requirements. From b1908d6bdd67bb5343690f79205b92683abadd10 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Mon, 15 Dec 2025 15:34:22 +0530 Subject: [PATCH 06/18] REL-1207540: Self feedback/observation fix --- ...asticsearch_retention_policy_guidelines.md | 15 +++--- docs/elasticsearch_setup_development.md | 4 ++ .../post-install-verification.md | 5 ++ .../retention-policy.md | 54 +++++++++++++++++++ 4 files changed, 72 insertions(+), 6 deletions(-) create mode 100644 docs/environment-watch/post-install-verification/retention-policy.md diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index 669bbb09..5a6eff1d 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -29,11 +29,11 @@ Failing to configure appropriate retention policies can lead to: The following table provides baseline retention recommendations for different data types: -| Data Type | Default Retention | Recommended Retention | Rationale | -|-----------|-------------------|----------------------|-----------| -| **Logs** | 10 days | 90 days | Optimal balance between troubleshooting capabilities, storage costs, and operational needs | -| **Metrics** | 90 days | 90 days | Sufficient period for trend analysis, capacity planning, and performance baseline establishment | -| **Traces** | 10 days | 30 days | Adequate for performance troubleshooting while managing high-volume trace data storage | +| Data Type | Default Retention | 
Recommended Retention | +|-----------|-------------------|----------------------| +| **Logs** | 10 days | 90 days | +| **Metrics** | 90 days | 90 days | +| **Traces** | 10 days | 30 days | > [!NOTE] > The default retention values are configured out-of-the-box to minimize storage usage in new installations. The recommended retention periods represent industry best practices for Relativity environments, providing sufficient historical data for troubleshooting and trend analysis. Consider upgrading from default to recommended retention based on your organization's specific requirements, compliance obligations, and available storage capacity. @@ -90,6 +90,7 @@ Elastic APM provides the `apm-90d@lifecycle` component template by default for 9 **Sample Request:** ```json +# Here apm-30d@lifecycle is the name of the component template PUT _component_template/apm-30d@lifecycle { "template": { @@ -128,6 +129,7 @@ Update the following index templates to use the appropriate component template b Use the Dev Tools Console in Kibana to retrieve the existing index template settings: ```json +# Here logs-apm.app@template is the name of the index template GET _index_template/logs-apm.app@template ``` @@ -186,9 +188,10 @@ GET _index_template/logs-apm.app@template #### b. Update the Index Template -Modify the `composed_of` array to replace the existing lifecycle component template with the desired retention policy. In this example, we replace `apm-10d@lifecycle` with `apm-90d@lifecycle` for 90-day retention: +From the output above, copy the entire `index_template` section and modify the `composed_of` array to replace the existing lifecycle component template with the desired retention policy. 
In this example, we replace `apm-10d@lifecycle` with `apm-90d@lifecycle` for 90-day retention: ```json +# Here logs-apm.app@template is the name of the index template PUT _index_template/logs-apm.app@template { "index_patterns": [ diff --git a/docs/elasticsearch_setup_development.md b/docs/elasticsearch_setup_development.md index ce77490e..e2d0bdfa 100644 --- a/docs/elasticsearch_setup_development.md +++ b/docs/elasticsearch_setup_development.md @@ -425,6 +425,10 @@ If you download a .zip or other file from the internet, Windows may block the fi 3. The word `green` in the response means the cluster is healthy. The word `yellow` in the response means the cluster is partially healthy. If you see `red`, investigate further. +4. Adjust Retention Period (Optional) + + If the default retention periods do not meet your requirements, you can modify them according to your organization's needs. For detailed guidance on retention policies and configuration steps, see [Elasticsearch Retention Policy Guidelines](elasticsearch_retention_policy_guidelines.md). + ## Next Step [Click here for the next step](relativity_server_cli_setup.md) \ No newline at end of file diff --git a/docs/environment-watch/post-install-verification.md b/docs/environment-watch/post-install-verification.md index 8dab9d2f..9bba66ab 100644 --- a/docs/environment-watch/post-install-verification.md +++ b/docs/environment-watch/post-install-verification.md @@ -36,6 +36,11 @@ This section covers how to ensure that the alerting mechanism is working as expe [Click here for Alerts Verification](post-install-verification/alert-overview.md) +### 4. Retention Policy +This section guides through verifying that the data retention policies are properly configured for APM data streams. 
+ +[Click here for Retention Policy Verification](post-install-verification/retention-policy.md) + > [!NOTE] > All Kibana dashboards are designed and optimized for **1920x1080** screen resolution to ensure optimal viewing experience and proper layout formatting. diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md new file mode 100644 index 00000000..d2a5e4da --- /dev/null +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -0,0 +1,54 @@ +# Verify Retention Policy Configuration + +This verification step confirms that the retention period (data lifecycle) is properly configured for your APM data streams. + +## Verification Steps + +1. Navigate to Kibana Dev Tools Console: + - Open Kibana in your web browser + - Click on **Dev Tools** in the left navigation menu + +2. Run the following queries to verify retention policies for each data stream type: + +### Verify Logs Retention Policy + +``` +GET /_data_stream/logs-apm.app*?filter_path=data_streams.name,data_streams.lifecycle +``` + +### Verify Metrics Retention Policy + +``` +GET /_data_stream/metrics-apm.app*?filter_path=data_streams.name,data_streams.lifecycle +``` + +### Verify Traces Retention Policy + +``` +GET /_data_stream/traces-apm*?filter_path=data_streams.name,data_streams.lifecycle +``` + +## Expected Results + +Each query should return the data stream names along with their configured lifecycle settings. 
The response will look similar to:
+
+```json
+{
+  "data_streams": [
+    {
+      "name": "logs-apm.app-default",
+      "lifecycle": {
+        "enabled": true,
+        "data_retention": "90d"
+      }
+    }
+  ]
+}
+```
+
+## What to Check
+
+- **enabled**: Should be `true` if data lifecycle management is active
+- **data_retention**: Shows the configured retention period (e.g., "30d" for 30 days, "90d" for 90 days)
+
+If the lifecycle settings don't match your expected configuration, you may need to update your retention period according to [elasticsearch_retention_policy_guidelines.md](../../elasticsearch_retention_policy_guidelines.md).

From f5f4f81507b4f54f3e3f3732f5bff133591d6b4b Mon Sep 17 00:00:00 2001
From: Dinesh Sundhararasu
Date: Mon, 15 Dec 2025 15:37:06 +0530
Subject: [PATCH 07/18] REL-1224050: Banner added in retention policy verification

---
 .../post-install-verification/retention-policy.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md
index d2a5e4da..440624cd 100644
--- a/docs/environment-watch/post-install-verification/retention-policy.md
+++ b/docs/environment-watch/post-install-verification/retention-policy.md
@@ -1,3 +1,6 @@
+# Post-Install Verification for Rentension Policy
+![Post-Install Verification Banner](../../../resources/post-install-verification-images/Post-installation-verification.svg)
+
 # Verify Retention Policy Configuration
 
 This verification step confirms that the retention period (data lifecycle) is properly configured for your APM data streams.
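The lifecycle responses returned by the `GET /_data_stream/...?filter_path=...` verification queries above can also be checked in a script rather than by eye. The following is a minimal sketch, not part of the product docs: the helper name, the sample payload, and the expected-retention values are illustrative assumptions.

```python
import json

def check_lifecycle(response_body: str, expected: str) -> list[str]:
    """Return problems found in a filter_path-trimmed _data_stream response."""
    problems = []
    for stream in json.loads(response_body).get("data_streams", []):
        lifecycle = stream.get("lifecycle", {})
        if not lifecycle.get("enabled"):
            # Data lifecycle management is not active for this stream.
            problems.append(f"{stream['name']}: lifecycle disabled")
        elif lifecycle.get("data_retention") != expected:
            problems.append(
                f"{stream['name']}: retention "
                f"{lifecycle.get('data_retention')} (expected {expected})"
            )
    return problems

# Sample payload shaped like the Expected Results section of the verification page.
sample = """{"data_streams": [{"name": "logs-apm.app-default",
  "lifecycle": {"enabled": true, "data_retention": "90d"}}]}"""
print(check_lifecycle(sample, "90d"))  # -> []
```

An empty list means the stream matches the retention you expect; any entry names the stream whose settings need to be revisited.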
From 04340116309376f5c19ce0d50611c0550074189b Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Mon, 15 Dec 2025 15:41:24 +0530 Subject: [PATCH 08/18] REL-1224050: Spelling fix --- docs/elasticsearch_retention_policy_guidelines.md | 2 +- .../post-install-verification/retention-policy.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index 5a6eff1d..db6bb7f3 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -4,7 +4,7 @@ ### Purpose -This guidelines defines retention policies for logs, metrics, and traces collected in Elasticsearch and viewed through Kibana. Proper retention management is critical for: +These guidelines define retention policies for logs, metrics, and traces collected in Elasticsearch and viewed through Kibana. Proper retention management is critical for: - **Storage Optimization** – Prevents excessive disk usage by automatically removing outdated data - **Performance Maintenance** – Keeps query response times fast by limiting the volume of searchable data diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md index 440624cd..559bda7d 100644 --- a/docs/environment-watch/post-install-verification/retention-policy.md +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -1,4 +1,4 @@ -# Post-Install Verification for Rentension Policy +# Post-Install Verification for Retention Policy ![Post-Install Verification Banner](../../../resources/post-install-verification-images/Post-installation-verification.svg) # Verify Retention Policy Configuration From ae1187c9650c29cd514c03a235e5016a02faefef Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Mon, 15 Dec 2025 15:46:37 +0530 Subject: [PATCH 09/18] REL-1207540: Feedback fix in 
Licensing --- docs/elasticsearch_pre_installation_overview.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md index 0cb52735..0c4c882f 100644 --- a/docs/elasticsearch_pre_installation_overview.md +++ b/docs/elasticsearch_pre_installation_overview.md @@ -138,7 +138,7 @@ The number of servers and hardware specifications needed to host the Elastic com ### Licensing -Environment Watch only requires a free and open ("Basic") Elastic license. By default, new installations have a Basic license that never expires. If you would like to utilize additional Elastic features from the Platinum or Enterprise subscription, you will need to purchase the license separately. +Both Environment Watch and Data Grid Audit require only a free and open ("Basic") Elastic license. By default, new installations have a Basic license that never expires. If you would like to utilize additional Elastic features from the Platinum or Enterprise subscription, you will need to purchase the license separately. If you have used Elasticsearch for the optional Data Grid Audit feature on Relativity Server prior to April 2025, you would have been using a Platinum license key provided by Relativity. Effective with Server 2024 Patch 1, the Platinum license is no longer required for Data Grid Audit and Relativity will not provide a Platinum license for any new deployments of Data Grid Audit. All existing Data Grid Audit customers will have until early 2026 to adopt Relativity Server 2024 and update to a Basic Elastic license. 
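For operators who want to confirm which license level a cluster is actually running after the change described above, the license API can be queried from the Kibana Dev Tools Console (a Basic license reports `"type" : "basic"` in the response):

```
GET _license
```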
From aba3989c0330db01bef5bab4c29fc819d490963c Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 14:28:09 +0530 Subject: [PATCH 10/18] REL-1224050: Delete existing stream step added --- ...asticsearch_retention_policy_guidelines.md | 36 +++++++++++++++++++ 1 file changed, 36 insertions(+) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index db6bb7f3..9b79d986 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -251,6 +251,42 @@ Repeat the above steps for `metrics-apm.app@template` and `traces-apm@template`, > [!IMPORTANT] > Changes to index templates only affect **new data streams** created after the update. Existing data streams will continue using their original retention policies until they are manually updated or recreated. +### Step 3: Delete Existing Data Streams (Setup Time Only) + +> [!WARNING] +> This step should only be performed once during initial setup. Deleting data streams will permanently remove all data and indices under those data streams. + +After updating the index templates with new retention policies, you need to delete the existing data streams so they can be recreated with the updated retention settings. Use the Dev Tools Console in Kibana to run the following commands: + +**Delete Logs Data Stream:** + +```json +DELETE _data_stream/logs-apm.app* +``` + +**Delete Metrics Data Stream:** + +```json +DELETE _data_stream/metrics-apm.app* +``` + +**Delete Traces Data Stream:** + +```json +DELETE _data_stream/traces-apm* +``` + +**Sample Output for each command:** + +```json +{ + "acknowledged": true +} +``` + +> [!NOTE] +> After deleting the data streams, new data streams will be automatically created with the updated retention policies when APM agents begin sending new telemetry data. 
+ --- ## Advanced Configuration From f6b2a649bd477fab9a9449c454b619c89d209f82 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 14:47:58 +0530 Subject: [PATCH 11/18] REL-1224050: Red color highlight removal --- .../elasticsearch_retention_policy_guidelines.md | 16 ++++++++-------- 1 file changed, 8 insertions(+), 8 deletions(-) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index 9b79d986..cbd39ede 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -90,8 +90,8 @@ Elastic APM provides the `apm-90d@lifecycle` component template by default for 9 **Sample Request:** ```json -# Here apm-30d@lifecycle is the name of the component template -PUT _component_template/apm-30d@lifecycle +# Here apm-30d@lifecycle is the name of the component template +PUT _component_template/apm-30d@lifecycle { "template": { "lifecycle": { @@ -129,8 +129,8 @@ Update the following index templates to use the appropriate component template b Use the Dev Tools Console in Kibana to retrieve the existing index template settings: ```json -# Here logs-apm.app@template is the name of the index template -GET _index_template/logs-apm.app@template +# Here logs-apm.app@template is the name of the index template +GET _index_template/logs-apm.app@template ``` **Sample Output:** @@ -192,7 +192,7 @@ From the output above, copy the entire `index_template` section and modify the ` ```json # Here logs-apm.app@template is the name of the index template -PUT _index_template/logs-apm.app@template +PUT _index_template/logs-apm.app@template { "index_patterns": [ "logs-apm.app.*-*" @@ -261,19 +261,19 @@ After updating the index templates with new retention policies, you need to dele **Delete Logs Data Stream:** ```json -DELETE _data_stream/logs-apm.app* +DELETE _data_stream/logs-apm.app* ``` **Delete Metrics Data Stream:** ```json -DELETE 
_data_stream/metrics-apm.app* +DELETE _data_stream/metrics-apm.app* ``` **Delete Traces Data Stream:** ```json -DELETE _data_stream/traces-apm* +DELETE _data_stream/traces-apm* ``` **Sample Output for each command:** From 8a36ce187a17e1d8fbbe3c80e901cd43fcf13464 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 14:54:28 +0530 Subject: [PATCH 12/18] REL-1224050: Red color highlight removal --- docs/elasticsearch_retention_policy_guidelines.md | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index cbd39ede..df851b84 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -89,7 +89,7 @@ Elastic APM provides the `apm-90d@lifecycle` component template by default for 9 **Sample Request:** -```json +``` # Here apm-30d@lifecycle is the name of the component template PUT _component_template/apm-30d@lifecycle { @@ -128,7 +128,7 @@ Update the following index templates to use the appropriate component template b Use the Dev Tools Console in Kibana to retrieve the existing index template settings: -```json +``` # Here logs-apm.app@template is the name of the index template GET _index_template/logs-apm.app@template ``` @@ -190,7 +190,7 @@ GET _index_template/logs-apm.app@template From the output above, copy the entire `index_template` section and modify the `composed_of` array to replace the existing lifecycle component template with the desired retention policy. 
In this example, we replace `apm-10d@lifecycle` with `apm-90d@lifecycle` for 90-day retention: -```json +``` # Here logs-apm.app@template is the name of the index template PUT _index_template/logs-apm.app@template { @@ -260,19 +260,19 @@ After updating the index templates with new retention policies, you need to dele **Delete Logs Data Stream:** -```json +``` DELETE _data_stream/logs-apm.app* ``` **Delete Metrics Data Stream:** -```json +``` DELETE _data_stream/metrics-apm.app* ``` **Delete Traces Data Stream:** -```json +``` DELETE _data_stream/traces-apm* ``` From aea42c90ba12d89da06e4793c68cb19effc357df Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 20:15:27 +0530 Subject: [PATCH 13/18] REL-1224050: Minor feedback fix --- docs/elasticsearch_pre_installation_overview.md | 2 +- docs/elasticsearch_retention_policy_guidelines.md | 2 +- docs/environment-watch/post-install-verification.md | 2 +- .../post-install-verification/retention-policy.md | 4 ++-- 4 files changed, 5 insertions(+), 5 deletions(-) diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md index 0c4c882f..9e3ef0e1 100644 --- a/docs/elasticsearch_pre_installation_overview.md +++ b/docs/elasticsearch_pre_installation_overview.md @@ -132,7 +132,7 @@ The number of servers and hardware specifications needed to host the Elastic com | X-Large | 10 / 32 TB | 5 / 16 TB | 3 | 3 | - Separate Elastic clusters is supported when using both Audit/Environment Watch but not required -- APM/Kibana servers can be load balanced +- APM(Application Performance Monitoring)/Kibana servers can be load balanced - Each Elasticsearch node should have at least 4 vCPU and 32 GB RAM. - A single Data node can be used for both Audit and Environment Watch in Development environments. 
diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index df851b84..62fc923d 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -77,7 +77,7 @@ When determining the appropriate retention period for your environment, consider - **Storage Capacity and Cost** – Evaluate available disk space using the storage calculation formula above. Longer retention requires more storage investment, so balance retention needs against available capacity and infrastructure costs. -- **Regulatory Compliance** – Consult with legal and compliance teams to ensure retention settings meet your organization's regulatory obligations. Some industries and frameworks (HIPAA, SOX, PCI DSS) mandate specific retention periods for audit and logging data. +- **Regulatory Compliance** – Consult with legal and compliance teams to ensure retention settings meet your organization's regulatory obligations. Some industries and frameworks (HIPAA (Health Insurance Portability and Accountability Act), SOX (Sarbanes-Oxley Act), PCI DSS (Payment Card Industry Data Security Standard)) mandate specific retention periods for audit and logging data. --- diff --git a/docs/environment-watch/post-install-verification.md b/docs/environment-watch/post-install-verification.md index 9bba66ab..58e459af 100644 --- a/docs/environment-watch/post-install-verification.md +++ b/docs/environment-watch/post-install-verification.md @@ -37,7 +37,7 @@ This section covers how to ensure that the alerting mechanism is working as expe [Click here for Alerts Verification](post-install-verification/alert-overview.md) ### 4. Retention Policy -This section guides through verifying that the data retention policies are properly configured for APM data streams. +This section guides through verifying that the data retention policies are properly configured for APM(Application Performance Monitoring) data streams. 
[Click here for Retention Policy Verification](post-install-verification/retention-policy.md) diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md index 559bda7d..cffec5ad 100644 --- a/docs/environment-watch/post-install-verification/retention-policy.md +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -1,7 +1,7 @@ # Post-Install Verification for Retention Policy ![Post-Install Verification Banner](../../../resources/post-install-verification-images/Post-installation-verification.svg) -# Verify Retention Policy Configuration +## Verify Retention Policy Configuration This verification step confirms that the retention period (data lifecycle) is properly configured for your APM data streams. @@ -52,6 +52,6 @@ Each query should return the data stream names along with their configured lifec ## What to Check - **enabled**: Should be `true` if data lifecycle management is active -- **data_retention**: Shows the configured retention period (e.g., "30d" for 30 days, "90d" for 90 days) +- **data_retention**: Indicates the configured retention period (e.g., "30d" for 30 days, "90d" for 90 days) If the lifecycle settings don't match your expected configuration, you may need to update your retention period according to [elasticsearch_retention_policy_guidelines.md](../../elasticsearch_retention_policy_guidelines.md). 
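As a rough aid for the storage-versus-retention trade-off discussed in the guidelines, a retention string such as "30d" can be turned into a capacity estimate. This is a sketch under stated assumptions: the 5 GB/day ingest figure is invented for illustration, and only day-based retention units are handled.

```python
def retention_days(retention: str) -> int:
    """Parse a day-based retention string like '30d' or '90d' into days."""
    if not retention.endswith("d"):
        raise ValueError(f"unsupported retention unit in {retention!r}")
    return int(retention[:-1])

def estimated_storage_gb(daily_ingest_gb: float, retention: str) -> float:
    """Storage needed to hold `retention` worth of data at a steady ingest rate."""
    return daily_ingest_gb * retention_days(retention)

# Example: 5 GB/day of trace data kept for the recommended 30 days.
print(estimated_storage_gb(5.0, "30d"))  # -> 150.0
```

Estimates like this are only a baseline; actual disk usage also depends on replicas, index overhead, and compression.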
From 2d81e28b07603f32f0f12bfe7f955e3864036939 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 20:16:40 +0530 Subject: [PATCH 14/18] REL-1224050: Feedback fix --- .../post-install-verification/retention-policy.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md index cffec5ad..cada0748 100644 --- a/docs/environment-watch/post-install-verification/retention-policy.md +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -3,7 +3,7 @@ ## Verify Retention Policy Configuration -This verification step confirms that the retention period (data lifecycle) is properly configured for your APM data streams. +This verification step confirms that the retention period (data lifecycle) is properly configured for your APM(Application Performance Monitoring) data streams. ## Verification Steps From ccf2acf8007de254577fc44b059518eb96f5e414 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Tue, 16 Dec 2025 20:25:17 +0530 Subject: [PATCH 15/18] REL-1224050: Feedback fix --- docs/elasticsearch_pre_installation_overview.md | 2 +- docs/environment-watch/post-install-verification.md | 2 +- .../post-install-verification/retention-policy.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/elasticsearch_pre_installation_overview.md b/docs/elasticsearch_pre_installation_overview.md index 9e3ef0e1..cb3d76a3 100644 --- a/docs/elasticsearch_pre_installation_overview.md +++ b/docs/elasticsearch_pre_installation_overview.md @@ -132,7 +132,7 @@ The number of servers and hardware specifications needed to host the Elastic com | X-Large | 10 / 32 TB | 5 / 16 TB | 3 | 3 | - Separate Elastic clusters is supported when using both Audit/Environment Watch but not required -- APM(Application Performance Monitoring)/Kibana servers can be load balanced +- Application 
Performance Monitoring(APM)/Kibana servers can be load balanced - Each Elasticsearch node should have at least 4 vCPU and 32 GB RAM. - A single Data node can be used for both Audit and Environment Watch in Development environments. diff --git a/docs/environment-watch/post-install-verification.md b/docs/environment-watch/post-install-verification.md index 58e459af..658b35b3 100644 --- a/docs/environment-watch/post-install-verification.md +++ b/docs/environment-watch/post-install-verification.md @@ -37,7 +37,7 @@ This section covers how to ensure that the alerting mechanism is working as expe [Click here for Alerts Verification](post-install-verification/alert-overview.md) ### 4. Retention Policy -This section guides through verifying that the data retention policies are properly configured for APM(Application Performance Monitoring) data streams. +This section guides through verifying that the data retention policies are properly configured for Application Performance Monitoring(APM) data streams. [Click here for Retention Policy Verification](post-install-verification/retention-policy.md) diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md index cada0748..147c5d52 100644 --- a/docs/environment-watch/post-install-verification/retention-policy.md +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -3,7 +3,7 @@ ## Verify Retention Policy Configuration -This verification step confirms that the retention period (data lifecycle) is properly configured for your APM(Application Performance Monitoring) data streams. +This verification step confirms that the retention period (data lifecycle) is properly configured for your Application Performance Monitoring(APM) data streams. 
## Verification Steps From fb066b53a86deb115ee6ca2044e74771f081593b Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Wed, 17 Dec 2025 12:15:12 +0530 Subject: [PATCH 16/18] REL-1224050: Feedback fix from SQE --- ...asticsearch_retention_policy_guidelines.md | 152 +++++++++++++++++- .../retention-policy.md | 6 +- 2 files changed, 148 insertions(+), 10 deletions(-) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index 62fc923d..1d474bb1 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -87,6 +87,12 @@ When determining the appropriate retention period for your environment, consider Elastic APM provides the `apm-90d@lifecycle` component template by default for 90-day retention. For 30-day retention (recommended for traces), create a custom component template using the Dev Tools Console in Kibana: +**Navigate to Dev Tools Console:** + +1. Open Kibana in your web browser +2. Click on **Dev Tools** in the left navigation menu (or use the search bar at the top to find "Dev Tools") +3. You'll see the Console interface where you can execute Elasticsearch queries + **Sample Request:** ``` @@ -124,13 +130,15 @@ Update the following index templates to use the appropriate component template b | `metrics-apm.app@template` | Metrics | `apm-90d@lifecycle` | `apm-90d@lifecycle` | | `traces-apm@template` | Traces | `apm-10d@lifecycle` | `apm-30d@lifecycle` | -#### a. Get Current Index Template Configuration +#### a. 
Update Logs Index Template -Use the Dev Tools Console in Kibana to retrieve the existing index template settings: +First, use the Dev Tools Console in Kibana to retrieve the existing index template settings using a GET request: + +**Sample Request:** ``` -# Here logs-apm.app@template is the name of the index template -GET _index_template/logs-apm.app@template +# Here logs-apm.app@template is the name of the index template +GET _index_template/logs-apm.app@template ``` **Sample Output:** @@ -186,9 +194,9 @@ GET _index_template/logs-apm.app@template } ``` -#### b. Update the Index Template +Then, copy the `index_template` section from the output above and update it by replacing `apm-10d@lifecycle` with `apm-90d@lifecycle` in the `composed_of` array using a PUT request: -From the output above, copy the entire `index_template` section and modify the `composed_of` array to replace the existing lifecycle component template with the desired retention policy. In this example, we replace `apm-10d@lifecycle` with `apm-90d@lifecycle` for 90-day retention: +**Sample Request:** ``` # Here logs-apm.app@template is the name of the index template @@ -244,9 +252,137 @@ PUT _index_template/logs-apm.app@template } ``` -#### c. Repeat for Other Templates +#### b. Update Metrics Index Template (Optional) + +The `metrics-apm.app@template` already uses the `apm-90d@lifecycle` component template by default, so it does not require any updates if you are using the recommended 90-day retention period. If you need a different retention period, retrieve the current template configuration using a GET request and update it following the same pattern as the logs template: + +**Sample Request:** + +``` +# Get the current template configuration +GET _index_template/metrics-apm.app@template +``` + +#### c. 
Update Traces Index Template + +For traces, retrieve the current template configuration using a GET request: + +**Sample Request:** + +``` +# Get the current template configuration +GET _index_template/traces-apm@template +``` + +**Sample Output:** + +```json +{ + "index_templates": [ + { + "name": "traces-apm@template", + "index_template": { + "index_patterns": [ + "traces-apm*" + ], + "template": { + "settings": { + "index": { + "mode": "standard", + "default_pipeline": "traces-apm@default-pipeline", + "final_pipeline": "traces-apm@pipeline" + } + } + }, + "composed_of": [ + "traces@mappings", + "apm@mappings", + "apm@settings", + "traces-apm@settings", + "traces-apm-fallback@ilm", + "ecs@mappings", + "traces@custom", + "traces-apm@custom", + "apm-10d@lifecycle" + ], + "priority": 210, + "version": 101, + "_meta": { + "managed": true, + "description": "Index template for traces-apm*" + }, + "data_stream": { + "hidden": false, + "allow_custom_routing": false + }, + "allow_auto_create": true, + "ignore_missing_component_templates": [ + "traces@custom", + "traces-apm@custom", + "traces-apm-fallback@ilm" + ] + } + } + ] +} +``` + +Then, copy the `index_template` section from the output above and update it by replacing `apm-10d@lifecycle` with `apm-30d@lifecycle` (which you created in Step 1) in the `composed_of` array using a PUT request: + +**Sample Request:** + +``` +PUT _index_template/traces-apm@template +{ + "index_patterns": [ + "traces-apm*" + ], + "template": { + "settings": { + "index": { + "mode": "standard", + "default_pipeline": "traces-apm@default-pipeline", + "final_pipeline": "traces-apm@pipeline" + } + } + }, + "composed_of": [ + "traces@mappings", + "apm@mappings", + "apm@settings", + "traces-apm@settings", + "traces-apm-fallback@ilm", + "ecs@mappings", + "traces@custom", + "traces-apm@custom", + "apm-30d@lifecycle" + ], + "priority": 210, + "version": 101, + "_meta": { + "managed": true, + "description": "Index template for traces-apm*" + }, + 
"data_stream": { + "hidden": false, + "allow_custom_routing": false + }, + "allow_auto_create": true, + "ignore_missing_component_templates": [ + "traces@custom", + "traces-apm@custom", + "traces-apm-fallback@ilm" + ] +} +``` + +**Sample Output:** -Repeat the above steps for `metrics-apm.app@template` and `traces-apm@template`, updating each with the appropriate lifecycle component template based on your retention requirements. +```json +{ + "acknowledged": true +} +``` > [!IMPORTANT] > Changes to index templates only affect **new data streams** created after the update. Existing data streams will continue using their original retention policies until they are manually updated or recreated. diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md index 147c5d52..58634ae2 100644 --- a/docs/environment-watch/post-install-verification/retention-policy.md +++ b/docs/environment-watch/post-install-verification/retention-policy.md @@ -42,7 +42,9 @@ Each query should return the data stream names along with their configured lifec "name": "logs-apm.app-default", "lifecycle": { "enabled": true, - "data_retention": "90d" + "data_retention": "90d", + "effective_retention": "90d", + "retention_determined_by": "data_stream_configuration" } } ] @@ -51,7 +53,7 @@ Each query should return the data stream names along with their configured lifec ## What to Check -- **enabled**: Should be `true` if data lifecycle management is active +- **enabled**: Should be `true` - **data_retention**: Indicates the configured retention period (e.g., "30d" for 30 days, "90d" for 90 days) If the lifecycle settings don't match your expected configuration, you may need to update your retention period according to [elasticsearch_retention_policy_guidelines.md](../../elasticsearch_retention_policy_guidelines.md). 
From efba91f4693d4154020f4c76871593e29b72b6d6 Mon Sep 17 00:00:00 2001 From: Dinesh Sundhararasu Date: Wed, 17 Dec 2025 16:06:17 +0530 Subject: [PATCH 17/18] REL-1224050: Feedback fix from SQE --- ...asticsearch_retention_policy_guidelines.md | 112 +++++++++++++++++- .../retention-policy.md | 6 +- 2 files changed, 115 insertions(+), 3 deletions(-) diff --git a/docs/elasticsearch_retention_policy_guidelines.md b/docs/elasticsearch_retention_policy_guidelines.md index 1d474bb1..62c6069e 100644 --- a/docs/elasticsearch_retention_policy_guidelines.md +++ b/docs/elasticsearch_retention_policy_guidelines.md @@ -254,7 +254,7 @@ PUT _index_template/logs-apm.app@template #### b. Update Metrics Index Template (Optional) -The `metrics-apm.app@template` already uses the `apm-90d@lifecycle` component template by default, so it does not require any updates if you are using the recommended 90-day retention period. If you need a different retention period, retrieve the current template configuration using a GET request and update it following the same pattern as the logs template: +The `metrics-apm.app@template` already uses the `apm-90d@lifecycle` component template by default, so it does not require any updates if you are using the recommended 90-day retention period. 
+If you need a different retention period, retrieve the current template configuration using a GET request:
 
 **Sample Request:**
 
@@ -263,6 +263,116 @@ The `metrics-apm.app@template` already uses the `apm-90d@lifecycle` component te
 ```
 GET _index_template/metrics-app.app@template
 ```
 
+**Sample Output:**
+
+```json
+{
+  "index_templates": [
+    {
+      "name": "metrics-apm.app@template",
+      "index_template": {
+        "index_patterns": [
+          "metrics-apm.app.*-*"
+        ],
+        "template": {
+          "settings": {
+            "index": {
+              "mode": "standard",
+              "default_pipeline": "metrics-apm.app@default-pipeline",
+              "final_pipeline": "metrics-apm@pipeline"
+            }
+          }
+        },
+        "composed_of": [
+          "metrics@mappings",
+          "apm@mappings",
+          "apm@settings",
+          "metrics-apm@settings",
+          "metrics-apm.app-fallback@ilm",
+          "ecs@mappings",
+          "metrics@custom",
+          "metrics-apm.app@custom",
+          "apm-90d@lifecycle"
+        ],
+        "priority": 210,
+        "version": 101,
+        "_meta": {
+          "managed": true,
+          "description": "Index template for metrics-apm.app.*-*"
+        },
+        "data_stream": {
+          "hidden": false,
+          "allow_custom_routing": false
+        },
+        "allow_auto_create": true,
+        "ignore_missing_component_templates": [
+          "metrics@custom",
+          "metrics-apm.app@custom",
+          "metrics-apm.app-fallback@ilm"
+        ]
+      }
+    }
+  ]
+}
+```
+
+Then copy the `index_template` section from the output above, replace `apm-90d@lifecycle` in the `composed_of` array with the lifecycle component template that matches your desired retention period, and apply the change with a PUT request. The sample below shows the request structure with the default `apm-90d@lifecycle` entry still in place; substitute your own lifecycle component template before submitting:
+
+**Sample Request:**
+
+```
+PUT _index_template/metrics-apm.app@template
+{
+  "index_patterns": [
+    "metrics-apm.app.*-*"
+  ],
+  "template": {
+    "settings": {
+      "index": {
+        "mode": "standard",
+        "default_pipeline": "metrics-apm.app@default-pipeline",
+        "final_pipeline": "metrics-apm@pipeline"
+      }
+    }
+  },
+  "composed_of": [
+    "metrics@mappings",
+    "apm@mappings",
+    "apm@settings",
+    "metrics-apm@settings",
+    "metrics-apm.app-fallback@ilm",
+    "ecs@mappings",
+    "metrics@custom",
+    "metrics-apm.app@custom",
+    "apm-90d@lifecycle"
+  ],
+  "priority": 210,
+  "version": 101,
+  "_meta": {
+    "managed": true,
+    "description": "Index template for metrics-apm.app.*-*"
+  },
+  "data_stream": {
+    "hidden": false,
+    "allow_custom_routing": false
+  },
+  "allow_auto_create": true,
+  "ignore_missing_component_templates": [
+    "metrics@custom",
+    "metrics-apm.app@custom",
+    "metrics-apm.app-fallback@ilm"
+  ]
+}
+```
+
+**Sample Output:**
+
+```json
+{
+  "acknowledged": true
+}
+```
+
 #### c. Update Traces Index Template
 
 For traces, retrieve the current template configuration using a GET request:
diff --git a/docs/environment-watch/post-install-verification/retention-policy.md b/docs/environment-watch/post-install-verification/retention-policy.md
index 58634ae2..cd1e2d0f 100644
--- a/docs/environment-watch/post-install-verification/retention-policy.md
+++ b/docs/environment-watch/post-install-verification/retention-policy.md
@@ -33,13 +33,15 @@ GET /_data_stream/traces-apm*?filter_path=data_streams.name,data_streams.lifecyc
 
 ## Expected Results
 
-Each query should return the data stream names along with their configured lifecycle settings. The response will look similar to:
+Each query should return the data stream names along with their configured lifecycle settings.
+
+**Sample Output:**
 
 ```json
 {
   "data_streams": [
     {
-      "name": "logs-apm.app-default",
+      "name": "logs-apm.app.relsvr_logging-default",
       "lifecycle": {
         "enabled": true,
         "data_retention": "90d",

From 966d84894583fa99ae564834a302bb3bc5de58ef Mon Sep 17 00:00:00 2001
From: Dinesh Sundhararasu
Date: Fri, 19 Dec 2025 10:27:20 +0530
Subject: [PATCH 18/18] REL-1207540: TBD placeholder added for summary and conclusion in performance impact

---
 docs/environment_watch_performance_impact.md | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/docs/environment_watch_performance_impact.md b/docs/environment_watch_performance_impact.md
index 35bb6b16..84105250 100644
--- a/docs/environment_watch_performance_impact.md
+++ b/docs/environment_watch_performance_impact.md
@@ -12,10 +12,10 @@ Environment Watch has been rigorously tested to ensure minimal impact on your Re
 
 | Workload Category | Impact | Summary |
 |------------------|--------|------------------|
-| **Processing** | **+450% faster** | Processing performance has improved dramatically, delivering a 450% speed increase that will noticeably accelerate end-to-end workflows. |
+| **Processing** | TBD | TBD |
 | **Review (Conversion)** | **+5% faster** | Review operations saw a modest 5% improvement, providing slightly faster document conversion without any workflow disruption. |
 | **Imaging & Production** | **Stable (±4%)** | Imaging and production performance remained stable, with changes within a ±4% range, resulting in no meaningful impact to customer workflows. |
-| **Data Transfer** | **Mixed results** | Native file operations improved by 4–38%, offering smoother import/export performance. Image-based workflows saw some declines—most notably a 157% slowdown in RIP image export—which may impact image-heavy projects. |
+| **Data Transfer** | TBD | TBD |
 
 ## Test Environment Specifications
 
@@ -38,6 +38,5 @@ Environment Watch has been rigorously tested to ensure minimal impact on your Re
 
 This comprehensive test environment, ranging from Small to Medium scale, mirrors typical production Relativity deployments and ensures our performance results are representative of real-world customer workloads.
 
-## Conclusion
+## Conclusion (TBD)
 
-Environment Watch delivers significant performance improvements for processing workloads while maintaining stable performance for most other Relativity operations. Organizations with heavy image-based data transfer workflows should evaluate their specific use cases to ensure alignment with their performance requirements.