Policy Engine Installation
Understanding Deployment Modes
Before moving to the installation steps, it is important to understand the available deployment modes and their use cases. The policy engine can be deployed in the following modes:
- Standalone mode: for quick verifications and POCs.
- 2-Node Active-Passive: provides high availability but requires manual intervention. Suitable for UAT and small deployments.
- Cluster Mode (Recommended): a 3-node setup providing high availability and fault tolerance.
Standalone Mode
This is the simplest of all the available deployment modes. As shown in Figure 1, all services needed for the policy engine are installed on a single VM.
- High availability: No. If any of the services goes down, the policy engine will not be functional.

2-Node Active-Passive Standalone mode
2-Node Active-Passive Standalone mode is a slightly better version of Standalone mode that allows for manual failover. As shown in Figure 2, two independent (standalone) VMs are prepared as Policy Engine servers with all services installed on both, but only one of the VMs is powered on at a time; the second is kept in a powered-off state.
- High availability: Manual. Whenever failover is performed, HyWorks Policy Management API fetches the latest policies from its database and publishes the data to the Policy Data API.
- To use this mode, set UPDATE_OPAL_DATA_ON_SERVICE_START to true (see Deployment Configurations).

Cluster Mode
In cluster mode, all services needed for the policy engine are installed on 3 VMs in a clustered configuration. Since all 3 VMs are active, the setup provides high availability and a degree of fault tolerance.

Deployment Steps
Below is the list of steps required to get your policy engine deployment up and ready:
- Install Docker and Docker Compose
- Get the deployment files
- Update the environment (deployment) configurations
- Update the service configurations
- Run the policy engine services
- Verify the deployment
Subsequent sections provide details of the above steps.
Docker Installation
Make sure an Internet connection is available on the VM for the Docker and Docker Compose installation. Internet access is required only for the first-time setup.
Run the commands below to install Docker and Docker Compose:
- sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- sudo dnf install docker-ce
- sudo systemctl start docker
- sudo systemctl enable docker
- sudo dnf install curl
- sudo curl -L https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m) -o /usr/local/bin/docker-compose
- sudo chmod +x /usr/local/bin/docker-compose
- sudo ln -s /usr/local/bin/docker-compose /usr/bin/docker-compose
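Once the commands above complete, you can optionally verify the installation. The commands below are standard Docker checks offered here only as a suggested sanity test, not part of the official procedure:

```bash
# Optional sanity check for the Docker installation
docker --version                    # prints the Docker client version
docker-compose --version            # prints the Docker Compose version
sudo docker run --rm hello-world    # pulls and runs a test container (needs Internet)
```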
Once you have installed docker and docker compose on all your VM(s), you will need to copy the files required for deployment to all your VMs as outlined in the next section.
Get deployment files
Download the zip file of Policy Engine, extract the files and place them in the **/opt** directory on a VM that meets the software and hardware prerequisites and has Docker installed. The directory structure will look like this:
/opt/<release_name>/automation, where <release_name> is the extracted directory, e.g., /opt/v3.6-SP1_v20250819T1700/automation
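As an illustration, the commands below sketch one way to place the files under /opt. The archive file name is a placeholder; use the actual Policy Engine zip you downloaded, and adjust the paths if your archive's internal structure differs:

```bash
# Hypothetical archive name; replace with the actual Policy Engine release zip
sudo dnf install unzip
sudo unzip policy-engine-release.zip -d /opt/
# The extracted directory should contain the automation folder, e.g.:
ls /opt/v3.6-SP1_v20250819T1700/automation
```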
Folder structure
A tree view of the folder structure of the deployment folder is shown below:

Locations of various files that you may need to use are mentioned below:
- .env, deploy.sh and cleanup.sh files are located at the root level.
- docker-compose-standalone.yml and docker-compose.yml files are in the automation/opal-script/ folder.
- Log files of all services are in the automation/opal-script/logs/ folder.
- Certificates for SSL configuration of HyWorks Policy Management API are in the automation/opal-script/nginx/certificates/ folder.
Log files
Locations of the log files of all the services are mentioned below:
- Policy Data API: automation/opal-script/logs/policy_api/
- OPAL client: automation/opal-script/logs/opal_client/
- OPAL server: automation/opal-script/logs/opal_server/
- Update listener: automation/opal-script/logs/update_listener/
- HyWorks Policy Management API: automation/opal-script/logs/hyworks_policy_manager_api/
- Nginx: automation/opal-script/nginx/error.log
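For example, to follow a service's log while troubleshooting, you can tail the corresponding directory from the copied folder. The exact log file names inside each directory may vary by release:

```bash
# Follow the Policy Data API logs (adjust the path for other services)
sudo tail -f automation/opal-script/logs/policy_api/*.log
```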
Next step is updating the deployment configurations.
Environment Configurations Before Deployment
Before proceeding to deploy the builds, the appropriate configurations must be made carefully. This section provides details of these configurations.
Updating Environment File
Configurations related to the deployment can be specified in a file called .env, located at the root level of the copied folder. If this file is not visible, make sure that the option to show hidden files is enabled in your file explorer or terminal command. Below is a brief description of all the configurations available in the .env file:
| Configuration | Values to set |
|---|---|
| CLUSTER_MODE | Cluster mode: set to 1. Standalone mode: set to 0. |
| HOST_IP | IP address of the system, where the script is being run. |
| IS_PRIMARY | Cluster deployment: set to 1 on the first machine and 0 on the other nodes. Standalone or 2-Node deployment: set to 1. |
| IS_LAST_MACHINE | Cluster deployment: set to 1 on the last node on which the script is run. Standalone or 2-Node deployment: set to 0. |
| PRIMARY_NODE_IP | Cluster deployment: on the first machine, this value is the same as HOST_IP; on the other two nodes, it is the IP address of the first machine on which the script was run. Standalone or 2-Node deployment: always the same as HOST_IP. |
| OTHER_NODE_1 OTHER_NODE_2 | Cluster deployment: these refer to the other 2 machines in the cluster. --- On the primary machine, set these to the 2 IP addresses of the non-primary nodes. --- On the 2 non-primary machines, configure only one OTHER_NODE with the IP address of the other non-primary node and leave the second OTHER_NODE empty. An OTHER_NODE left empty will automatically pick up the primary machine IP address from PRIMARY_NODE_IP. Standalone or 2-Node deployment: leave both empty. Example: primary node 192.168.1.1, first non-primary node 192.168.1.8, second non-primary node 192.168.1.9 --- Primary node: OTHER_NODE_1: 192.168.1.8, OTHER_NODE_2: 192.168.1.9 --- First non-primary: OTHER_NODE_1: 192.168.1.9, OTHER_NODE_2: (empty) --- Second non-primary: OTHER_NODE_1: 192.168.1.8, OTHER_NODE_2: (empty) |
| UPDATE_OPAL_DATA_ON_SERVICE_START | In some cases, the policies and rules data in OPAL may need to be updated when HyWorks Policy Management API starts. By default, the value is false, which means the Policy Management API will not update policies and rules data in OPAL on service start. For single-VM standalone and cluster modes, it is recommended to keep this setting at false. For a 2-Node Active-Passive Standalone setup, it is recommended to set it to true. |
| UPDATE_RETRY_COUNT | The maximum number of attempts made by HyWorks Policy Management API to update policies on service start. The default value is 3. |
| UPDATE_RETRY_INTERVAL_IN_SECONDS | This is the interval in seconds between retries. The default value is 30 seconds. |
| DATA_API_IP | Specify IP or FQDN of Policy Data API. For standalone mode, the default value (management_api) will work. For cluster mode, specify the IP or FQDN of your load balancer here. |
| DATA_API_PORT | Specify port number of Policy Data API |
| EVALUATION_API_IP | Specify IP or FQDN of Policy Evaluation API. For standalone mode, the default value (opal_client) will work. For cluster mode, specify the IP or FQDN of your load balancer. |
| EVALUATION_API_PORT | Specify port number of Policy Evaluation API |
| PRIMARY_CONTROLLER_IP | Specify IP address or hostname of the primary controller. This configuration is mandatory. |
| PRIMARY_CONTROLLER_PORT | Specify port number of the primary controller. This configuration is mandatory. |
| SECONDARY_CONTROLLER_IP | Specify IP address or hostname of the secondary controller. Leave blank if your setup does not have a secondary controller. |
| SECONDARY_CONTROLLER_PORT | Specify port number of the secondary controller. Leave blank if your setup does not have a secondary controller. |
| MQ_HOSTNAME | Specify IP address or hostname of RabbitMQ used for HA notifier. |
| MQ_PORT | Specify port number of RabbitMQ used for HA notifier. |
| MQ_USERNAME | Specify username of RabbitMQ used for HA notifier. |
| MQ_PASSWORD | Specify encrypted password of RabbitMQ used for HA notifier. Connect with Accops team to get more details of password encryption. |
EXAMPLE ENVIRONMENT FILES:
Example environment files are provided for each deployment mode: Standalone, 2-Node (both nodes use the same environment file as Standalone), and Cluster (3-Node: primary, first non-primary and second non-primary nodes).
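For reference, below is a minimal sketch of what a cluster-mode .env might look like on the primary node, using the example IP addresses from the table above and the API ports listed later in this guide (5120 and 8181). The controller address and port, RabbitMQ details and load balancer FQDN are hypothetical placeholders, and the shipped .env may contain additional settings or different defaults:

```bash
# Illustrative values only; replace with values from your own environment
CLUSTER_MODE=1
HOST_IP=192.168.1.1
IS_PRIMARY=1
IS_LAST_MACHINE=0
PRIMARY_NODE_IP=192.168.1.1
OTHER_NODE_1=192.168.1.8
OTHER_NODE_2=192.168.1.9
UPDATE_OPAL_DATA_ON_SERVICE_START=false
UPDATE_RETRY_COUNT=3
UPDATE_RETRY_INTERVAL_IN_SECONDS=30
# In cluster mode, point the Data and Evaluation APIs at your load balancer
DATA_API_IP=policy-lb.example.local
DATA_API_PORT=5120
EVALUATION_API_IP=policy-lb.example.local
EVALUATION_API_PORT=8181
PRIMARY_CONTROLLER_IP=192.168.1.50
PRIMARY_CONTROLLER_PORT=<controller-port>
SECONDARY_CONTROLLER_IP=
SECONDARY_CONTROLLER_PORT=
MQ_HOSTNAME=192.168.1.50
MQ_PORT=5672
MQ_USERNAME=<rabbitmq-user>
MQ_PASSWORD=<encrypted-password-from-Accops>
```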
Important
- Do not modify or copy environment files on Windows systems; this may corrupt the file (for example, by introducing Windows line endings) and cause deployment issues.
Optional Steps (if needed): Correcting the Environment File
Once the .env file has been modified, the following commands can be run to correct any line-ending issues in the file:
- Take a backup of the existing .env file:
cp .env .env.bak
- Run this command to preview the output:
sed 's/\r$//' .env | head
- If the preview looks correct, then run:
sed -i 's/\r$//' .env
After you have updated your deployment configurations, update the service configurations for HyWorks Policy Management API as outlined in the next section.
Service configurations
When deploying the policy engine, most of the services work well with their default values and do not need any changes; however, certain configurations of HyWorks Policy Management API need to be updated to integrate the policy engine with your HyWorks Controller.
Service configurations need to be made in the following files, depending on the deployment mode:
- For Standalone mode: make changes in docker-compose-standalone.yml
- For Cluster mode: make changes in docker-compose.yml
These files can be found inside the automation/opal-script folder inside your copied folder.
Configurations for HyWorks Policy Management API that need to be updated can be found in the above configuration files under the section called hyworks_policy_manager_api. Below is a brief description of all the settings that need to be updated:
- EdcMQ__SslEnabled: Boolean. Set it to true if the HA notifier RabbitMQ uses SSL; do not modify it if the HA notifier RabbitMQ does not use SSL.
- Adding HyWorks database server host entries: HyWorks Policy Management API uses the same database as the Controller. In some cases it may not be able to reach the database using the details received from the Controller (because the hostname cannot be resolved). For such cases, you may have to specify a hostname-to-IP mapping in the section called hyworks_policy_manager_api/extra_hosts, in the format <hostname>: <IP address>

Important
- These are the hostnames and IP addresses of the database servers, not of the HyWorks Controllers.
- If there is more than one database server, add entries for all database servers (e.g., for a HyWorks Controller 2-node active-passive deployment).

Screenshot for reference:

When you use the default configurations, HyWorks Policy Management API is exposed over SSL using self-signed certificates provided with the deployment files. If you want to use your own certificates, refer to the section Change SSL certificates.
Once all the configurations have been updated you can proceed with running the policy engine services. Steps for running policy engine services are slightly different for the two deployment modes as outlined in the next 2 sections.
Run Policy engine services
Now that all prerequisites and configurations are in place, the next step is to run the deployment scripts.
Standalone mode or 2-Node Active-Passive Standalone Mode
Running the policy engine services in Standalone or 2-Node Active-Passive Standalone mode is the same. Below are the steps:
- Log in to your VM.
- Open a terminal window.
- Navigate to the automation folder in your copied folder.
- Run the script deploy.sh by executing the command below:
sudo sh deploy.sh
- In case you need to start all containers from scratch, run the script cleanup.sh and subsequently run the script deploy.sh once again:
sudo sh cleanup.sh
sudo sh deploy.sh
Note
- For 2-Node Active-Passive Standalone mode:
  - Once both VMs are set up, one of the VMs can be powered off.
  - When failover needs to be done, power on the stand-by VM and update the Policy Manager Endpoint advanced setting in the HyWorks Management Console to point to the stand-by VM.
Cluster mode
Running the policy engine services in cluster mode is a little more involved. Below are the steps:
- Log in to all 3 VMs.
- Open a terminal window on all 3 VMs and navigate to the automation folder.
- Go to your primary server, run the script deploy.sh and wait for the script to complete:
sudo sh deploy.sh
- Go to node 1, run the script deploy.sh and wait for the script to complete:
sudo sh deploy.sh
- Go to node 2, run the script deploy.sh and wait for the script to complete:
sudo sh deploy.sh
- In case you need to start all containers from scratch, run the script cleanup.sh and subsequently run the script deploy.sh once again:
sudo sh cleanup.sh
sudo sh deploy.sh
On the first machine, the script enables clustering for the mongo containers, and every subsequent machine that runs the script joins the cluster.
The script installs the OPAL server along with its Redis backbone on every machine it runs on, and joins all the Redis instances while executing on the last machine. This way, all OPAL servers remain in sync with each other through the Redis backbone.
Now you will need to add the HyWorks Policy Management API (port: 38901), OPAL client (port: 8181) and Policy Data API (port: 5120) of all 3 VMs to your load balancer. Steps for this can be found in your load balancer's documentation.
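For illustration only, the snippet below sketches a TCP load-balancing configuration using the nginx stream module, with the example node IPs used earlier in this document. Your load balancer and its configuration syntax may differ, so treat this purely as a hedged example:

```nginx
stream {
    # Policy engine back ends on the three cluster nodes (example IPs)
    upstream policy_mgmt_api { server 192.168.1.1:38901; server 192.168.1.8:38901; server 192.168.1.9:38901; }
    upstream opal_client     { server 192.168.1.1:8181;  server 192.168.1.8:8181;  server 192.168.1.9:8181; }
    upstream policy_data_api { server 192.168.1.1:5120;  server 192.168.1.8:5120;  server 192.168.1.9:5120; }

    server { listen 38901; proxy_pass policy_mgmt_api; }  # HyWorks Policy Management API
    server { listen 8181;  proxy_pass opal_client; }      # Policy Evaluation API (OPAL client)
    server { listen 5120;  proxy_pass policy_data_api; }  # Policy Data API
}
```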
Verifying Deployment
On each node, the following command can be run to check whether all needed services are up and running:
sudo docker ps -a
It should list all the Docker containers, and all of the containers below should be present and in a running state (a narrower listing command is shown after this list):
- nginx
- hyworks_policy_manager_api
- opal_client
- opal_server
- queue_provider
- rabbitmq
- management_api
- mongodb
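To narrow the output to just container names and status, the standard Docker CLI formatting option can be used, for example:

```bash
# Show only container names and their status
sudo docker ps --format "table {{.Names}}\t{{.Status}}"
```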