
Deployment and Configuration Guide for Accops Certificate Manager and HyWorks

  • Last Updated on: 15-Sept-2025

1. Document Objective

This guide walks through deploying and configuring the following components for seamless integration in the Accops Certificate Manager and HyWorks ecosystem:

  • MongoDB Cluster: High-availability data store for certificate metadata and service information.

  • HashiCorp Vault: Certificate Authority and key-encryption service.

  • Accops Certificate Manager (API): Issuer of PFX certificates to client applications.

  • Certificate Cleanup Service: Automated removal of expired certificates.

  • Authorizer and Tenant Service: Authentication authority that issues secure bearer tokens, validates tokens, and provides request encryption/decryption.


2. Prerequisites Installation Steps

The commands below are listed per operating system:

2.1 Docker Engine

Ubuntu

  • Update repositories and install prerequisites
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg lsb-release
  • Add Docker's GPG key
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
  • Configure the Docker repository
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] \
https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
| sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
  • Install Docker packages
sudo apt-get update
sudo apt-get install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  • Enable and start the Docker service
sudo systemctl enable docker
sudo systemctl start docker
  • Verify
docker version

RHEL

  • Remove old Docker Versions (if any)
sudo yum remove -y docker \
docker-client docker-client-latest \
docker-common docker-latest docker-latest-logrotate \
docker-logrotate docker-engine
  • Install Prerequisites

sudo yum install -y yum-utils device-mapper-persistent-data lvm2

  • Configure the Docker repository

sudo yum-config-manager --add-repo https://download.docker.com/linux/rhel/docker-ce.repo

  • Install Docker packages
sudo yum makecache
sudo yum install -y docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
  • Enable and start the Docker service
sudo systemctl enable docker
sudo systemctl start docker
  • Verify
docker version

2.2 Docker Compose

  • Docker Compose v2 is included as the docker compose plugin.
# Verify Docker Compose plugin
docker compose version
# (Optional) Standalone binary installation
# curl -fsSL https://github.com/docker/compose/releases/download/v2.17.3/docker-compose-$(uname -s)-$(uname -m) -o ~/docker-compose
# chmod +x ~/docker-compose
# sudo mv ~/docker-compose /usr/local/bin/docker-compose
# docker-compose version

2.3 HashiCorp Vault CLI

Ubuntu

  • Unzip Utility

    • Ensure that the unzip utility is installed on the system. If it is not already installed, you can install it using the following command:
sudo apt-get install unzip
  • Download and extract Vault
# example for v1.20.0; adjust version as needed
wget https://releases.hashicorp.com/vault/1.20.0/vault_1.20.0_linux_amd64.zip
unzip vault_1.20.0_linux_amd64.zip
  • Install
sudo mv vault /usr/local/bin/
  • Cleanup downloaded file
rm vault_1.20.0_linux_amd64.zip
  • Verify
vault version

It should show Vault v1.20.0 or greater.

RHEL

  • Unzip Utility

    • Ensure that the unzip utility is installed on the system. If it is not already installed, you can install it using the following command:
sudo yum install -y unzip
  • Download and extract Vault
# example for v1.20.0; adjust version as needed
wget https://releases.hashicorp.com/vault/1.20.0/vault_1.20.0_linux_amd64.zip
unzip vault_1.20.0_linux_amd64.zip
  • Install
sudo mv vault /usr/bin/
  • Cleanup downloaded file
rm vault_1.20.0_linux_amd64.zip
  • Verify
vault version

It should show Vault v1.20.0 or greater.

2.4 OpenSSL

Ubuntu

sudo apt-get install -y openssl
openssl version

RHEL

sudo yum install -y openssl
openssl version

2.5 ARS with logstash

  • ARS should be configured with a Logstash pipeline.

3. Installation Preparation

Important

  • Read installation instructions carefully.

  • Each section is marked to be performed on all nodes or on a specific node, and this must be followed strictly.

  • It is encouraged to take snapshots of the servers after successfully completing various stages.

  • The following steps (3.1, 3.2, 3.3, 3.4 and 3.5) must be performed on each VM in your deployment.

3.1 Directory Setup for Deployment and Data Persistence

Perform on Node: All nodes.

Purpose

  1. Create a single base directory that will serve as the installation root directory.

  2. Use this directory to persist container data on the host VM.

  3. During installation, subdirectories (e.g., mongo/, vault/) will be created under this base directory and mounted as Docker volumes to ensure data persistence across container restarts.

Process

  • Navigate to the directory under which the root/base directory is to be placed.
cd /<directory name>
#Example: cd /opt

Note

It is recommended to create the root/base directory at a location where it cannot be deleted or accessed directly.

  • Create the base directory
sudo mkdir <directory name>
#Example: sudo mkdir acm
  • Set ownership to the installation user
sudo chown -R <username>:<username> <directory name>
#Ex. sudo chown -R acmadmin:acmadmin acm

3.2 Copy Installation Files

Perform on Node: All nodes.

  • Copy files into the base directory
scp file1 file2 <Username>@<Host IP address>:<Full Directory path>

#Example: scp env script.sh acmadmin@192.168.1.1:/opt/acm
  • Grant execute permission to all shell scripts

    • Change to the base directory & make all .sh files executable:
cd <base directory path>
#Ex. cd /opt/acm
chmod +x script.sh 

Note

If script.sh or env were edited on Windows (CRLF), convert them to Unix line endings (LF) so they run correctly on Linux:

sed -i 's/\r$//' script.sh env

3.3 Copy images and load

Perform on Node: All nodes.

  • Copy the images directory into the base directory. This images directory must contain the following .tar files:

    • acm.tar

    • acm-cleanup.tar

    • authorizer.tar

    • tenant-service.tar

  • Set permissions on the images directory

chmod -R u+rwX,g+rX,o-rwx <Directory>

#EX: chmod -R u+rwX,g+rX,o-rwx ./images
  • Load the Docker images into the system, so the images will be available locally

    • Execute the commands below one by one to load the images
docker load -i ./images/acm.tar
docker load -i ./images/acm-cleanup.tar  
docker load -i ./images/authorizer.tar  
docker load -i ./images/tenant-service.tar
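To confirm that the images loaded successfully, list the locally available images. The repository names in the filter below are only examples; use whatever names the docker load output reports on your system.

# List all locally available images
docker image ls
# Optionally filter for the expected repositories (adjust the pattern to the names reported by docker load)
docker image ls | grep -E 'acm|authorizer|tenant'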

Note

The MongoDB and Vault images will be downloaded from the internet, so please make sure your machine has an active internet connection.

3.4 Make sure that the required files are in the base directory

Perform on Node: All nodes.

  • The following files must exist in the base directory

    • env

    • script.sh

  • The images directory must exist in the base directory and must contain the following .tar files

    • acm.tar

    • acm-cleanup.tar

    • authorizer.tar

    • tenant-service.tar

  • Confirm the permissions of the .sh file

  • Command to check permissions

ls -l
  • It should have execute (x) rights
-rwxrwxr-x 1 acmadmin acmadmin 39640 Jul 16 20:01 script.sh

3.5 Configure Environment Variables

Perform on Node: All nodes.

  • Purpose

    • Define all variables required by the deployment scripts.

    • Certain environment variables are specific to each VM or node, whereas others must remain consistent across all nodes prior to executing the installation script.

    • Ensure that each VM's env file is populated correctly before running the installation script, since the script has to be set up and executed on all 3 nodes.

  • Example

    • Assume you have three VMs with the following hostnames and IP addresses; each hostname should be resolvable via DNS or a host entry (a quick resolution check is sketched after the sample env file below).

      • acc-nxt-ubuntu-docker1: 192.168.1.1

      • acc-nxt-ubuntu-docker2: 192.168.1.2

      • acc-nxt-ubuntu-docker3: 192.168.1.3

    • On node1 (192.168.1.1), your env file might look like this:

HOST_IP=192.168.1.1
NODE_HOSTNAME=acc-nxt-ubuntu-docker1
NODE_1_HOSTNAME=acc-nxt-ubuntu-docker1
NODE_2_HOSTNAME=acc-nxt-ubuntu-docker2
NODE_3_HOSTNAME=acc-nxt-ubuntu-docker3
NODE_1_IP=192.168.1.1
NODE_2_IP=192.168.1.2
NODE_3_IP=192.168.1.3
VAULT_PORT=8200
VAULT_PARTITION_DOMAIN=ACMDEMO.LOCAL
VAULT_PARTITION_NAME=acm
VAULT_TRANSIT_KEY_NAME=acm
VAULT_DEFAULT_CA_TTL=87600h
VAULT_UI_ENABLED=true
MONGO_PORT=27017
MONGO_DB_USERNAME=admin
MONGO_DB_PASSWORD=Secure@654
MONGO_DATABASE_NAME=accopsnext
AUTHORIZER_PORT=4000
TENANT_MANAGER_PORT=4001
ARS_HOST=192.168.1.4
ARS_PORT=5066
ARS_USERNAME=admin
ARS_PASSWORD=accopsars
TENANT_ID=tenant_acm
CUSTOMER_ID=customer_acm
CERTIFICATE_MANAGER_PORT=4003
ACM_VAULT_TOKENS=
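Before running the installation script, you can confirm that every hostname in the env file resolves to the expected IP from each node. A minimal check, assuming the example hostnames and IPs above:

# Each command should print the IP configured for that node in the env file
getent hosts acc-nxt-ubuntu-docker1
getent hosts acc-nxt-ubuntu-docker2
getent hosts acc-nxt-ubuntu-docker3
# If a name does not resolve, add a host entry, e.g.:
# echo "192.168.1.2 acc-nxt-ubuntu-docker2" | sudo tee -a /etc/hosts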

Environment Variables details

Environment Variable | Description | Node-specific? | Affected on update?
--- | --- | --- | ---
HOST_IP | IP address of the current node (used for Vault api_addr and TLS SAN) | Yes | Yes
NODE_HOSTNAME | Hostname for this Vault Raft/Mongo node, resolvable via DNS or host entry | Yes | Yes
NODE_1_HOSTNAME | Hostname for cluster node #1, resolvable via DNS or host entry | No | Yes
NODE_2_HOSTNAME | Hostname for cluster node #2, resolvable via DNS or host entry | No | Yes
NODE_3_HOSTNAME | Hostname for cluster node #3, resolvable via DNS or host entry | No | Yes
NODE_1_IP | IP address of cluster node #1; this should match NODE_1_HOSTNAME | No | Yes
NODE_2_IP | IP address of cluster node #2; this should match NODE_2_HOSTNAME | No | Yes
NODE_3_IP | IP address of cluster node #3; this should match NODE_3_HOSTNAME | No | Yes
VAULT_PORT | Port Vault listens on | No | Yes
VAULT_PARTITION_DOMAIN | Domain allowed in the Vault PKI role (e.g. ACMDEMO.LOCAL); this is the user's domain name that must be configured in the Vault partition | No | No
VAULT_PARTITION_NAME | Name of the Vault PKI partition/role (e.g. acm) | No | No
VAULT_TRANSIT_KEY_NAME | Name assigned to the Vault Transit key | No | No
VAULT_DEFAULT_CA_TTL | TTL for the CA that the configuration script creates by default | No | No
VAULT_UI_ENABLED | Controls whether the HashiCorp Vault user interface is enabled | No | Yes
MONGO_PORT | Port MongoDB listens on | No | Yes
MONGO_DB_USERNAME | MongoDB username you have configured; it should be the same across all nodes | No | Yes
MONGO_DB_PASSWORD | MongoDB password you have configured; it should be the same across all nodes | No | Yes
MONGO_DATABASE_NAME | Name of the application database in MongoDB | No | Yes
AUTHORIZER_PORT | Local port for the Authorizer service | No | Yes
TENANT_MANAGER_PORT | Local port for the Tenant Manager service | No | Yes
ARS_HOST | Host/IP of the Logstash/ARS endpoint | No | Yes
ARS_PORT | Port of the Logstash/ARS endpoint | No | Yes
ARS_USERNAME | Username for the ARS endpoint | No | Yes
ARS_PASSWORD | Password for the ARS endpoint | No | Yes
TENANT_ID | Logical ID for the ACM tenant in the database | No | Yes
CUSTOMER_ID | Logical ID for the customer in the database | No | Yes
CERTIFICATE_MANAGER_PORT | Local port for the Certificate Manager API | No | Yes
ACM_VAULT_TOKENS | Comma-separated list of Vault tokens and unseal keys used by the Certificate Manager (ACM) | No | Yes

  1. Why do HOST_IP and NODE_HOSTNAME need to be different on every node?

    • HOST_IP tells the script "this is my address" so that Vault's api_addr, PKI URLs and Docker-Compose bindings advertise the correct interface on each machine.

    • NODE_HOSTNAME labels the local Raft/Mongo member so that when Vault and Mongo form their clusters, each node has a unique identifier.

    • If you reused the same HOST_IP or NODE_HOSTNAME everywhere, the cluster would think multiple members occupied the same address/ID and fail to form.

  2. Why must the full list of NODE_<n>_HOSTNAME and NODE_<n>_IP entries be identical on every node?

    • Those variables (e.g. NODE_1_HOSTNAME/NODE_1_IP, NODE_2_HOSTNAME/NODE_2_IP, ...) define the complete cluster membership for both Vault and Mongo replica-set.

    • Every node needs the same view of "who my peers are" in order to initiate rs.initiate() consistently.

    • If one node's list differed, it wouldn't be able to locate or authenticate with the others.

  3. Why must all Vault-related variables be uniform across the cluster?

    • VAULT_PORT determines which port Vault listens on; clients and peers expect the same port on each address.

    • VAULT_PARTITION_DOMAIN & VAULT_PARTITION_NAME define your PKI role (domain name, naming rules); inconsistency would break certificate issuance or role lookups.

    • VAULT_TRANSIT_KEY_NAME identifies the encryption key that must be imported identically into every node's Vault so that Transit operations work seamlessly. If the keys differ, encryption and decryption will be affected because the Vault instances run as independent instances.

    • VAULT_PARTITION_DOMAIN: This should be the directory server domain to which the user belongs and from which the user comes to ACM for certificate creation. This domain name needs to be configured in the Vault role as static metadata; it is not used to restrict certificate issuance.

  4. Why must all MongoDB-related variables be the same on every node?

    • MONGO_PORT ensures each Mongo container listens on the same port so the replica-set members can reach each other.

    • MONGO_DB_USERNAME, MONGO_DB_PASSWORD and MONGO_DATABASE_NAME provide a single source of truth for authentication and database naming.

    • If credentials or DB names diverged, some nodes wouldn't be able to join the replica set or clients would connect to different logical databases.

    • MONGO_DB_USERNAME and MONGO_DB_PASSWORD must not be empty; they are used to configure the credentials when the Mongo Docker instance is created.

    • In short, these are cluster-wide settings: every node must share them exactly (a connection-string sketch illustrating this follows below).
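As an illustration of why these values must be identical everywhere, any node (or client) should be able to reach the whole replica set with a single connection string built from them. This is only a sketch: the replica set name (rs0) and authSource=admin are assumptions about how the script configures MongoDB, and special characters in the password must be percent-encoded (Secure@654 becomes Secure%40654).

# Hypothetical connectivity test using the example env values; adjust the replica set name to your deployment
mongosh "mongodb://admin:Secure%40654@acc-nxt-ubuntu-docker1:27017,acc-nxt-ubuntu-docker2:27017,acc-nxt-ubuntu-docker3:27017/accopsnext?replicaSet=rs0&authSource=admin"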

3.6 Quick Sequential Steps (Only for reference)

Important

  • These quick sequential steps are provided for reference and visualization.

  • Detailed steps are given in Section 4 and are recommended to be used.

  1. Make sure that all prerequisites are installed

  2. Go to Node 1 for configuration of services

    1. Create the base directory, copy the required files, grant appropriate permissions, load Docker images, and follow the steps from 3.1 to 3.4.

    2. Update the environment variables on Node 1 as mentioned in Section 3.5.

    3. Now run the following command. For more details, refer to Section 4.1

      • ./script.sh installation
    4. This script will prompt for Mongo credentials to configure the MongoDB instance. Please enter the credentials and remember them.

      1. These credentials should be the same across all nodes during MongoDB configuration.

      2. You must also update the same credentials in the environment file so ACM can use them.

    5. After completion of ./script.sh installation, Docker containers for all components will be running. Further configuration is required.

    6. Initialize Vault using the ./script.sh init_vault command. For more details, refer to Section 4.2

    7. This will generate the Vault root token and unseal key. Please save them for Node 1, as they will be used in later steps.

    8. Configure Vault using the ./script.sh configure_vault command. For more details, refer to Section 4.3

    9. This will prompt for the Vault token. Please enter the one generated during Vault initialization on this node. Vault tokens are node-specific.

    10. The script will then ask for a symmetric key to configure the Transit engine.

    11. Open another terminal window and generate the Transit key only on Node 1 (note it down); it will be reused across all nodes. Command to generate the key:

      • head -c 32 /dev/urandom | base64
    12. Enter the key to complete Vault configuration.

  3. Go to Node 2 for configuration of services

    1. Create the base directory, copy the required files, grant appropriate permissions, load Docker images, and follow the steps from 3.1 to 3.4.

    2. Copy the env file from Node 1 and update only the HOST_IP and NODE_HOSTNAME. The rest should remain the same as Node 1.

    3. Follow the same steps (3 to 9) from Node 1 for service configuration.

    4. The script will then ask for a symmetric key to configure the Transit engine.

    5. Enter the symmetric key that was generated on Node 1. If a different key is used, certificate encryption/decryption will not work correctly.

    6. Enter the same key and complete the configuration.

  4. Go to Node 3 for configuration of services

    1. Create the base directory, copy the required files, grant appropriate permissions, load Docker images, and follow the steps from 3.1 to 3.4.

    2. Copy the env file from Node 1 and update only the HOST_IP and NODE_HOSTNAME. The rest should remain the same as Node 1.

    3. Follow the same steps (3 to 9) from Node 1 for service configuration.

    4. The script will then ask for a symmetric key to configure the Transit engine.

    5. Enter the symmetric key that was generated on Node 1. If a different key is used, certificate encryption/decryption will not work correctly.

    6. Enter the same key and complete the configuration.

  5. At this point, you should have the following ready: Mongo credentials, Vault tokens, and unseal keys

    1. You should have 3 separate Vault tokens and unseal keys - one for each node.

    2. Construct the token string as described in Section 4.4, which will be configured in the env file for ACM.

  6. Go to Node 1 to update configuration

    1. Update the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in the env file. These should match the credentials configured on all nodes.

    2. Update ACM_VAULT_TOKENS in the env file with the token string constructed in Section 4.4 using the Vault token and unseal key.

    3. Redeploy the service using the following command. For more details, refer to Section 4.5

      ./script.sh redeploy

    4. This will redeploy all services using the updated environment variables.

  7. Go to Node 2 and repeat the same configuration as in step 6

  8. Go to Node 3 and repeat the same configuration as in step 6

  9. Ensure all services are installed and running on all nodes

    • Use the following command to verify:

      docker ps

  10. At this point, all three MongoDB instances should be running and reachable from each other over port 27017. Now let's add them to a cluster.

  11. Cluster initialization is required from a single node only. No additional configuration is needed on other nodes.

  12. Go to Node 1 and run the following command to create the MongoDB cluster. For more details, refer to Section 4.6

    ./script.sh configure_mongo

  13. MongoDB cluster setup will succeed if the command completes successfully. It may fail due to:

    1. Invalid MongoDB credentials in the env file

    2. Inconsistent credentials across MongoDB instances

    3. MongoDB instances being unreachable

  14. Now configure Authorizer and Tenant service metadata in MongoDB. This step is required only on one node (preferably Node 1), as the changes will automatically sync to other nodes due to MongoDB's HA setup. For more details, refer to Section 4.7

  15. Go to Node 1 and run the following command:

    ./script.sh configure_authorizer_tenant_db

  16. This will configure the required data and output the Service Private Key and Service ID. Please save these details, as they will be required to configure HyWorks.

  17. Certificate Manager with all components is now ready in a 3-node cluster setup.

  18. You can now configure the settings in HyWorks by referring to Section 4.8 How to get data required for HyWorks configurations.

4. Installation and Configuration Of Components

4.1 Installation of Components on All Nodes

Note

This step (Components installation) should be done on all nodes.

4.1.1 Prerequisites

  • A dedicated base directory exists on each node.

  • All required files (including script.sh and env) have been copied into the base directory.

  • The installation script has executable permissions (e.g., chmod +x script.sh).

  • Environment variables are configured per node in the env file.

  • The MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables are defined in the env file and are used to configure the MongoDB instance.

    • Please enclose the username and password values in single quotes (') when setting them in the env file.

    Note

    Do not include single (') or double (") quotes within the actual values of MONGO_DB_USERNAME and MONGO_DB_PASSWORD, as this may cause issues while accessing MongoDB.

    For special characters in the password, use percent-encoding to replace them. E.g., %40 can be used for @ (a one-line helper to compute the encoded form is shown below).
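    A quick way to compute the percent-encoded form (shown only as a convenience and assuming python3 is available on the node; any URL-encoding tool works):

# Prints the percent-encoded form of the given password, e.g. Secure@654 -> Secure%40654
python3 -c 'import urllib.parse,sys; print(urllib.parse.quote(sys.argv[1], safe=""))' 'Secure@654'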

4.1.2 Installation Procedure

  1. Connect to Each Node

    • Ensure you have SSH access and change into the base directory containing script.sh.
  2. Update the environment variables

    • Update the following environment variables in the env file as per your node.

      1. HOST_IP

      2. NODE_HOSTNAME

    • The following environment variables will remain the same on all nodes.

      1. NODE_1_HOSTNAME

      2. NODE_2_HOSTNAME

      3. NODE_3_HOSTNAME

      4. NODE_1_IP

      5. NODE_2_IP

      6. NODE_3_IP

    • Update the following MongoDB environment variables in the env file

      1. MONGO_DB_USERNAME

      2. MONGO_DB_PASSWORD

  3. Run the Installation Command

    ./script.sh installation

    Note

    This execution may ask you to enter Mongo credentials if you haven't updated the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in env. In that case, please first update the environment variables or enter the credentials correctly and restart the process.

4.1.2.1 Script Workflow Overview
  1. Update MongoDB Prerequisites

    • Gets the username and password for MongoDB (from the env file or a prompt) and configures them for the Mongo Docker container.

    • Creates the /mongo/config and /mongo/data directories for MongoDB. The /mongo/config directory contains required configuration files, such as the keyfile, while the /mongo/data directory is used to store persistent database data.

    • Sets ownership and permissions, and writes the replica set keyfile. For the MongoDB cluster, inter-node authentication requires a common key to be present on all nodes. This key is predefined within the script and will be securely copied to each node.

  2. Update Vault Prerequisites

    • Creates /vault/config, /vault/data, and /vault/tls directories for Vault. The /vault/config directory contains configuration files such as vault.hcl. The /vault/data directory is used to store Vault's persistent data. The /vault/tls directory holds the TLS certificate files specific to the node.

    • The vault.hcl file contains the configuration settings required to start the Vault node instance.

    • Sets ownership and permissions for the directories and configuration files

  3. Generate Configuration Files

    • Renders docker-compose.yml and vault.hcl from templates using the env values.

    • The docker-compose.yml file defines the configuration and deployment details of all Docker containers related to the Certificate Manager services.

  4. Start Containers

    • Executes docker compose up -d to launch MongoDB, Vault, and all dependent services.
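After the script finishes, you can confirm that the containers came up. Container names depend on the generated docker-compose.yml, so treat any names in the output as deployment-specific.

# Show running containers with their status and published ports
docker ps --format 'table {{.Names}}\t{{.Status}}\t{{.Ports}}'
# Or, from the base directory, list the services defined in the generated compose file
docker compose ps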

4.2 Initialize the Vault on all nodes

Note

This step (Vault Initialization) should be done on all nodes.

All Vault instances are independent, so each one should be initialized separately on all nodes.

The root token and unseal key generated during initialization must be collected from each node.

4.2.1 Prerequisites

Precautions

  • Record Vault Root Tokens and Unseal Key Immediately:

    • At the end of the initialization, the script will output a Vault root token and Unseal key. Save this token and unseal key immediately, associating it with the node's identifier or IP address.

    • The Vault root token and unseal key will be different for each node and will need to be set in the environment variables later, so record them against the node they belong to.

  • Irrecoverable Tokens: Vault root tokens cannot be recovered if lost. Collect and store the token from each of the three nodes in a secure location before proceeding.

Procedure

  • Initialize Vault using the command below and record the Vault unseal key and root token

    • ./script.sh init_vault

Script Workflow Overview

  • It will initialize Vault using the vault init command. The output of this command will include the Vault unseal key and root token.

  • It will unseal Vault using the unseal key.

  • The script will print the Vault unseal key and root token to the console for the user to capture.
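To confirm that a node's Vault instance is initialized and unsealed, you can query its status from that node. The address below is an assumption based on HOST_IP and VAULT_PORT, and TLS verification is skipped because each node uses its own certificate; adjust both to your environment.

# "Initialized: true" and "Sealed: false" indicate the node is ready
VAULT_ADDR=https://<HOST_IP>:8200 VAULT_SKIP_VERIFY=true vault status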

4.3 Configure the Vault on all nodes

Note

This step (Vault Configuration) should be done on all nodes.

All Vault instances are independent, so each one should be configured separately on all nodes.

During configuration, you will be prompted to provide a symmetric key for the Transit engine. Generate it on the first node and ensure the same key is used on all nodes.

Prerequisites

  • Vault initialization is complete, and you have the Vault root token available.

  • You have a 32-byte symmetric encryption key available. This key should be generated once and reused on all nodes to ensure consistent encryption and decryption across Vault nodes.

  • Use the following command to generate the symmetric key:

    • head -c 32 /dev/urandom | base64
  • The generated symmetric key will be 32 bytes, and after base64 encoding, it will be 44 characters long.

Procedure

  • Start the Vault configuration using the following command:

    • ./script.sh configure_vault
  • The script will prompt you to enter the Vault root token to log in and perform the configuration. Please enter the token collected for the current node.

  • It will then configure the PKI engine for certificate generation.

  • After configuring the PKI engine, it will prompt you to enter the symmetric key to enable Vault's Transit engine for encryption. Please provide the symmetric key generated in the Prerequisites section.

  • The Transit engine will be enabled in Vault, and the symmetric key will be imported.

Script Workflow Overview
  • The script will prompt for the Vault root token.

  • It will log in to Vault using the provided root token.

  • It will enable the PKI engine and configure the necessary components for user certificate issuance, such as adding roles/partitions with required configurations for True SSO, setting the default CA for certificate signing, etc.

  • After this, the script will prompt for the symmetric key to enable the Transit engine in Vault and import the key for data encryption and decryption purposes.

  • This Transit engine will be used by Accops Certificate Manager to securely encrypt and decrypt user certificate data.
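If you want to verify the configuration, you can log in with the node's root token and list the mounted secrets engines. The address and TLS-skip settings below are assumptions based on HOST_IP and VAULT_PORT, and the exact mount paths are determined by the script, so the listing is only indicative.

# Point the Vault CLI at this node; TLS verification is skipped because the node uses its own certificate
export VAULT_ADDR=https://<HOST_IP>:8200
export VAULT_SKIP_VERIFY=true
vault login            # paste the node's root token when prompted
vault secrets list     # a pki-type and a transit-type mount should appear in the list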

4.4 Update environment variables on all nodes

Note

This step (updating the environment variables) should be done on all nodes.

Prerequisites

  • The MongoDB container is running and you have the MongoDB username and password with you.

  • Vault is initialized and you have the Vault token and unseal key with you.

Procedure

  • Open the env file from the base directory in any editor.

  • Update the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in the env file; these are the same for all nodes and were configured during the installation phase.

  • Update ACM_VAULT_TOKENS using the Vault token and unseal key you collected from each node during the Vault initialization phase.

  • The value is constructed as a delimiter-separated token and unseal key for each node, with the per-node strings joined by commas into a single string covering all 3 nodes (a shell sketch for building this value is shown at the end of this section).

  • Follow the below format to construct the ACM_VAULT_TOKENS value:

ACM_VAULT_TOKENS='<NODE_1_TOKEN>$1|<NODE_1_UNSEAL_KEY>,<NODE_2_TOKEN>$1|<NODE_2_UNSEAL_KEY>,<NODE_3_TOKEN>$1|<NODE_3_UNSEAL_KEY>'

#EX: ACM_VAULT_TOKENS='hvs.tKLPsnsoGWZJaIj5LEruQui9$1|80tSCIP/p90CxTJli51kEyw1t9A8gHXTvVaovXHfWzs=,hvs.BvRndqFR5xVKqPa8W2qqBY8c$1|L7aXN7mXygGrML0+FV+qvoE/qzLiepH6jfsAiDEt+kM==,hvs.IcVcR5nz5RyuwVh7k7BjD5iO$1|phlTGL/FDDdOlckd5ALKgaR0lE5KAbPsOqS6uGIu01s=='

Note

!!! Very Important !!!

  • Always wrap the value of this environment variable in single quotes only, as the string contains special characters that would otherwise be interpreted. Please do not use double quotes; the script cannot handle the special characters unless the value is in single quotes.

  • The sequence of nodes must match the order defined in the environment variables. E.g., if the env file lists the nodes as node1, node2 & node3, then ACM_VAULT_TOKENS='node1_token$1|node1_unsealkey,node2_token$1|node2_unsealkey,node3_token$1|node3_unsealkey'.

    • Node Token and Unseal Keys are separated by $1| ($1 + Pipe).

    • The values of nodes are separated by comma (,).

  • The value of ACM_VAULT_TOKENS must be wrapped in single quotes.

  • Save the env file after updating above environment variables
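A minimal shell sketch for building the ACM_VAULT_TOKENS value from the per-node tokens and unseal keys; the placeholder values are hypothetical, and the single quotes around the final value are intentional:

# Replace the placeholders with the real values collected in Section 4.2, keeping the node order used in the env file
NODE1='<NODE_1_TOKEN>$1|<NODE_1_UNSEAL_KEY>'
NODE2='<NODE_2_TOKEN>$1|<NODE_2_UNSEAL_KEY>'
NODE3='<NODE_3_TOKEN>$1|<NODE_3_UNSEAL_KEY>'
# Print the line exactly as it should appear in the env file
echo "ACM_VAULT_TOKENS='${NODE1},${NODE2},${NODE3}'"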

4.5 Redeploy the service on all nodes

Note

  • Make sure the environment variables are updated on all nodes as described above with ACM_VAULT_TOKENS, MONGO_DB_USERNAME and MONGO_DB_PASSWORD.

  • Services need to be redeployed after updating the environment variables so they can use the updated configuration.

Prerequisites

  • Environment variables should be updated correctly

Procedure

  • Execute the command below from the base directory to redeploy the services

    • ./script.sh redeploy
  • This execution may ask you to enter Mongo credentials if you haven't updated the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in env.

    • In that case, please first update the environment variables or enter the credentials correctly.

4.6 Configure Mongo DB HA Cluster on Primary node only

Note

MongoDB cluster configuration must be done from Primary node only.

MongoDB runs in clustered mode, so the high availability (HA) configuration should be performed on a single (primary) node only.

Prerequisites

  • Mongo DB Container is running on all nodes

  • The MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables are updated in env

  • All nodes are reachable from each other on port 27017

Procedure

  • Execute the command below from the base directory to configure the MongoDB HA cluster
./script.sh configure_mongo
  • This script will configure the MongoDB cluster by reading the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables and the node IPs (NODE_1_IP, NODE_2_IP, NODE_3_IP).

  • This configuration can fail if the MongoDB credentials are invalid or a node IP is not reachable (a quick replica-set status check is sketched at the end of this section).

Note

  • This execution may ask you to enter Mongo credentials if you haven't updated the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in env.

    • In that case, please first update the environment variables or enter the credentials correctly and restart the process
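To verify that the replica set formed correctly, you can query its status from any node. The container name (mongo) is an assumption; use the name shown by docker ps, and note this assumes the image ships the mongosh shell. It also assumes the env file has been sourced (for example: set -a; . ./env; set +a); otherwise substitute the credentials directly.

# One member should report PRIMARY and the other two SECONDARY
docker exec -it mongo mongosh -u "$MONGO_DB_USERNAME" -p "$MONGO_DB_PASSWORD" --authenticationDatabase admin \
  --eval 'rs.status().members.forEach(m => print(m.name, m.stateStr))'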

4.7 Configure Authorizer and Tenant service metadata in mongo DB on Primary node only

Note

  • Configuration of Authorizer and Tenant service metadata in MongoDB should be performed on a single (primary) node only.
  • This will insert the metadata required by the Authorizer and Tenant service into MongoDB

  • The metadata contains a new tenant with its public and private keys, a new service with its private and public keys and its identifier, roles for the service, etc.

Prerequisites

  • Mongo DB Container is running on all nodes

  • The MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables are updated in env

Precautions

This configuration will create the Service Private Key and Service ID, which need to be saved and configured in HyWorks.

The private key will be printed on the console and saved to the file service_private.pem in the base directory. Please delete this file once you have retrieved the key and stored it securely.

The Service ID will be printed on the console; please collect it. It will not be saved in a file.

Procedure

  • Execute the command below from the base directory to configure the Authorizer and Tenant service metadata in MongoDB
./script.sh configure_authorizer_tenant_db
  • At the end of execution, this will give you the Service Private Key and Service ID. Please save them, as they will be required to configure HyWorks.

  • After collecting the Service Private Key, please delete the file service_private.pem from the base directory (one way to do this securely is sketched below).
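A minimal sketch of one way to remove the key file after you have stored the key securely; shred is part of GNU coreutils, and a plain rm is acceptable if it is not available:

# Restrict access while the file still exists, then overwrite and delete it
chmod 600 service_private.pem
shred -u service_private.pem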

Note

  • This execution may ask you to enter Mongo credentials if you haven't updated the MONGO_DB_USERNAME and MONGO_DB_PASSWORD environment variables in env.

    • In that case, please first update the environment variables or enter the credentials correctly and restart the process

4.8 How to get data required for HyWorks configurations

  1. Tenant Id, Customer Id, Partition Name:

    1. These can be picked from the env file you have configured. These settings should be the same on all nodes while configuring the components.
  2. Service Id and Service Private Key:

    1. These were output at the end of Section 4.7; they were configured only on the primary node.

5. Load Balancer Configuration

  • All Accops Certificate Manager (ACM) services are configured to run without TLS. The expectation is to offload TLS termination to the load balancer. As a result, TLS is not required on the backend services, which will operate behind the load balancer.

  • The following URLs are to be routed through the load balancer (a minimal reverse-proxy sketch follows this list):

    • Accops Certificate Manager: http://172.27.13.51:4002

    • Authorizer: http://172.27.13.51:4000

    • Tenant Service: http://172.27.13.51:4001
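As an illustration only, a minimal NGINX reverse-proxy sketch that terminates TLS and forwards plain HTTP to one backend is shown below. The server name, certificate paths, and the choice of NGINX itself are assumptions; any load balancer that offloads TLS and provides equivalent routing will do, and the backend IP and port must match your deployment (including the CERTIFICATE_MANAGER_PORT configured in the env file). Repeat the same pattern, with a separate listener or server name, for the Authorizer and Tenant Service.

server {
    listen 443 ssl;
    server_name acm.example.local;                  # hypothetical hostname
    ssl_certificate     /etc/nginx/tls/acm.crt;     # your certificate
    ssl_certificate_key /etc/nginx/tls/acm.key;     # your private key

    location / {
        proxy_pass http://172.27.13.51:4002;        # Accops Certificate Manager backend (no TLS on the backend)
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}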

6. System Flow: Accops Certificate Management, Controller, Authorization and Tenant service

System Components

  1. Accops Certificate Manager (ACM): Responsible for issuing user certificates based on authorization and identity validation.

  2. Controller: Initiates the request for certificates and handles service-level communications.

  3. Authorizer: Acts as the OAuth-like authorization server validating service identity and issuing tokens (access and auth tokens).

  4. Tenant Management Service: Maintains tenant identities, including cryptographic key pairs (public/private).

1. Initial Setup and Trust Establishment

1.1. Tenant Key Generation

  • Tenant Management Service generates a key pair (public/private).

  • The private key is securely stored and never shared.

  • The public key is distributed to trusted services (Authorizer, Controller, ACM).

2. Service Registration and Identity Bootstrap

2.1. Registering Controller with Authorizer

  • The Authorizer registers the Controller as a trusted service.

  • A key pair is generated for the Controller:

    • Private key is securely delivered to Controller by an admin.

    • Public key is stored in Authorizer's database alongside service metadata (e.g., service ID, roles).

3. Access Token Generation

3.1. Controller Requests Access Token

  • Controller proves its identity to Authorizer by:

    • Encrypting its service ID using its private key.

    • Encrypting tenant association info using tenant's public key.

  • Authorizer validates:

    • Controller's identity using its stored public key.

    • Tenant association using tenant's private key.

  • If verified, Authorizer issues a long-lived Access Token (e.g., valid for 4 hours).

4. Authorization Token Request

4.1. Controller Requests Auth Token

  • Controller uses the Access Token to request an Authorization Token from Authorizer.

  • Auth Token is a JWT (JSON Web Token):

    • Contains claims: service roles, expiry time, allowed actions.

    • Signed using the tenant's private key (stored in Authorizer).

  • Token validity: short-lived (e.g., 5 minutes). A quick way to inspect such a token's claims is sketched below.
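Because the Auth Token is a standard JWT, its claims (though not its signature) can be inspected by base64url-decoding the payload segment. The token value below is a placeholder; this is intended only for troubleshooting visibility, does not validate the token, and assumes python3 is available.

# Pretty-print the claims in the second dot-separated segment of the JWT
TOKEN='<paste the auth token here>'
python3 -c 'import base64,json,sys; p=sys.argv[1].split(".")[1]; print(json.dumps(json.loads(base64.urlsafe_b64decode(p + "=" * (-len(p) % 4))), indent=2))' "$TOKEN"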

5. Certificate Request to ACM

5.1. Controller Sends Encrypted Certificate Request

  • Sends HTTP request to ACM with:

    • JWT Auth Token in HTTP headers.

    • Request Body encrypted using tenant's public key (includes username, UPN, PFX password, expiry, etc.).

5.2. ACM Validates the Request

  • Decrypts request body using tenant's private key.

  • Validates JWT:

    • Confirms signature using tenant's public key.

    • Checks token expiration and service permissions.

6. Certificate Issuance

  • If all verifications pass:

    • ACM issues the requested certificate to Controller.
  • If token is expired or tampered:

    • Request is rejected.

7. Security Enhancements

  • End-to-End Encryption: Request body is encrypted with tenant's public key and decrypted only by ACM.

  • Token Expiry and Renewal:

    • Auth Tokens are short-lived to limit misuse.

    • Access Tokens expire every few hours; full re-authentication is required.

  • Private Key Management:

    • Private keys are never transmitted or exposed.

    • Controller receives private key once during setup.

8. Deployment Note

  • During installation, the private key is securely copied to the Controller.

  • Post-setup, the private key file should be deleted from ACM or any transient storage.

Sequence Summary

Controller → Authorizer: Prove Identity → Access Token
Controller → Authorizer: Present Access Token → Auth Token
Controller → ACM: Send Encrypted Request + Auth Token
ACM → Validates Token & Request → Issues Certificate