
System Requirements


Overview

In iceDQ, a multi-node setup is essential for achieving a stable, production-grade deployment. Environments must be able to tolerate node failures, support maintenance without downtime, and deliver consistent performance under varying workloads. Deploying the system across multiple virtual machines helps meet these requirements by providing built-in reliability, redundancy, and efficient load distribution.

The sections below outline the system requirements for each supported deployment environment.


Server Requirements

The server infrastructure must be provisioned to support reliable, secure, and high-performance operation of iceDQ in both non-production and production environments. Detailed specifications of minimum requirements for each deployment scenario are provided in the sections that follow.

On-premises

The following server requirements apply exclusively to On-premises deployments.

| Item | Minimum Requirements | Comments |
| --- | --- | --- |
| Cluster Size | 2 nodes | Recommended baseline for a stable two-node On-premises cluster. |
| Operating System | Ubuntu 24.04, RHEL 9.x | Supported OS versions to ensure compatibility and performance. |
| Instance Size | 8 vCPU, 32 GB | Recommended for consistent performance under typical workloads. |
| Disk | 1 TB SSD (P99 write latency of 10 ms) | Ensures fast I/O and reliable storage performance. |
| Load Balancer | 1 Load Balancer VIP | Required to provide high availability and failover support. |
| External Database | PostgreSQL v16 | Must be provisioned to host the application metadata repository. |

AWS EC2

The following server requirements apply exclusively to AWS EC2 deployments.

| Item | Minimum Requirements | Comments |
| --- | --- | --- |
| Cluster Size | 2 nodes | Recommended baseline for a stable two-node AWS EC2 cluster. |
| Operating System | Ubuntu 24.04, RHEL 9.x, Amazon Linux 2023 | Supported OS versions to ensure compatibility and performance. |
| Instance Size | m7i.2xlarge (8 vCPU, 32 GB) | Recommended for consistent performance under typical workloads. |
| Disk | 1 TB EBS (P99 write latency of 10 ms) | Ensures fast I/O and reliable storage performance. |
| Load Balancer | 1 Network Load Balancer | Required to provide high availability and failover support. |
| External Database | RDS db.m6i.xlarge (4 vCPU, 16 GB, 32 GB storage) | Must be provisioned to host the application metadata repository. |

Azure Instance

The following server requirements apply exclusively to Azure deployments.

| Item | Minimum Requirements | Comments |
| --- | --- | --- |
| Cluster Size | 2 nodes | Recommended baseline for a stable two-node Azure cluster. |
| Operating System | Ubuntu 24.04, RHEL 9.x | Supported OS versions to ensure compatibility and performance. |
| Instance Size | Standard_D8_v5 (8 vCPU, 32 GB) | Recommended for consistent performance under typical workloads. |
| Disk | 1 TB Premium SSD (P99 write latency of 10 ms) | Ensures fast I/O and reliable storage performance. |
| Load Balancer | 1 Azure Load Balancer | Required to provide high availability and failover support. |
| External Database | Azure Database for PostgreSQL v16 (D4ds_v5 with 32 GB storage) | Must be provisioned to host the application metadata repository. |

Google Cloud Instance

The following server requirements apply exclusively to Google Cloud deployments.

| Item | Minimum Requirements | Comments |
| --- | --- | --- |
| Cluster Size | 2 nodes | Recommended baseline for a stable two-node Google Cloud cluster. |
| Operating System | Ubuntu 24.04, RHEL 9.x | Supported OS versions to ensure compatibility and performance. |
| Instance Size | n4-standard-8 (8 vCPU, 32 GB) | Recommended for consistent performance under typical workloads. |
| Disk | 1 TB pd-ssd (P99 write latency of 10 ms) | Ensures fast I/O and reliable storage performance. |
| Load Balancer | 1 Google Cloud Load Balancer | Required to provide high availability and failover support. |
| External Database | Google Cloud SQL for PostgreSQL v16 | Must be provisioned to host the application metadata repository. |

Important

An external database is recommended for production deployments.
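Before installation, connectivity from each node to the external PostgreSQL repository can be confirmed with a quick query. This is a minimal sketch; the host, user, and database values are placeholders, and sslmode is shown as an assumption to be replaced with your organization's setting.

# <db-host>, <db-user>, and <db-name> are placeholders for your environment
psql "host=<db-host> port=5432 user=<db-user> dbname=<db-name> sslmode=require" -c "SELECT version();"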


Packages

Install Packages on the Server

The following packages must be installed on the server before deployment (to be installed by the customer organization).

| Package | Verification Command | Notes |
| --- | --- | --- |
| jq | dpkg -l jq (Ubuntu) or rpm -q jq (RHEL) | Required for JSON processing and script automation. |
| nfs-utils (RHEL) | rpm -q nfs-utils | Required for NFS support and mounting network storage. |
| nfs-common (Ubuntu) | dpkg -l nfs-common | Required for NFS support and mounting network storage. |
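For reference, these prerequisites can typically be installed with the distribution's package manager; adjust the commands if your organization uses internal repository mirrors.

# Ubuntu
sudo apt-get update && sudo apt-get install -y jq nfs-common

# RHEL / Amazon Linux 2023
sudo dnf install -y jq nfs-utils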

Verify Packages on the Server

The following application bundles must be present on the server (provided by the vendor organization).

| Package / Bundle | Description |
| --- | --- |
| icedq-stable.tgz | Main application bundle required for installation. |
| kots_linux_amd64.tar.gz | KOTS CLI bundle required for deployment and management. |

Note

Ensuring that all packages and bundles are installed and verified is necessary for the proper installation, configuration, and operation of iceDQ in the target environment.
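A minimal sketch for confirming that the vendor-provided bundles are present and readable, assuming they have been copied to the current directory:

ls -lh icedq-stable.tgz kots_linux_amd64.tar.gz
# Listing the archive contents verifies the files are intact, not truncated
tar -tzf icedq-stable.tgz > /dev/null && echo "icedq-stable.tgz OK"
tar -tzf kots_linux_amd64.tar.gz > /dev/null && echo "kots_linux_amd64.tar.gz OK"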


Partition

The server partitions and their sizes must comply with the specifications listed below. This configuration is mandatory for proper operation of iceDQ.

Option 1 – Multiple Partitions

| Partition | Size |
| --- | --- |
| / | 50 GB |
| /tmp | 50 GB |
| /var/lib | 250 GB |
| /var/lib/kubelet | 50 GB |
| /var/lib/containerd | 50 GB |
| /var/openebs | 500 GB |

OR

Option 2 – Single Large Partition

| Partition | Size |
| --- | --- |
| / | 1 TB |

Note

Choose Option 1 for a structured partition layout or Option 2 for a single large root partition, depending on your organization's policies.
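Once provisioning is complete, the mount points can be verified with df; the example below covers the Option 1 layout (for Option 2, check / alone).

df -h / /tmp /var/lib /var/lib/kubelet /var/lib/containerd /var/openebs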


Ports

Local Ports Requirements

The following ports are used by local processes on the same node. They must be open and free from use by other processes. No firewall openings are required for these ports.

| Port | Protocol |
| --- | --- |
| 2379 | TCP |
| 2380 | TCP |
| 4789 | UDP |
| 6443 | TCP |
| 7443 | TCP |
| 9091 | TCP |
| 9099 | TCP |
| 9443 | TCP |
| 10248 | TCP |
| 10249 | TCP |
| 10250 | TCP |
| 10256 | TCP |
| 10257 | TCP |
| 10259 | TCP |
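Before installation, you can confirm that none of these ports are already held by another process; any output from the command below indicates a conflict that must be resolved.

# Lists any listener already bound to one of the required local ports
sudo ss -tulpn | grep -E ':(2379|2380|4789|6443|7443|9091|9099|9443|10248|10249|10250|10256|10257|10259)\b'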

Multi-Node Port Requirements

The following ports are used for bidirectional communication between nodes. For multi-node installations, create firewall openings between nodes for these ports.

| Port | Protocol |
| --- | --- |
| 2380 | TCP |
| 4789 | UDP |
| 6443 | TCP |
| 9091 | TCP |
| 9443 | TCP |
| 10249 | TCP |
| 10250 | TCP |
| 10256 | TCP |
| 50000 | TCP |
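After the firewall openings are created, inter-node connectivity for the TCP ports can be spot-checked with a probe such as nc; <peer-node-ip> is a placeholder for the other node's address (4789/UDP cannot be reliably probed this way).

# Probe each required TCP port on the peer node with a 3-second timeout
for port in 2380 6443 9091 9443 10249 10250 10256 50000; do
  nc -zv -w 3 <peer-node-ip> "$port"
done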

Admin Console Port

The KOTS Admin Console requires that port 30000/TCP be open and available. Create a firewall opening for this port so that the Admin Console can be accessed by end users.

Application Port

The application is exposed on a NodePort and requires that port 32222/TCP be open and available. Create a firewall opening for this port to allow access to the application.

Local Artifact Mirror (LAM) Port

The Local Artifact Mirror (LAM) requires that port 50000/TCP be open and available.
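To confirm that the three exposed ports are reachable from a client machine, a quick TCP probe can be used; <node-ip> is a placeholder for the node's address.

nc -zv -w 3 <node-ip> 30000   # KOTS Admin Console
nc -zv -w 3 <node-ip> 32222   # Application NodePort
nc -zv -w 3 <node-ip> 50000   # Local Artifact Mirror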


Domain & TLS

Custom Domain & TLS Configuration

Using a custom domain for the application is recommended to improve manageability and accessibility. If a custom domain is used, the following items are required:

  • DNS Record: The domain’s DNS record must be pre-configured before deployment.
  • TLS Certificate: The TLS certificate must include an unencrypted .key and a .crt file in X.509 (PEM) format.

Note

Ensuring proper domain configuration and TLS certificate setup is critical for secure and reliable access to the application.

SSL Verification Commands

Use the following commands to verify that the TLS certificate and private key form a matching pair; both commands must produce the same MD5 hash.

openssl x509 -noout -modulus -in certificate.crt | openssl md5
openssl rsa -noout -modulus -in private.key | openssl md5
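The two checks can also be combined into a short script that compares the hashes directly; certificate.crt and private.key are placeholders for your actual files.

# Compare the certificate and private key moduli; the pair is valid only if the hashes match
crt_hash=$(openssl x509 -noout -modulus -in certificate.crt | openssl md5)
key_hash=$(openssl rsa -noout -modulus -in private.key | openssl md5)
if [ "$crt_hash" = "$key_hash" ]; then
  echo "OK: certificate and private key match"
else
  echo "ERROR: certificate and private key do not match"
fi

# Optionally confirm the certificate has not expired
openssl x509 -noout -enddate -in certificate.crt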

Additional Information

The following requirements and considerations must be addressed to ensure proper deployment and operation of iceDQ.

Server Access & Permissions

  • The server must be accessible over SSH using tools such as PuTTY, MobaXterm, or similar.
  • The installation account/user should have sudo or root access.
  • The server must have a static IP address.

Firewall & Network

  • The system firewall should be disabled on the node.
  • For multi-node deployments, ensure firewall openings exist between nodes; when an external database is used, ensure openings also exist between the VM/server and the database.
  • IP forwarding must be enabled on the server (see the example below).
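A typical way to enable IP forwarding on a Linux server, both immediately and persistently across reboots:

# Enable IPv4 forwarding for the current session
sudo sysctl -w net.ipv4.ip_forward=1

# Persist the setting across reboots
echo "net.ipv4.ip_forward = 1" | sudo tee /etc/sysctl.d/99-ip-forward.conf
sudo sysctl --system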

Time Synchronization

  • The server’s NTP clock must be synchronized to ensure consistent timestamps across nodes (see the check below).
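Synchronization status can be checked with timedatectl; where chrony is the NTP client, chronyc gives more detail.

# Should report "System clock synchronized: yes"
timedatectl status | grep -i synchronized
chronyc tracking   # only if chrony is installed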

Embedded Cluster File Locations

The embedded cluster creates directories and files in the following locations.

Path
/etc/cni
/etc/k0s
/opt/cni
/opt/containerd
/run/calico
/run/containerd
/run/k0s
/sys/fs/cgroup/kubepods
/sys/fs/cgroup/system.slice/containerd.service
/sys/fs/cgroup/system.slice/k0scontroller.service
/usr/libexec/k0s
/var/lib/calico
/var/lib/cni
/var/lib/containers
/var/lib/kubelet
/var/log/calico
/var/log/containers
/var/log/embedded-cluster
/var/log/pods
/usr/local/bin/k0s