
System Requirements (Minimum)

This topic describes the requirements for installing the application on virtual machines, EC2 instances, Azure VMs, or Google Cloud instances.


Single Node

IMPORTANT

A single node is recommended only for proof-of-concept or patching environments. See the recommendations for proof-of-concept server sizing.

Multi Node Cluster

We recommend a multi-node cluster for production environments for reliability, redundancy, and performance. For maximum resilience, aim for an odd number of VMs in your cluster.

IMPORTANT

All machines should be in the same data center and subnet.


Critical Prerequisites

For each virtual machine, ensure the following (a quick verification sketch follows this list):

  • The NTP clock is in sync.
  • Static IPs are used (dynamic IPs are not supported).
  • Static hostnames are used (hostnames cannot change).
  • IP forwarding is enabled. See How to enable.
  • The Embedded Cluster is based on k0s, so all k0s external runtime dependencies apply.
  • No previous installations of Kubernetes, Docker, or containerd are present on the system.
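If you want to spot-check a few of these prerequisites before running the installer, a minimal sketch such as the one below can help. It is not part of the product; it assumes a systemd-based host where timedatectl is available, and it only covers NTP sync, IP forwarding, and leftover container runtimes.

```python
#!/usr/bin/env python3
"""Spot-check a few installer prerequisites (illustrative sketch only)."""
import shutil
import subprocess


def ntp_in_sync() -> bool:
    # timedatectl reports NTPSynchronized=yes once the system clock is NTP-synced.
    out = subprocess.run(
        ["timedatectl", "show", "--property=NTPSynchronized"],
        capture_output=True, text=True, check=True,
    ).stdout
    return "NTPSynchronized=yes" in out


def ip_forwarding_enabled() -> bool:
    # The kernel exposes the IPv4 forwarding flag under /proc.
    with open("/proc/sys/net/ipv4/ip_forward") as f:
        return f.read().strip() == "1"


def leftover_runtimes() -> list:
    # Any of these binaries on PATH suggests a previous Kubernetes/Docker/containerd install.
    return [b for b in ("kubelet", "dockerd", "containerd", "k0s") if shutil.which(b)]


if __name__ == "__main__":
    print("NTP clock in sync:     ", ntp_in_sync())
    print("IP forwarding enabled: ", ip_forwarding_enabled())
    print("Leftover runtimes:     ", leftover_runtimes() or "none")
```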

Operating System Prerequisites

Below are the operating systems supported by the platform installer.

  • Ubuntu 24.04 (x86-64)
  • RHEL 9.x (x86-64)
  • Amazon Linux 2023 (x86-64)

Storage Prerequisites

It is recommended to use an SSD for optimal performance. Symbolic links are not supported.

Partition                    Minimum Size   Description
/var/lib/embedded-cluster    60 GB          Data directory used by the cluster. Can be changed during installation.
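To confirm that the filesystem backing the data directory meets the 60 GB minimum above, a quick check along these lines can be run on each node. The path is the default from the table and may differ if you change it during installation.

```python
import os
import shutil

# Default data directory from the table above; adjust if you relocate it at install time.
DATA_DIR = "/var/lib/embedded-cluster"
MIN_BYTES = 60 * 10**9  # 60 GB minimum from the table above

# Fall back to the root filesystem if the directory has not been created yet.
path = DATA_DIR if os.path.exists(DATA_DIR) else "/"
usage = shutil.disk_usage(path)
print(f"{path}: {usage.free / 10**9:.1f} GB free, 60 GB required, ok={usage.free >= MIN_BYTES}")
```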

The installer creates the following directories:

  • /etc/cni
  • /etc/k0s
  • /opt/cni
  • /opt/containerd
  • /run/calico
  • /run/containerd
  • /run/k0s
  • /sys/fs/cgroup/kubepods
  • /sys/fs/cgroup/system.slice/containerd.service
  • /sys/fs/cgroup/system.slice/k0scontroller.service
  • /usr/libexec/k0s
  • /var/lib/calico
  • /var/lib/cni
  • /var/lib/containers
  • /var/lib/kubelet
  • /var/log/calico
  • /var/log/containers
  • /var/log/embedded-cluster
  • /var/log/pods
  • /usr/local/bin/k0s

Network Access Control List (ACL) Exceptions

iceDQ installations on servers with a restrictive network ACL need the exceptions below to properly install, license, and initiate a deployment with the platform installer.

Internal Port Requirements

For Cluster Operation

The following ports must be open and available for use by local processes running on the same node.

Ports    Protocol   Description
2379     TCP        Kubernetes etcd
7443     TCP        Kubernetes API
9099     TCP        Kubernetes CNI
10248    TCP        Kubernetes components
10257    TCP        Kubernetes components
10259    TCP        Kubernetes components
50000    TCP        LAM Port
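Before installing, you can confirm that none of these node-local ports is already taken by another process. The sketch below (an illustration, not an official tool) simply tries to bind each TCP port; a failed bind means something else is using it.

```python
import socket

# Node-local TCP ports from the table above.
LOCAL_PORTS = [2379, 7443, 9099, 10248, 10257, 10259, 50000]

for port in LOCAL_PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind(("0.0.0.0", port))  # succeeds only if no other process holds the port
            print(f"{port}/tcp free")
        except OSError:
            print(f"{port}/tcp already in use")
```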

The following ports are used for bidirectional communication between nodes. For multi-node installations, create firewall openings between nodes for these ports. For single-node installations, ensure that there are no other processes using these ports.

Ports    Protocol
2380     TCP
4789     UDP
6443     TCP
9091     TCP
9443     TCP
10249    TCP
10250    TCP
10256    TCP
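For a multi-node cluster, the firewall path between nodes for the TCP ports above can be verified with a simple connect test such as the sketch below; the peer address is a placeholder, and UDP 4789 (VXLAN) is omitted because it cannot be confirmed with a plain connect. Before installation a "connection refused" still indicates the path is open, while a timeout usually points to a firewall block.

```python
import socket

PEER_IP = "10.0.0.12"  # placeholder: address of another node in the cluster
TCP_PORTS = [2380, 6443, 9091, 9443, 10249, 10250, 10256]  # UDP 4789 omitted

for port in TCP_PORTS:
    try:
        with socket.create_connection((PEER_IP, port), timeout=3):
            print(f"{PEER_IP}:{port}/tcp open and listening")
    except ConnectionRefusedError:
        print(f"{PEER_IP}:{port}/tcp path open, nothing listening yet")
    except OSError as exc:
        print(f"{PEER_IP}:{port}/tcp blocked or unreachable ({exc})")
```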

External Port Requirements

The following ports are required for users to access the iceDQ platform over the web.

With Network Load Balancer

Ports   Protocol   Description
443     TCP        Application UI (must be mapped to target port 32222)
8800    TCP        Admin Console UI (must be mapped to target port 8800)

Without Network Load Balancer

Ports    Protocol   Description
32222    TCP        Application UI via NodePort (configurable within the range 30000–32767)
30000    TCP        Admin Console UI via NodePort
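Once the platform is installed, user-facing access can be confirmed from a workstation with a basic TCP reachability test like the one below. The hostname is a placeholder, and the ports correspond to the NodePort scenario above; substitute 443 and 8800 when a network load balancer fronts the cluster.

```python
import socket

NODE = "icedq.example.internal"  # placeholder: node address (or load balancer hostname)
PORTS = {32222: "Application UI (NodePort)", 30000: "Admin Console UI (NodePort)"}

for port, name in PORTS.items():
    try:
        with socket.create_connection((NODE, port), timeout=5):
            print(f"{name}: reachable on {NODE}:{port}")
    except OSError as exc:
        print(f"{name}: NOT reachable on {NODE}:{port} ({exc})")
```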

Outbound URL Requirements

Exception                           Purpose
registry.icedq.com                  Container images
proxy.icedq.com                     Container images
get.icedq.com                       Installation script
resource.icedq.com                  Installer license verification
icedq.azurecr.io                    Container dependency images
auth.docker.io                      Docker authentication
registry-1.docker.io                Docker registry
production.cloudflare.docker.com    Docker infrastructure
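To confirm the ACL exceptions are in place before installing, you can test outbound reachability to each host. The sketch below assumes all endpoints are reached over HTTPS (port 443), which is our assumption since the table lists hostnames rather than URLs; if an explicit proxy is in use, test through the proxy instead.

```python
import socket

# Outbound hosts from the table above; the port-443 (HTTPS) assumption is ours.
HOSTS = [
    "registry.icedq.com", "proxy.icedq.com", "get.icedq.com", "resource.icedq.com",
    "icedq.azurecr.io", "auth.docker.io", "registry-1.docker.io",
    "production.cloudflare.docker.com",
]

for host in HOSTS:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"{host}: reachable")
    except OSError as exc:
        print(f"{host}: blocked or unreachable ({exc})")
```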

External Database

The application is bundled with a PostgreSQL database. For production deployments, we recommend using an external PostgreSQL 17.x or later database server.

IMPORTANT

The embedded database is not accessible from outside the cluster.
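When pointing a production deployment at an external server, connectivity and version can be verified with a short check such as the sketch below. It assumes the psycopg2 package is installed; the host and credentials are placeholders to replace with your own.

```python
# Sketch: verify connectivity and version of an external PostgreSQL server.
# Assumes the psycopg2 package is installed; host and credentials are placeholders.
import psycopg2

conn = psycopg2.connect(
    host="db.example.internal", port=5432,
    dbname="postgres", user="icedq", password="change-me",
    connect_timeout=5,
)
# server_version is an integer such as 170002 for PostgreSQL 17.2.
print("server version:", conn.server_version)
assert conn.server_version >= 170000, "PostgreSQL 17.x or later is recommended"
conn.close()
```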