In our recent article, we covered the new features of OpenShift 4, the Kubernetes distribution many have been eagerly waiting for. OpenShift gives you a self-service platform to create, modify, and deploy containerized applications on demand. This guide dives into the installation of OpenShift Origin (OKD) 3.x on a CentOS 7 VM.
The OpenShift development team has done a commendable job in simplifying OpenShift cluster setup. A single command is all that's required to get a running OKD local cluster.
For Ubuntu, use: How to Setup OpenShift Origin (OKD) on Ubuntu
My Setup is done on a Virtual Machine with the following hardware configurations.
- 8 vCPUs
- 32 GB RAM
- 50 GB free disk space
- CentOS 7 OS
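The sizing above can be checked with a short preflight sketch. The thresholds below simply mirror this guide's setup; they are assumptions, not OKD-enforced minimums:

```shell
# Preflight sketch: compare host resources against the sizing used in this
# guide (8 vCPUs / 32 GB RAM / 50 GB free disk are this guide's assumptions).
CPUS=$(nproc)
MEM_GB=$(awk '/MemTotal/ {printf "%d", $2/1024/1024}' /proc/meminfo)
DISK_GB=$(df -BG --output=avail / | tail -1 | tr -dc '0-9')
echo "vCPUs: ${CPUS}, RAM: ${MEM_GB} GB, free disk on /: ${DISK_GB} GB"
[ "${CPUS}" -ge 8 ]     || echo "WARN: fewer than 8 vCPUs"
[ "${MEM_GB}" -ge 31 ]  || echo "WARN: less than 32 GB RAM"
[ "${DISK_GB}" -ge 50 ] || echo "WARN: less than 50 GB free disk on /"
```

The warnings are informational only; a smaller VM will still run `oc cluster up`, just slowly.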
Follow the steps covered in the next section to deploy OpenShift Origin Local cluster on a CentOS 7 Virtual Machine.
Step 1: Update CentOS 7 system
Let’s kick off by updating our CentOS 7 machine.
sudo yum -y update
Step 2: Install and Configure Docker
OpenShift requires the Docker engine on the host machine to run containers. Install Docker on CentOS 7 using the commands below.
sudo yum install -y yum-utils device-mapper-persistent-data lvm2
sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
Add your standard user account to docker group.
sudo usermod -aG docker $USER
newgrp docker
After installing Docker, configure the Docker daemon with an insecure registry parameter of 172.30.0.0/16, the default OpenShift service network.

sudo mkdir /etc/docker /etc/containers

sudo tee /etc/containers/registries.conf << EOF
[registries.insecure]
registries = ['172.30.0.0/16']
EOF

sudo tee /etc/docker/daemon.json << EOF
{
  "insecure-registries": [
    "172.30.0.0/16"
  ]
}
EOF
We need to reload systemd and restart the Docker daemon after editing the config.
sudo systemctl daemon-reload
sudo systemctl restart docker
Enable Docker to start at boot.
$ sudo systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
Then enable IP forwarding on your system.
echo "net.ipv4.ip_forward = 1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
Step 3: Configure Firewalld
Ensure that your firewall allows containers access to the OpenShift master API (8443/tcp) and DNS (53/udp) endpoints.
DOCKER_BRIDGE=$(docker network inspect -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}" bridge)
sudo firewall-cmd --permanent --new-zone dockerc
sudo firewall-cmd --permanent --zone dockerc --add-source $DOCKER_BRIDGE
sudo firewall-cmd --permanent --zone dockerc --add-port={80,443,8443}/tcp
sudo firewall-cmd --permanent --zone dockerc --add-port={53,8053}/udp
sudo firewall-cmd --reload
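If the Docker daemon is not running when you query the bridge subnet, the inspect command prints nothing and the --add-source call fails. A small hedged fallback helps; 172.17.0.0/16 is Docker's default bridge subnet, which your host may override:

```shell
# Fall back to Docker's default bridge subnet when the inspect query
# returns an empty string (e.g. the daemon is not running yet).
DOCKER_BRIDGE=$(docker network inspect -f "{{range .IPAM.Config}}{{.Subnet}}{{end}}" bridge 2>/dev/null)
DOCKER_BRIDGE=${DOCKER_BRIDGE:-172.17.0.0/16}
echo "Using Docker bridge subnet: ${DOCKER_BRIDGE}"
```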
Step 4: Download the Linux oc binary
In this step, download the Linux oc binary archive (openshift-origin-client-tools-VERSION-linux-64bit.tar.gz) from the OpenShift Origin GitHub releases page and place the binaries in your PATH.
wget https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz
tar xvf openshift-origin-client-tools*.tar.gz
cd openshift-origin-client*/
sudo mv oc kubectl /usr/local/bin/
Verify installation of OpenShift client utility.
$ oc version
oc v3.11.0+0cbc58b
kubernetes v1.11.0+d4cacc0
features: Basic-Auth GSSAPI Kerberos SPNEGO
Step 5: Start OpenShift Origin (OKD) Local Cluster
Now bootstrap a local single server OpenShift Origin cluster by running the following command:
$ oc cluster up
The command above will:
- Start OKD Cluster listening on the local interface – 127.0.0.1:8443
- Start a web console listening on all interfaces at /console (127.0.0.1:8443)
- Launch Kubernetes system components.
- Provision a registry, router, initial templates, and a default project.
- The OpenShift cluster will run as an all-in-one container on a Docker host.
There are a number of options which can be applied when setting up OpenShift Origin; view them with:
$ oc cluster up --help
On a successful installation, you should get output similar to below.
Login to server …
Creating initial project "myproject" …
Server Information …
OpenShift server started.

The server is accessible via web console at:
    https://127.0.0.1:8443

You are logged in as:
    User:     developer
    Password: <any value>

To login as administrator:
    oc login -u system:admin
The example below uses custom options, where --public-hostname sets the address the API server is reachable on and --routing-suffix sets the default suffix for application routes:

$ oc cluster up --public-hostname=okd.example.com --routing-suffix='services.example.com'
The OpenShift Origin cluster configuration files will be located inside the ./openshift.local.clusterup directory, relative to where you ran oc cluster up.
If your cluster setup was successful, you should get a positive output for the following command.
$ oc cluster status
Web console URL: https://okd.example.com:8443/console/

Config is at host directory
Volumes are at host directory
Persistent volumes are at host directory /home/dev/openshift.local.clusterup/openshift.local.pv
Data will be discarded when cluster is destroyed
Step 6: Using OpenShift Origin Cluster
To login as an administrator, use:
$ oc login -u system:admin
Logged into "https://127.0.0.1:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-dns
    kube-proxy
    kube-public
    kube-system
    myproject
    openshift
    openshift-apiserver
    openshift-controller-manager
    openshift-core-operators
    openshift-infra
    openshift-node
    openshift-service-cert-signer
    openshift-web-console
    testproject

Using project "default".
As the system admin user, you can view information such as node status.
$ oc get nodes
NAME        STATUS    ROLES     AGE       VERSION
localhost   Ready     <none>    1h        v1.11.0+d4cacc0

$ oc get nodes -o wide
To get more detailed information about a specific node, including the reason for the current condition:
$ oc describe node localhost
To display a summary of the resources you created:
$ oc status
In project default on server https://127.0.0.1:8443

svc/docker-registry - 172.30.1.1:5000
  dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v3.11
    deployment #1 deployed 2 hours ago - 1 pod

svc/kubernetes - 172.30.0.1:443 -> 8443

svc/router - 172.30.235.156 ports 80, 443, 1936
  dc/router deploys docker.io/openshift/origin-haproxy-router:v3.11
    deployment #1 deployed 2 hours ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.
To return to the regular developer user, log in as that user:
$ oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.
Confirm if Login was successful.
$ oc whoami
developer
Let’s create a test project using oc new-project command.
$ oc new-project dev --display-name="Project1 - Dev" --description="My Dev Project"
Now using project "dev" on server "https://127.0.0.1:8443".
Using OKD Admin Console
OKD includes a web console which you can use for creation and management actions. The web console is accessible on the server IP/hostname on port 8443 via HTTPS.
You should see an OpenShift Origin window with Username and Password forms, similar to this one:
Username: developer
Password: developer
You should see a dashboard similar to below.
If you are redirected to https://127.0.0.1:8443/ when trying to access OpenShift web console, then do this:
1. Stop OpenShift Cluster
$ oc cluster down
2. Edit the OKD master configuration file.
$ vi ./openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig
Locate the line "server: https://127.0.0.1:8443", then replace 127.0.0.1 with your server's public IP address or hostname.
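The edit can also be scripted with sed. The sketch below is demonstrated on a scratch copy so it is safe to dry-run anywhere; okd.example.com is a placeholder for your real public hostname:

```shell
PUBLIC_HOST=okd.example.com   # assumption: replace with your hostname/IP
# Dry-run on a scratch copy of the kubeconfig line first:
mkdir -p /tmp/okd-demo
echo 'server: https://127.0.0.1:8443' > /tmp/okd-demo/openshift-master.kubeconfig
sed -i "s|https://127.0.0.1:8443|https://${PUBLIC_HOST}:8443|g" /tmp/okd-demo/openshift-master.kubeconfig
cat /tmp/okd-demo/openshift-master.kubeconfig
# Prints: server: https://okd.example.com:8443
```

Once the scratch run looks right, point the same sed command at the real file under ./openshift.local.clusterup.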
3. Then start cluster:
$ oc cluster up
Step 7: Deploy Test Application
We can now deploy a test application in the cluster.
1. Log in to the OpenShift cluster:
$ oc login
Authentication required for https://127.0.0.1:8443 (openshift)
Username: developer
Password: developer
Login successful.

You don't have any projects. You can try to create a new project, by running

    oc new-project <projectname>
2. Create a test project.
$ oc new-project test-project
3. Tag an application image from Docker Hub registry.
$ oc tag --source=docker openshift/deployment-example:v2 deployment-example:latest
Tag deployment-example:latest set to openshift/deployment-example:v2.
4. Deploy Application to OpenShift.
$ oc new-app deployment-example
--> Found image da61bb2 (3 years old) in image stream "test-project/deployment-example" under tag "latest" for "deployment-example"

    * This image will be deployed in deployment config "deployment-example"
    * Port 8080/tcp will be load balanced by service "deployment-example"
      * Other containers can access this service through the hostname "deployment-example"
    * WARNING: Image "test-project/deployment-example:latest" runs as the 'root' user which may not be permitted by your cluster administrator

--> Creating resources ...
    deploymentconfig.apps.openshift.io "deployment-example" created
    service "deployment-example" created
--> Success
    Application is not exposed. You can expose services to the outside world by executing one or more of the commands below:
     'oc expose svc/deployment-example'
    Run 'oc status' to view your app.
5. Show Application Deployment status.
$ oc status
In project test-project on server https://127.0.0.1:8443

svc/deployment-example - 172.30.15.201:8080
  dc/deployment-example deploys istag/deployment-example:latest
    deployment #1 deployed about a minute ago - 1 pod

2 infos identified, use 'oc status --suggest' to see details.
6. Get service detailed information.
$ oc get svc
NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
deployment-example   ClusterIP   172.30.15.201   <none>        8080/TCP   18m

$ oc describe svc deployment-example
Name:              deployment-example
Namespace:         test-project
Labels:            app=deployment-example
Annotations:       openshift.io/generated-by=OpenShiftNewApp
Selector:          app=deployment-example,deploymentconfig=deployment-example
Type:              ClusterIP
IP:                172.30.15.201
Port:              8080-tcp  8080/TCP
TargetPort:        8080/TCP
Endpoints:         172.17.0.12:8080
Session Affinity:  None
Events:            <none>
7. Test application access locally.
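A minimal local check is to curl the service ClusterIP from the OpenShift host itself. This is a hedged sketch: it queries the IP via oc when available and otherwise falls back to 172.30.15.201, the example address from the `oc get svc` output above:

```shell
# Query the ClusterIP via oc when available; otherwise fall back to the
# example address shown in this guide (an assumption about your cluster).
SVC_IP=$(oc get svc deployment-example -o jsonpath='{.spec.clusterIP}' 2>/dev/null) || true
SVC_IP=${SVC_IP:-172.30.15.201}
echo "Testing http://${SVC_IP}:8080"
# Short timeout so the check fails fast when run off-cluster:
curl -s --max-time 5 "http://${SVC_IP}:8080" || echo "service not reachable from this machine"
```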
8. Show Pod status.
$ oc get pods
NAME                         READY     STATUS    RESTARTS   AGE
deployment-example-1-vmf7t   1/1       Running   0          21m
9. Allow external access to the application.
$ oc expose service/deployment-example
route.route.openshift.io/deployment-example exposed

$ oc get routes
NAME                 HOST/PORT                                                   PATH      SERVICES             PORT       TERMINATION   WILDCARD
deployment-example   deployment-example-testproject.services.computingpost.com             deployment-example   8080-tcp                 None
10. Test external access to the application.
Open the URL shown in your browser.
Note that I have Wildcard DNS record for *.services.computingpost.com pointing to OpenShift Origin server IP address.
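If you have no wildcard DNS record yet, you can still exercise the route by sending the Host header straight to the router. This hedged sketch assumes it runs on the OpenShift host itself, where the router listens on port 80:

```shell
ROUTE_HOST=deployment-example-testproject.services.computingpost.com  # from `oc get routes`
ROUTER_IP=127.0.0.1   # assumption: running on the OpenShift host itself
if curl -s --max-time 5 -o /dev/null -H "Host: ${ROUTE_HOST}" "http://${ROUTER_IP}/"; then
  RESULT="route reachable"
else
  RESULT="router not reachable from this machine"
fi
echo "${RESULT}"
```

This works because the OpenShift router (HAProxy) selects the backend by the HTTP Host header, not by DNS.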
11. Delete the test application.
$ oc delete all -l app=deployment-example
pod "deployment-example-1-8n8sd" deleted
replicationcontroller "deployment-example-1" deleted
service "deployment-example" deleted
deploymentconfig.apps.openshift.io "deployment-example" deleted
route.route.openshift.io "deployment-example" deleted

$ oc get pods
No resources found.
Read the OpenShift Origin documentation and stay connected for more updates.