OpenShift is a Kubernetes platform
- infrastructure as code (IaC) with Templates
- the OpenShift REST API v3.11 is based on Kubernetes v1.14
- on top it adds, for example, the Template API
- you can write a Template from scratch or export an existing resource to YAML, JSON, ...
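A minimal Template sketch (the template name, parameter, and contained Service are illustrative, not taken from the source):

```yaml
# Hypothetical minimal Template: one parameter, one object.
apiVersion: template.openshift.io/v1
kind: Template
metadata:
  name: example-template        # illustrative name
parameters:
  - name: APP_NAME
    value: myapp
objects:
  - apiVersion: v1
    kind: Service
    metadata:
      name: ${APP_NAME}         # parameter substituted by oc process
    spec:
      ports:
        - port: 8080
      selector:
        app: ${APP_NAME}
```

It can be instantiated with `oc process -f template.yaml -p APP_NAME=demo | oc apply -f -`.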
API reference
- you can use the CLI to list all resources
$ oc api-resources
NAME           SHORTNAMES   APIGROUP                NAMESPACED   KIND
...
pods           po                                   true         Pod
imagestreams   is           image.openshift.io      true         ImageStream
templates                   template.openshift.io   true         Template
...
- the full object schema
$ oc explain templates
- find out more about an attribute
$ oc explain templates.objects
- https://kubernetes.io/docs/reference/generated/kubernetes-api/v1.14/#pod-v1-core
- https://docs.openshift.com/container-platform/3.11/rest_api/api/v1.Pod.html#object-schema
- https://docs.openshift.com/container-platform/3.11/rest_api/apis-template.openshift.io/v1.Template.html#object-schema
- odo: OpenShift Do
- developer-focused CLI for OpenShift
- get odo
curl -L https://mirror.openshift.com/pub/openshift-v4/clients/odo/latest/odo-linux-amd64 -o odo
- make it runnable
chmod +x odo
- list the developer catalog
odo catalog list components
https://github.com/kedacore/keda/wiki/Using-Keda-and-Azure-Functions-on-Openshift-4
- OCI image build tools: Buildah, Google Jib, ...
container
Containers in OpenShift Container Platform are based on OCI- or Docker-formatted container images.
(container) image, builder image / base image, custom / application image, image streams
- builder image: e.g. an application server
- custom image: e.g. the application server plus your app
- container images are stored in an image registry, e.g. Docker Hub, quay.io, registry.access.redhat.com
- OpenShift has an internal image registry
- https://blog.openshift.com/image-streams-faq/
- https://itnext.io/variations-on-imagestreams-in-openshift-4-f8ee5e8be633
OpenShift in Action
- Image streams monitor for changes and trigger new deployments and builds for applications.
- Build configs track everything required to build an application deployment.
- Deployment configs keep track of all information required to deploy an application.
- Pods are the default unit of work. They’re where your application code is served.
- Deployments are unique deployed versions of an application.
- Container images are the template used to deploy application pods.
- Services are a consistent interface for all the application pods for a deployment.
- Routes are external-facing, DNS-based load-balancer entries that are connected to services.
- Replication controllers ensure that the desired number of application pods is running at all times.
application components:
- Custom container images
- Image streams
- Application pods
- Build configs
- Deployment configs
- Deployments
- Services
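For instance, the Service component above gives all application pods of a deployment one stable address. A sketch (name, selector, and port are illustrative):

```yaml
# Hypothetical Service fronting the pods of one application.
apiVersion: v1
kind: Service
metadata:
  name: app-cli            # illustrative name
spec:
  selector:
    app: app-cli           # must match the labels on the application pods
  ports:
    - port: 8080           # port the Service exposes
      targetPort: 8080     # port the pods listen on
```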
how OpenShift creates and uses custom container images for each application.
Each application deployment in OpenShift creates a custom container image to serve your application. This image is created using the application’s source code and a custom base image called a builder image.
- OpenShift creates a custom container image using your source code and the builder image template you specified. For example, app-cli and app-gui use the PHP builder image.
- This image is uploaded to the OpenShift container image registry.
- OpenShift creates a build config to document how your application is built. This includes which image was created, the builder image used, the location of the source code, and other information.
- OpenShift creates a deployment config to control deployments and deploy and update your applications. Information in deployment configs includes the number of replicas, the upgrade method, and application-specific variables and mounted volumes.
- OpenShift creates a deployment, which represents a single deployed version of an application. Each unique application deployment is associated with your application’s deployment config component.
- The OpenShift internal load balancer is updated with an entry for the DNS record for the application. This entry will be linked to a component that’s created by Kubernetes, which we’ll get to shortly.
- OpenShift creates an image stream component. In OpenShift, an image stream monitors the builder image, deployment config, and other components for changes. If a change is detected, image streams can trigger application redeployments to reflect changes.
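The build config described in the steps above can be sketched roughly as follows (repository URL, builder image tag, and names are illustrative):

```yaml
# Hypothetical BuildConfig: source location, builder image, output image.
apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: app-cli
spec:
  source:
    git:
      uri: https://github.com/example/app-cli.git   # illustrative repo
  strategy:
    sourceStrategy:                                 # S2I build
      from:
        kind: ImageStreamTag
        name: php:7.3                               # PHP builder image; tag illustrative
  output:
    to:
      kind: ImageStreamTag
      name: app-cli:latest                          # custom application image
```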
build strategies
- Docker build
- expects a repository with a Dockerfile
- Source-to-Image (S2I) build
- builder image: injects source code into a (base/builder) container image and assembles a new image.
- OpenShift Container Platform also supplies builder images that assist with creating new images by adding your code or configuration to existing images.
- base image (docker context): A base image has FROM scratch in its Dockerfile.
- Custom build
- Pipeline build strategy (CI/CD)
- Pipeline workflows are defined in a Jenkinsfile or embedded directly in the build configuration
- https://docs.openshift.com/container-platform/4.2/builds/understanding-image-builds.html
- Override the build strategy by setting the --strategy flag to docker, pipeline, or source.
$ oc new-app /home/user/code/myapp --strategy=docker
- https://docs.openshift.com/container-platform/4.2/applications/application-life-cycle-management/creating-applications-using-cli.html#build-strategy-detection
- https://docs.openshift.com/container-platform/4.2/builds/understanding-image-builds.html#build-strategy-s2i_understanding-image-builds
- https://developer.ibm.com/tutorials/creating-your-own-source-to-image-entry-openshift/
- https://docs.docker.com/develop/develop-images/baseimages/
- https://docs.openshift.com/container-platform/4.2/openshift_images/images-understand.html#images-about_images-understand
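To illustrate the base-image remark above: the Docker docs (linked above) show a minimal base image that starts from the empty scratch image (the hello binary is a placeholder, not provided here):

```dockerfile
# Minimal base image, per the Docker base-image docs.
FROM scratch
COPY hello /
CMD ["/hello"]
```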
UBI Images
- S2I (Source-to-Image) builder images are used by OpenShift to build image streams.
- UBI images:
- GraalVM Native S2I
- Binary S2I
- https://github.com/quarkusio/quarkus-images
example
- folder:
s2i-payara/
- created with:
s2i create <imageName> <destination>
s2i create payara-builderimage s2i-payara
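s2i create typically generates a skeleton like the following (layout as described in the S2I docs linked below; details may vary by version):

```
s2i-payara/
├── Dockerfile               # defines the builder image itself
├── s2i/bin/assemble         # builds the app from the injected source
├── s2i/bin/run              # starts the application
├── s2i/bin/usage            # prints usage of the builder image
├── s2i/bin/save-artifacts   # enables incremental builds
└── test/                    # test app and test script
```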
links
- https://docs.openshift.com/container-platform/3.11/creating_images/s2i.html#s2i-scripts
- https://blog.openshift.com/create-s2i-builder-image/?extIdCarryOver=true&intcmp=7013a000002CtetAAC&sc_cid=701f2000001OH6pAAG
S2I
Let's see what the s2i tool (https://github.com/openshift/source-to-image/releases/) says:
Source-to-image (S2I) is a tool for building repeatable docker images.
A command line interface that injects and assembles source code into a docker image.
https://quarkus.io/guides/deploying-to-openshift-s2i
create app
oc new-app quay.io/quarkus/ubi-quarkus-native-s2i:19.2.1~https://github.com/quarkusio/quarkus-quickstarts.git --context-dir=getting-started --name=quarkus-quickstart-native
- the tag 19.3.0-java11 didn't work for me - https://quay.io/repository/quarkus/ubi-quarkus-native-s2i?tag=latest&tab=tags
expose service to the outside world
oc expose svc/quarkus-quickstart-native
delete all resources
oc delete all --selector app=quarkus-quickstart-native
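oc expose svc/... creates a Route behind the scenes; roughly a sketch like this (host omitted so OpenShift generates one; targetPort assumed to be the service port):

```yaml
# Rough shape of the Route that oc expose creates.
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: quarkus-quickstart-native
spec:
  to:
    kind: Service
    name: quarkus-quickstart-native   # the exposed Service
  port:
    targetPort: 8080                  # assumed service port
```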
Simple demonstration of S2I: OpenShift pulls the source (our webapp) and builds an image based on WildFly (application server).
Our webapp has an index.html page and one REST API.
- OpenShift 4 on the local machine with CodeReady Containers, KVM, ...
- CodeReady Containers isn't Minishift; it's an entirely new approach to running Kubernetes locally.
REMARK: you need a Red Hat account: https://cloud.redhat.com/openshift/install/crc/installer-provisioned
install
- you need to download the compressed image file from Red Hat (~2 GB)
https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/
- also download/copy the pull_secret; it's necessary for the installation - a good intro is this page: https://labs.consol.de/de/devops/linux/2019/11/29/codeready-containers-on-ubuntu.html
- decompress
tar -xvJf crc-linux-amd64.tar.xz
- create a symbolic link
ln -s crc-linux-1.17.0-amd64/ crc
- add the path to crc to your ~/.profile and reload it
PATH="$HOME/development/crc:$PATH"
source .profile
- finally the setup
crc setup
...
Checking if CRC bundle is cached in '$HOME/.crc'
write NetworkManager config in /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf
...
quickstart
- start:
crc start
(prompts for the access token / pull_secret) - initially around ~10 GB will be extracted
Extracting bundle: crc_libvirt_4.5.14.crcbundle ...
- stop:
crc stop
- open web console:
crc console
- openshift cli:
~/.crc/bin/oc
(add the bin directory to your PATH)
virtualbox
- removed virtualbox support: crc-org/crc#838
- no virtualbox support for linux: crc-org/crc#625 (comment)
- get the latest crc bundle (e.g. crc_virtualbox_4.2.8.crcbundle) from: https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/
- run crc:
crc start --vm-driver virtualbox --bundle path_to_system_bundle
CRC virtual machine on ubuntu
- setup based on KVM / libvirt (native hypervisor)
- stop the CodeReady Containers virtual machine and OpenShift cluster:
crc stop
crc start
for debugging: crc start --log-level debug
- you will be prompted once for an image pull secret (a personalized secret)
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, username is 'kubeadmin' and password is
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
- open web console
$ crc console
Opening the OpenShift Web Console in the default browser...
- show status
$ crc status
CRC VM: Running
OpenShift: Running (v4.2.8)
Disk Usage: 9.364GB of 32.2GB (Inside the CRC VM)
Cache Usage: 11.01GB
Cache Directory: /home/code/.crc/cache
troubleshooting 0
$ crc status
ERRO Unable to connect to the server: dial tcp: lookup api.crc.testing: no such host
- exit status 1
- ubuntu is not officially supported: Ubuntu 18.04 LTS or newer and Debian 10 or newer are not officially supported and may require manual set up of the host machine.
- a simple solution is to use NetworkManager instead of systemd-resolved: https://labs.consol.de/devops/linux/2019/11/29/codeready-containers-on-ubuntu.html
- other discussed solutions for getting crc to work on ubuntu:
troubleshooting 1
error: format of backing image of '.crc/machines/crc/crc.qcow2' was not specified in the image metadata (See https://libvirt.org/kbase/backing_chains.html for troubleshooting)')
- crc-org/crc#1596
- https://stackoverflow.com/questions/64413928/starting-codeready-container-with-libvirt-cause-format-of-backing-image-was-not
qemu-img info ~/.crc/cache/crc_libvirt_4.5.14/crc.qcow2
qemu-img rebase -f qcow2 -F qcow2 -b /home/${USER}/.crc/cache/crc_libvirt_4.5.14/crc.qcow2 /home/${USER}/.crc/machines/crc/crc.qcow2
- now you will face permission issues: either move the image to the libvirt dir OR add its path to
/etc/apparmor.d/libvirt/TEMPLATE.qemu
sudo mv /home/${USER}/.crc/machines/crc/crc.qcow2 /var/lib/libvirt/images
profile LIBVIRT_TEMPLATE flags=(attach_disconnected) {
#include <abstractions/libvirt-qemu>
/home/${USER}/.crc/cache/crc_libvirt_4.5.14/crc.qcow2 rk,
}
links
- https://code-ready.github.io/crc/
- https://github.com/code-ready/crc/releases
- https://developers.redhat.com/products/codeready-containers
- https://developers.redhat.com/openshift/local-openshift/
- https://developers.redhat.com/blog/2019/09/05/red-hat-openshift-4-on-your-laptop-introducing-red-hat-codeready-containers/
- https://libvirt.org/index.html
- https://docs.openshift.com/container-platform/4.2/welcome/index.html
- OpenShift on the local machine with CDK, Minishift, VirtualBox
about
- this description covers the setup with VirtualBox under Windows, but VirtualBox also runs on Linux and macOS.
precondition
- ensure that you have installed VirtualBox on your (local) host machine
steps to do
- our starting point: https://www.okd.io/minishift/
- download a minishift release https://github.com/minishift/minishift/releases
- add the minishift dir to PATH
- set VirtualBox as the minishift vm-driver
minishift config set vm-driver virtualbox
- start minishift
- I chose version 3.9.0 because newer versions caused trouble on my machine.
- if you omit the memory flag, 4 GB is the default value
minishift start --openshift-version v3.9.0 --memory 8G
- you can list available openshift versions with the following command
minishift openshift version list
- set the oc path to environment
minishift oc-env
- output:
SET PATH=%userprofile%\.minishift\cache\oc\v3.9.0\windows;%PATH%
REM Run this command to configure your shell:
REM @FOR /f "tokens=*" %i IN ('minishift oc-env') DO @call %i
- now you are ready to use the oc CLI
oc status
In project My Project (myproject) on server https://192.168.99.100:8443
- further commands
- stop minishift
minishift stop
- delete minishift
minishift delete
minishift delete --force --clear-cache
- access (openshift) docker
minishift ssh -- docker ps
- centos
- default login root:centos
- troubleshooting
- if you have connection problems on port 8443, ensure that the
VirtualBox DHCP
process is stopped in the task / process manager.
Error: Get https://192.168.0.20:8443/healthz/ready: dial tcp 192.168.0.20:8443: connectex: No connection could be made because the target machine actively refused it
oc create
: Create a resource by filename or stdin
oc apply
: Apply a configuration to a resource by filename or stdin.
oc new-app
: Create a new application by specifying source code, templates, and/or images
oc process
: Process template into a list of resources specified in filename or stdin
oc <command> -h
or oc <command> --help
: for more information about a given command.
oc get
: Display one or many resources
oc get template
: list templates in project
oc delete template <template>
: delete a template in project
oc delete all --selector app=<app>
: delete all resources belonging to the app
oc get is -n <namespace>
or oc get imagestreams -n <namespace>
: list image streams for a given namespace