First, let's recall that if you follow the installation steps carefully, there will be a moment where you have to install the 'oc' CLI tool and use it to log in to your cluster via:
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
# replace <installation_directory> with the directory where you created your
# installation artefacts with the openshift-install program
$ oc whoami
which should confirm you are logged in as 'system:admin'.
From that point, finding the URL of the web console is quite easy; just use this variant:
$ oc whoami --show-console
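The output is typically a URL of this form (the exact hostname depends on your cluster name and base domain):
https://console-openshift-console.apps.<cluster_name>.<base_domain>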
There is also a file dropped by the installer in your <installation_directory>:
$ cat <installation_directory>/auth/kubeadmin-password
That last file is actually considered a security weakness and may disappear in future releases (Red Hat recommends removing that account).
Hence there is an alternative way to define a few additional 'admin' user accounts from the 'oc' command line. This is anyhow far better for sharing OpenShift cluster administration tasks with colleagues: each person uses their own identity instead of sharing the kubeadmin password, and the login method does not depend on the availability of an external IdP in case the latter is unavailable for whatever reason (you can combine several authentication methods).
The plan:
- add an 'htpasswd' identity provider to the built-in OAuth server
- grant the 'cluster-admin' role to the users so created
Please review the detailed steps in the docs linked above to understand what you are doing. Here is a brief summary.
#ensure you are properly logged in for the next 'oc' CLI commands
$ export KUBECONFIG=<installation_directory>/auth/kubeconfig
$ oc whoami
system:admin
#ensure the authentication operator is up and running
$ oc get clusteroperators
NAME VERSION AVAILABLE etc...
authentication 4.12.0 True etc...
...
#ensure authentication API pods are deployed
$ oc get pods -n openshift-authentication
NAME READY STATUS etc...
oauth-openshift-84955b4d7c-4d2dc 1/1 Running
oauth-openshift-84955b4d7c-4wx8v 1/1 Running
oauth-openshift-84955b4d7c-7pnqj 1/1 Running
# create an initial htpasswd file (if you already have one, or want to update passwords in it, omit the '-c' flag)
$ htpasswd -cB users.htpasswd <myLoginNameHere>
# you are prompted for a password twice
# repeat the command for additional users' login names
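# for example, a hypothetical second account (note the -c flag is dropped so the existing file is reused):
$ htpasswd -B users.htpasswd <anotherLoginNameHere>
$ cat users.htpasswd
# you should see one line per user, each with a bcrypt hash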
# prepare the file for inclusion as a string attribute in YAML
$ base64 -w0 users.htpasswd >users.htpasswd.b64
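# optional sanity check that the encoding round-trips (assumes GNU coreutils; no output means they match):
$ base64 -d users.htpasswd.b64 | diff - users.htpasswd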
# edit an inject-htpass-secret.yaml file with the following content
apiVersion: v1
kind: Secret
metadata:
  name: htpass-secret
  namespace: openshift-config
type: Opaque
data:
  htpasswd: 'YmVybmFyZG... you paste here between quotes the B64 content of your users.htpasswd.b64 file ... ZtQ1MwaEdDCg=='
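As a sketch of an alternative, 'oc' can also generate that YAML for you, base64 encoding included, from the same file names used above:
$ oc create secret generic htpass-secret --from-file=htpasswd=users.htpasswd \
    -n openshift-config --dry-run=client -o yaml > inject-htpass-secret.yaml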
# create or update the secret 'htpass-secret' with the new htpasswd artefact
$ oc apply -f inject-htpass-secret.yaml
If you just need to update users/passwords in an existing configuration, the above is sufficient.
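You can double-check that the secret landed as expected; for instance, 'oc extract' decodes it back to the original htpasswd content:
$ oc get secret htpass-secret -n openshift-config
$ oc extract secret/htpass-secret -n openshift-config --keys=htpasswd --to=-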
#check that you don't already have an htpasswd identity provider configured
$ oc describe oauth.config.openshift.io/cluster
# or alternatively:
$ oc edit oauth.config.openshift.io cluster
# and you should see that the Spec attribute is an empty object
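# for reference, on a fresh cluster the resource looks roughly like this (output trimmed):
$ oc get oauth cluster -o yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
  ...
spec: {}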
#Then, add the provider. Edit a config-OAuth-id-provider.yaml file as below.
# you can only customize the name for your provider, here 'htpasswd_provider'
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: htpasswd_provider
    mappingMethod: claim
    type: HTPasswd
    htpasswd:
      fileData:
        name: htpass-secret
# and apply it (this adds the htpasswd_provider, or updates it if one already exists)
$ oc apply -f config-OAuth-id-provider.yaml
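The authentication operator then rolls out new oauth-openshift pods to pick up the change; you can watch it settle with something like:
$ oc get pods -n openshift-authentication -w
# wait until the replacement oauth-openshift pods are Running
$ oc get clusteroperator authentication
# PROGRESSING should return to False once the rollout is finished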
Last, grant the 'cluster-admin' role to your users.
#each user must log in once first,
# which is the way for the authentication operator to discover that a new user exists
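# for example (a sketch; the API URL depends on your cluster, 'oc whoami --show-server' prints it):
$ oc login -u <userLoginNameHere> https://api.<cluster_name>.<base_domain>:6443
# you are prompted for the password set with htpasswd above
# note this switches the current context of your kubeconfig; switch back (or use a separate
# KUBECONFIG) so the next command still runs as system:admin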
#then, add the cluster role
$ oc adm policy add-cluster-role-to-user cluster-admin <userLoginNameHere>
#if that user is already logged in to the web console, you may see its display update instantly
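Once you have confirmed that at least one of these users can log in with cluster-admin rights, you can remove the kubeadmin account mentioned earlier; per the Red Hat docs this boils down to deleting its secret (irreversible, so make sure your new admin really works first):
$ oc delete secrets kubeadmin -n kube-system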
Enjoy local console logins!