
[BUG][DEPLOY] MongoDB stays in Init and never comes up #99

Open
gyliu513 opened this issue Jun 23, 2020 · 2 comments
Labels: bug (Something isn't working), deploy (specific to this repository... does not imply product specific issues)


gyliu513 commented Jun 23, 2020

Describe the bug
After deploying the hub, the multicluster-mongodb-0 pod stays in Init:1/2 and never becomes ready. The etcd-cluster pod is likewise stuck in Init:0/1, and both mcm-apiserver replicas are in CrashLoopBackOff. The pod events report MountVolume.SetUp failures: secret "multicloud-ca-cert" not found and secret "multicluster-mongodb-client-cert" not found.

To Reproduce
Deploy into the open-cluster-management namespace and watch the pods with `oc get pods -n open-cluster-management`.

Expected behavior
All pods in the open-cluster-management namespace reach Running and Ready; in particular, multicluster-mongodb-0 completes both of its init containers.

Command output

[root@ocp43-dev-inf prereqs]# oc get pods -n open-cluster-management
NAME                                                              READY   STATUS             RESTARTS   AGE
acm-controller-9dd999dcc-c42jg                                    1/1     Running            0          14m
acm-proxyserver-7c45654bf6-bg9jg                                  1/1     Running            0          14m
application-chart-ae36f-applicationui-55f468c4f8-6hcnt            1/1     Running            0          13m
cert-manager-e61c1-6cf697d5df-rhhcc                               1/1     Running            0          14m
cert-manager-webhook-03d4d-cainjector-8d76f6646-t5crm             1/1     Running            0          13m
cert-manager-webhook-74bdc8455d-jn6qf                             1/1     Running            0          13m
cluster-manager-64964fdf4f-bmhld                                  1/1     Running            0          15m
cluster-manager-64964fdf4f-vfckd                                  1/1     Running            0          15m
cluster-manager-64964fdf4f-xwk9v                                  1/1     Running            0          15m
configmap-watcher-4b90e-66c867f4f5-7rj8z                          1/1     Running            0          13m
console-chart-65124-consoleapi-56ff5dbdfb-5cslf                   1/1     Running            0          10m
console-chart-65124-consoleui-7b4fd788c6-4zrqr                    1/1     Running            0          10m
console-header-55ffb7666d-pnnp7                                   1/1     Running            0          10m
etcd-cluster-8mvj8pznpx                                           0/1     Init:0/1           0          14m
etcd-operator-558567f79d-wsczb                                    3/3     Running            0          15m
grc-7079f-grcui-7d9c8dd454-d8qf5                                  1/1     Running            0          13m
grc-7079f-grcuiapi-74896974c4-6pvj6                               1/1     Running            0          13m
grc-7079f-policy-propagator-67c7546d77-6v628                      1/1     Running            0          13m
hive-operator-6bf77bd558-v5nqd                                    1/1     Running            0          15m
klusterlet-addon-controller-5f47d9f99-dd8pw                       1/1     Running            0          13m
managedcluster-import-controller-69b69bf967-kjmw9                 1/1     Running            0          13m
management-ingress-2511c-6c7dff479c-9tzqr                         2/2     Running            0          12m
mcm-apiserver-564cb96f8d-2v4gc                                    0/1     CrashLoopBackOff   7          14m
mcm-apiserver-6f794b6df-ggm44                                     0/1     CrashLoopBackOff   6          12m
mcm-controller-8676c9b6db-gqkw2                                   1/1     Running            0          14m
mcm-webhook-98957b97f-7sdjw                                       1/1     Running            0          14m
multicluster-hub-custom-registry-64cdb758bc-7d9g7                 1/1     Running            0          16m
multicluster-mongodb-0                                            0/1     Init:1/2           0          13m
multicluster-operators-application-68445cbf88-5rxjr               4/4     Running            0          15m
multicluster-operators-hub-subscription-84c69bb5bf-h766b          1/1     Running            0          15m
multicluster-operators-standalone-subscription-55cc9d964d-c9p7q   1/1     Running            0          15m
multiclusterhub-operator-7cf7b55cc7-kh2cg                         1/1     Running            0          6m50s
multiclusterhub-repo-fdd98b94f-nwrvh                              1/1     Running            0          14m
search-operator-5c9f65c7c9-td78r                                  1/1     Running            0          10m
search-prod-798d3-redisgraph-58858bdb48-t5drs                     1/1     Running            0          10m
search-prod-798d3-search-aggregator-65c8cbcd4f-j4qpr              1/1     Running            0          10m
search-prod-798d3-search-api-6df9bd58cd-qwzkf                     1/1     Running            0          10m
search-prod-798d3-search-collector-774667b9f-68qg8                1/1     Running            0          10m
topology-30155-topology-5447cdd666-sq2vw                          1/1     Running            0          10m
topology-30155-topologyapi-5fc4c96466-h6m45                       1/1     Running            0          10m
[root@ocp43-dev-inf prereqs]# oc get pods -n open-cluster-management | grep -v Running
NAME                                                              READY   STATUS             RESTARTS   AGE
etcd-cluster-8mvj8pznpx                                           0/1     Init:0/1           0          14m
mcm-apiserver-564cb96f8d-2v4gc                                    0/1     CrashLoopBackOff   7          14m
mcm-apiserver-6f794b6df-ggm44                                     0/1     CrashLoopBackOff   6          12m
multicluster-mongodb-0                                            0/1     Init:1/2           0          13m
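As an aside, the stuck pods can also be isolated by comparing the x/y counts in the READY column rather than grepping for "Running" (which also matches Init states in some client versions). A small sketch, not from the issue, with sample rows copied from the table above; on a live cluster you would pipe `oc get pods -n open-cluster-management --no-headers` into the awk filter instead:

```shell
# Sample rows copied from the pod table above (NAME READY STATUS RESTARTS AGE).
pods='etcd-cluster-8mvj8pznpx          0/1   Init:0/1           0   14m
mcm-apiserver-564cb96f8d-2v4gc   0/1   CrashLoopBackOff   7   14m
mcm-controller-8676c9b6db-gqkw2  1/1   Running            0   14m
multicluster-mongodb-0           0/1   Init:1/2           0   13m'

# Print pods whose ready-container count is below the total container count.
printf '%s\n' "$pods" | awk '{ split($2, r, "/"); if (r[1] != r[2]) print $1, $3 }'
```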
[root@ocp43-dev-inf prereqs]# oc describe pods -n open-cluster-management etcd-cluster-8mvj8pznpx mcm-apiserver-564cb96f8d-2v4gc mcm-apiserver-6f794b6df-ggm44 multicluster-mongodb-0

Name:         etcd-cluster-8mvj8pznpx
Namespace:    open-cluster-management
Priority:     0
Node:         worker1.ocp43-dev.os.fyre.ibm.com/10.16.100.29
Start Time:   Tue, 23 Jun 2020 07:54:33 -0700
Labels:       app=etcd
              etcd_cluster=etcd-cluster
              etcd_node=etcd-cluster-8mvj8pznpx
Annotations:  etcd.version: 3.2.13
              k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.254.8.26"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.254.8.1"
                    ]
                }]
              openshift.io/scc: multicloud-scc
Status:       Pending
IP:           10.254.8.26
IPs:
  IP:           10.254.8.26
Controlled By:  EtcdCluster/etcd-cluster
Init Containers:
  check-dns:
    Container ID:  cri-o://0616c0d1d24a5d34578631732fbee767547736ec874f105423004c72b149d3c3
    Image:         busybox:1.28.0-glibc
    Image ID:      docker.io/library/busybox@sha256:0b55a30394294ab23b9afd58fab94e61a923f5834fba7ddbae7f8e0c11ba85e6
    Port:          <none>
    Host Port:     <none>
    Command:
      /bin/sh
      -c

      TIMEOUT_READY=0
      while ( ! nslookup etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc )
      do
        # If TIMEOUT_READY is 0 we should never time out and exit
        TIMEOUT_READY=$(( TIMEOUT_READY-1 ))
        if [ $TIMEOUT_READY -eq 0 ]; then
          echo "Timed out waiting for DNS entry"
          exit 1
        fi
        sleep 1
      done
    State:          Running
      Started:      Tue, 23 Jun 2020 07:55:01 -0700
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:         <none>
Containers:
  etcd:
    Container ID:
    Image:         quay.io/coreos/etcd:v3.2.13
    Image ID:
    Ports:         2380/TCP, 2379/TCP
    Host Ports:    0/TCP, 0/TCP
    Command:
      /usr/local/bin/etcd
      --data-dir=/var/etcd/data
      --name=etcd-cluster-8mvj8pznpx
      --initial-advertise-peer-urls=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2380
      --listen-peer-urls=http://0.0.0.0:2380
      --listen-client-urls=http://0.0.0.0:2379
      --advertise-client-urls=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2379
      --initial-cluster=etcd-cluster-8mvj8pznpx=http://etcd-cluster-8mvj8pznpx.etcd-cluster.open-cluster-management.svc:2380
      --initial-cluster-state=new
      --initial-cluster-token=2341bb1f-5975-444c-8aef-4c6f8ee82d83
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Liveness:       exec [/bin/sh -ec ETCDCTL_API=3 etcdctl endpoint status] delay=10s timeout=10s period=60s #success=1 #failure=3
    Readiness:      exec [/bin/sh -ec ETCDCTL_API=3 etcdctl endpoint status] delay=1s timeout=5s period=5s #success=1 #failure=3
    Environment:    <none>
    Mounts:
      /var/etcd from etcd-data (rw)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  etcd-data:
    Type:        PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:   etcd-cluster-8mvj8pznpx
    ReadOnly:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason                  Age        From                                        Message
  ----     ------                  ----       ----                                        -------
  Warning  FailedScheduling        <unknown>  default-scheduler                           pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling        <unknown>  default-scheduler                           pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               <unknown>  default-scheduler                           Successfully assigned open-cluster-management/etcd-cluster-8mvj8pznpx to worker1.ocp43-dev.os.fyre.ibm.com
  Normal   SuccessfulAttachVolume  15m        attachdetach-controller                     AttachVolume.Attach succeeded for volume "pvc-b8bdd641-d4fa-437a-b0bd-2923c1234880"
  Normal   Pulling                 14m        kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Pulling image "busybox:1.28.0-glibc"
  Normal   Pulled                  14m        kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Successfully pulled image "busybox:1.28.0-glibc"
  Normal   Created                 14m        kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Created container check-dns
  Normal   Started                 14m        kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Started container check-dns


Name:         mcm-apiserver-564cb96f8d-2v4gc
Namespace:    open-cluster-management
Priority:     0
Node:         worker1.ocp43-dev.os.fyre.ibm.com/10.16.100.29
Start Time:   Tue, 23 Jun 2020 07:54:30 -0700
Labels:       app=mcm-apiserver
              pod-template-hash=564cb96f8d
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.254.8.32"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.254.8.1"
                    ]
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.254.8.32
IPs:
  IP:           10.254.8.32
Controlled By:  ReplicaSet/mcm-apiserver-564cb96f8d
Containers:
  mcm-apiserver:
    Container ID:  cri-o://ff500f77e94a41b9971e0a039ada8ede86bb4168f55044ef01b961f76c400f68
    Image:         quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
    Image ID:      quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
    Port:          <none>
    Host Port:     <none>
    Args:
      /mcm-apiserver
      --mongo-database=mcm
      --enable-admission-plugins=HCMUserIdentity,KlusterletCA,NamespaceLifecycle
      --secure-port=6443
      --tls-cert-file=/var/run/apiserver/tls.crt
      --tls-private-key-file=/var/run/apiserver/tls.key
      --klusterlet-cafile=/var/run/klusterlet/ca.crt
      --klusterlet-certfile=/var/run/klusterlet/tls.crt
      --klusterlet-keyfile=/var/run/klusterlet/tls.key
      --http2-max-streams-per-connection=1000
      --etcd-servers=http://etcd-cluster.open-cluster-management.svc.cluster.local:2379
      --mongo-host=multicluster-mongodb
      --mongo-replicaset=rs0
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 23 Jun 2020 08:04:50 -0700
      Finished:     Tue, 23 Jun 2020 08:05:11 -0700
    Ready:          False
    Restart Count:  7
    Limits:
      memory:  2Gi
    Requests:
      cpu:      200m
      memory:   256Mi
    Liveness:   http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MONGO_USERNAME:  <set to the key 'user' in secret 'mongodb-admin'>      Optional: false
      MONGO_PASSWORD:  <set to the key 'password' in secret 'mongodb-admin'>  Optional: false
      MONGO_SSLCA:     /certs/mongodb-ca/tls.crt
      MONGO_SSLCERT:   /certs/mongodb-client/tls.crt
      MONGO_SSLKEY:    /certs/mongodb-client/tls.key
    Mounts:
      /certs/mongodb-ca from mongodb-ca-cert (rw)
      /certs/mongodb-client from mongodb-client-cert (rw)
      /var/run/apiserver from apiserver-certs (rw)
      /var/run/klusterlet from klusterlet-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from acm-foundation-sa-token-jqbzg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  apiserver-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-apiserver-self-signed-secrets
    Optional:    false
  klusterlet-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-klusterlet-self-signed-secrets
    Optional:    false
  mongodb-ca-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicloud-ca-cert
    Optional:    false
  mongodb-client-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicluster-mongodb-client-cert
    Optional:    false
  acm-foundation-sa-token-jqbzg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  acm-foundation-sa-token-jqbzg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                  From                                        Message
  ----     ------       ----                 ----                                        -------
  Normal   Scheduled    <unknown>            default-scheduler                           Successfully assigned open-cluster-management/mcm-apiserver-564cb96f8d-2v4gc to worker1.ocp43-dev.os.fyre.ibm.com
  Warning  FailedMount  14m (x8 over 15m)    kubelet, worker1.ocp43-dev.os.fyre.ibm.com  MountVolume.SetUp failed for volume "mongodb-ca-cert" : secret "multicloud-ca-cert" not found
  Warning  FailedMount  14m (x8 over 15m)    kubelet, worker1.ocp43-dev.os.fyre.ibm.com  MountVolume.SetUp failed for volume "mongodb-client-cert" : secret "multicluster-mongodb-client-cert" not found
  Warning  FailedMount  13m                  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[acm-foundation-sa-token-jqbzg apiserver-certs klusterlet-certs mongodb-ca-cert mongodb-client-cert]: timed out waiting for the condition
  Normal   Pulling      12m                  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Pulling image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
  Normal   Pulled       12m                  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Successfully pulled image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
  Normal   Created      12m                  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Created container mcm-apiserver
  Normal   Started      12m                  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Started container mcm-apiserver
  Warning  Unhealthy    12m (x2 over 12m)    kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Liveness probe failed: Get https://10.254.8.32:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy    12m (x2 over 12m)    kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Readiness probe failed: Get https://10.254.8.32:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  BackOff      5m2s (x31 over 11m)  kubelet, worker1.ocp43-dev.os.fyre.ibm.com  Back-off restarting failed container


Name:         mcm-apiserver-6f794b6df-ggm44
Namespace:    open-cluster-management
Priority:     0
Node:         worker2.ocp43-dev.os.fyre.ibm.com/10.16.100.30
Start Time:   Tue, 23 Jun 2020 07:56:02 -0700
Labels:       app=mcm-apiserver
              certmanager.k8s.io/time-restarted=2020-6-23.1456
              pod-template-hash=6f794b6df
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.254.0.38"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.254.0.1"
                    ]
                }]
              openshift.io/scc: restricted
Status:       Running
IP:           10.254.0.38
IPs:
  IP:           10.254.0.38
Controlled By:  ReplicaSet/mcm-apiserver-6f794b6df
Containers:
  mcm-apiserver:
    Container ID:  cri-o://b4b4cc14454bd0b3b7d43ac339dabbdcc9500d4aa2644004b0febae4d58bce70
    Image:         quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
    Image ID:      quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259
    Port:          <none>
    Host Port:     <none>
    Args:
      /mcm-apiserver
      --mongo-database=mcm
      --enable-admission-plugins=HCMUserIdentity,KlusterletCA,NamespaceLifecycle
      --secure-port=6443
      --tls-cert-file=/var/run/apiserver/tls.crt
      --tls-private-key-file=/var/run/apiserver/tls.key
      --klusterlet-cafile=/var/run/klusterlet/ca.crt
      --klusterlet-certfile=/var/run/klusterlet/tls.crt
      --klusterlet-keyfile=/var/run/klusterlet/tls.key
      --http2-max-streams-per-connection=1000
      --etcd-servers=http://etcd-cluster.open-cluster-management.svc.cluster.local:2379
      --mongo-host=multicluster-mongodb
      --mongo-replicaset=rs0
    State:          Running
      Started:      Tue, 23 Jun 2020 08:09:26 -0700
    Last State:     Terminated
      Reason:       Error
      Exit Code:    2
      Started:      Tue, 23 Jun 2020 08:09:14 -0700
      Finished:     Tue, 23 Jun 2020 08:09:24 -0700
    Ready:          False
    Restart Count:  8
    Limits:
      memory:  2Gi
    Requests:
      cpu:      200m
      memory:   256Mi
    Liveness:   http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MONGO_USERNAME:  <set to the key 'user' in secret 'mongodb-admin'>      Optional: false
      MONGO_PASSWORD:  <set to the key 'password' in secret 'mongodb-admin'>  Optional: false
      MONGO_SSLCA:     /certs/mongodb-ca/tls.crt
      MONGO_SSLCERT:   /certs/mongodb-client/tls.crt
      MONGO_SSLKEY:    /certs/mongodb-client/tls.key
    Mounts:
      /certs/mongodb-ca from mongodb-ca-cert (rw)
      /certs/mongodb-client from mongodb-client-cert (rw)
      /var/run/apiserver from apiserver-certs (rw)
      /var/run/klusterlet from klusterlet-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from acm-foundation-sa-token-jqbzg (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  apiserver-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-apiserver-self-signed-secrets
    Optional:    false
  klusterlet-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-klusterlet-self-signed-secrets
    Optional:    false
  mongodb-ca-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicloud-ca-cert
    Optional:    false
  mongodb-client-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicluster-mongodb-client-cert
    Optional:    false
  acm-foundation-sa-token-jqbzg:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  acm-foundation-sa-token-jqbzg
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                    From                                        Message
  ----     ------       ----                   ----                                        -------
  Normal   Scheduled    <unknown>              default-scheduler                           Successfully assigned open-cluster-management/mcm-apiserver-6f794b6df-ggm44 to worker2.ocp43-dev.os.fyre.ibm.com
  Warning  FailedMount  13m (x6 over 13m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  MountVolume.SetUp failed for volume "mongodb-client-cert" : secret "multicluster-mongodb-client-cert" not found
  Warning  Unhealthy    12m (x2 over 12m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Readiness probe failed: Get https://10.254.0.38:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Normal   Killing      12m                    kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Container mcm-apiserver failed liveness probe, will be restarted
  Normal   Pulling      12m (x3 over 13m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Pulling image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
  Normal   Pulled       12m (x3 over 12m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Successfully pulled image "quay.io/open-cluster-management/multicloud-manager@sha256:7e6fa2399ac53feda232bff542feadc4861ec03a1548c36973ccadc9f7e14259"
  Normal   Created      12m (x3 over 12m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Created container mcm-apiserver
  Normal   Started      12m (x3 over 12m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Started container mcm-apiserver
  Warning  Unhealthy    11m (x4 over 12m)      kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Liveness probe failed: Get https://10.254.0.38:6443/healthz: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Unhealthy    8m29s (x2 over 8m39s)  kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Readiness probe failed: Get https://10.254.0.38:6443/healthz: dial tcp 10.254.0.38:6443: connect: connection refused
  Warning  BackOff      3m30s (x33 over 11m)   kubelet, worker2.ocp43-dev.os.fyre.ibm.com  Back-off restarting failed container


Name:         multicluster-mongodb-0
Namespace:    open-cluster-management
Priority:     0
Node:         worker0.ocp43-dev.os.fyre.ibm.com/10.16.100.28
Start Time:   Tue, 23 Jun 2020 07:55:22 -0700
Labels:       app=multicluster-mongodb
              controller-revision-hash=multicluster-mongodb-557c8b465f
              release=multicluster-mongodb-62daa
              statefulset.kubernetes.io/pod-name=multicluster-mongodb-0
Annotations:  k8s.v1.cni.cncf.io/networks-status:
                [{
                    "name": "openshift-sdn",
                    "interface": "eth0",
                    "ips": [
                        "10.254.12.37"
                    ],
                    "dns": {},
                    "default-route": [
                        "10.254.12.1"
                    ]
                }]
              openshift.io/scc: anyuid
Status:       Pending
IP:           10.254.12.37
IPs:
  IP:           10.254.12.37
Controlled By:  StatefulSet/multicluster-mongodb
Init Containers:
  install:
    Container ID:  cri-o://9e4d0cfc80e3cebdb12181b8499fff387559e54552d72f3c8b3368481eda1daa
    Image:         quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3
    Image ID:      quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3
    Port:          <none>
    Host Port:     <none>
    Command:
      /install/install.sh
    Args:
      --work-dir=/var/lib/mongodb/work-dir
      --config-dir=/var/lib/mongodb/data/configdb
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 23 Jun 2020 07:56:41 -0700
      Finished:     Tue, 23 Jun 2020 07:56:41 -0700
    Ready:          True
    Restart Count:  0
    Limits:
      memory:  5Gi
    Requests:
      memory:     2Gi
    Environment:  <none>
    Mounts:
      /ca-readonly from ca (rw)
      /configdb-readonly from config (rw)
      /install from install (rw)
      /keydir-readonly from keydir (rw)
      /tmp from tmp-mongodb (rw)
      /var/lib/mongodb/data/configdb from configdir (rw)
      /var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
      /var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
      /var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
  bootstrap:
    Container ID:  cri-o://ac85c09ab3dc962da8a3e3b2abb263717d5fb787adf1a7b872bcad43e8d5fbd0
    Image:         quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
    Image ID:      quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
    Port:          <none>
    Host Port:     <none>
    Command:
      /var/lib/mongodb/work-dir/peer-finder
    Args:
      -on-start=/init/on-start.sh
      -service=multicluster-mongodb
    State:          Running
      Started:      Tue, 23 Jun 2020 07:57:03 -0700
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  5Gi
    Requests:
      memory:  2Gi
    Environment:
      POD_NAMESPACE:       open-cluster-management (v1:metadata.namespace)
      REPLICA_SET:         rs0
      AUTH:                true
      ADMIN_USER:          <set to the key 'user' in secret 'mongodb-admin'>      Optional: false
      ADMIN_PASSWORD:      <set to the key 'password' in secret 'mongodb-admin'>  Optional: false
      NETWORK_IP_VERSION:  ipv4
    Mounts:
      /init from init (rw)
      /tmp from tmp-mongodb (rw)
      /var/lib/mongodb/data/configdb from configdir (rw)
      /var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
      /var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
      /var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
Containers:
  multicluster-mongodb:
    Container ID:
    Image:         quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866
    Image ID:
    Port:          27017/TCP
    Host Port:     0/TCP
    Command:
      mongod
      --config=/var/lib/mongodb/data/configdb/mongod.conf
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  5Gi
    Requests:
      memory:   2Gi
    Liveness:   exec [mongo --ssl --sslCAFile=/var/lib/mongodb/data/configdb/tls.crt --sslPEMKeyFile=/var/lib/mongodb/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=30s timeout=5s period=10s #success=1 #failure=3
    Readiness:  exec [mongo --ssl --sslCAFile=/var/lib/mongodb/data/configdb/tls.crt --sslPEMKeyFile=/var/lib/mongodb/work-dir/mongo.pem --eval db.adminCommand('ping')] delay=5s timeout=1s period=10s #success=1 #failure=3
    Environment:
      AUTH:            true
      ADMIN_USER:      <set to the key 'user' in secret 'mongodb-admin'>      Optional: false
      ADMIN_PASSWORD:  <set to the key 'password' in secret 'mongodb-admin'>  Optional: false
    Mounts:
      /tmp from tmp-mongodb (rw)
      /var/lib/mongodb/data/configdb from configdir (rw)
      /var/lib/mongodb/data/db from mongodbdir (rw,path="datadir")
      /var/lib/mongodb/work-dir from mongodbdir (rw,path="workdir")
      /var/run/secrets/kubernetes.io/serviceaccount from multicluster-mongodb-token-4g4x5 (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mongodbdir:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  mongodbdir-multicluster-mongodb-0
    ReadOnly:   false
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multicluster-mongodb
    Optional:  false
  init:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multicluster-mongodb-init
    Optional:  false
  install:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      multicluster-mongodb-install
    Optional:  false
  ca:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicloud-ca-cert
    Optional:    false
  keydir:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicluster-mongodb-keyfile
    Optional:    false
  configdir:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-mongodb:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  tmp-metrics:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  multicluster-mongodb-token-4g4x5:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicluster-mongodb-token-4g4x5
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/unreachable:NoExecute
Events:
  Type     Reason                  Age                From                                        Message
  ----     ------                  ----               ----                                        -------
  Warning  FailedScheduling        <unknown>          default-scheduler                           pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Warning  FailedScheduling        <unknown>          default-scheduler                           pod has unbound immediate PersistentVolumeClaims (repeated 3 times)
  Normal   Scheduled               <unknown>          default-scheduler                           Successfully assigned open-cluster-management/multicluster-mongodb-0 to worker0.ocp43-dev.os.fyre.ibm.com
  Normal   SuccessfulAttachVolume  14m                attachdetach-controller                     AttachVolume.Attach succeeded for volume "pvc-891a979a-c1ce-4234-a016-98fde691c76f"
  Warning  FailedMount             13m (x7 over 14m)  kubelet, worker0.ocp43-dev.os.fyre.ibm.com  MountVolume.SetUp failed for volume "ca" : secret "multicloud-ca-cert" not found
  Normal   Pulling                 13m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Pulling image "quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3"
  Normal   Pulled                  12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Successfully pulled image "quay.io/open-cluster-management/multicluster-mongodb-init@sha256:904ebd15cf4074dca8d8f980433501af7037335ecaf06c79c90b3fda9a99b7e3"
  Normal   Created                 12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Created container install
  Normal   Started                 12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Started container install
  Normal   Pulling                 12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Pulling image "quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866"
  Normal   Pulled                  12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Successfully pulled image "quay.io/open-cluster-management/multicluster-mongodb@sha256:9320e0acc578efd94b6056b8be344b3e742fd0597568013187ef69ecbd077866"
  Normal   Created                 12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Created container bootstrap
  Normal   Started                 12m                kubelet, worker0.ocp43-dev.os.fyre.ibm.com  Started container bootstrap

Desktop (please complete the following information):

  • OS: Red Hat Enterprise Linux CoreOS 43.81 (OCP 4.3)
[root@ocp43-dev-inf deploy]# oc get nodes -owide
NAME                                STATUS   ROLES    AGE   VERSION   INTERNAL-IP    EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                CONTAINER-RUNTIME
master0.ocp43-dev.os.fyre.ibm.com   Ready    master   10h   v1.16.2   10.16.96.192   <none>        Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker0.ocp43-dev.os.fyre.ibm.com   Ready    worker   10h   v1.16.2   10.16.100.28   <none>        Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker1.ocp43-dev.os.fyre.ibm.com   Ready    worker   10h   v1.16.2   10.16.100.29   <none>        Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
worker2.ocp43-dev.os.fyre.ibm.com   Ready    worker   11h   v1.16.2   10.16.100.30   <none>        Red Hat Enterprise Linux CoreOS 43.81.202004130853.0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.16.5-1.dev.rhaos4.3.git91157c1.el8
  • Snapshot: 2.0.0-SNAPSHOT-2020-06-23-14-20-27


@gyliu513 gyliu513 added bug Something isn't working deploy specific to this repository... does not imply product specific issues labels Jun 23, 2020
@morningspace
Member

I saw the same issue in my env w/ this error:

MountVolume.SetUp failed for volume "ca" : secret "multicloud-ca-cert" not found

Also, I didn't see the following pods in my env:

cert-manager-e61c1-6cf697d5df-rhhcc
cert-manager-webhook-03d4d-cainjector-8d76f6646-t5crm
cert-manager-webhook-74bdc8455d-jn6qf
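
For anyone hitting the same state, a quick way to confirm it is this secret that was never created (namespace assumed to be the default `open-cluster-management` install target):

```shell
# Confirm whether cert-manager is running and whether it has issued the CA
# secret that the mongodb pod mounts as the "ca" volume.
oc get pods -n open-cluster-management | grep cert-manager
oc get secret multicloud-ca-cert -n open-cluster-management
```

If the cert-manager pods are absent, the `multicloud-ca-cert` secret will never appear and the mount will keep failing.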

@Daniel-Vaz

I have a similar problem. I installed the "Advanced Cluster Management for Kubernetes" Operator via OperatorHub, created a simple MultiClusterHub, and got stuck at this point. I followed the official Red Hat documentation.

Using OpenShift version 4.4.8 installed on vSphere UPI.

[root@helpernode ~]# oc get nodes -o wide
NAME                                 STATUS   ROLES    AGE     VERSION           INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                                                       KERNEL-VERSION                CONTAINER-RUNTIME
master0.openshiftcluster.lab.local   Ready    master   4d23h   v1.17.1+3f6f40d   10.150.176.207   10.150.176.207   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8
master1.openshiftcluster.lab.local   Ready    master   5d1h    v1.17.1+3f6f40d   10.150.176.208   10.150.176.208   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8
master2.openshiftcluster.lab.local   Ready    master   10d     v1.17.1+3f6f40d   10.150.176.209   10.150.176.209   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8
worker0.lab.local                    Ready    worker   5d1h    v1.17.1+3f6f40d   10.150.176.210   10.150.176.210   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8
worker1.openshiftcluster.lab.local   Ready    worker   10d     v1.17.1+3f6f40d   10.150.176.211   10.150.176.211   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8
worker2.lab.local                    Ready    worker   5d1h    v1.17.1+3f6f40d   10.150.176.212   10.150.176.212   Red Hat Enterprise Linux CoreOS 44.81.202006080130-0 (Ootpa)   4.18.0-147.8.1.el8_1.x86_64   cri-o://1.17.4-14.dev.rhaos4.4.gitb93af5d.el8

[root@helpernode ~]# oc version
Client Version: version.Info{Major:"4", Minor:"1+", GitVersion:"v4.1.18-201909201915+72d1bea-dirty", GitCommit:"72d1bea", GitTreeState:"dirty", BuildDate:"2019-09-21T02:11:40Z", GoVersion:"go1.11.13", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"17+", GitVersion:"v1.17.1+3f6f40d", GitCommit:"3f6f40d", GitTreeState:"clean", BuildDate:"2020-06-08T07:13:25Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}

All resources from namespace open-cluster-management.

[root@helpernode ~]# oc get all -n open-cluster-management
NAME                                                                  READY   STATUS              RESTARTS   AGE
pod/application-chart-5a30b-applicationui-75b7656f8b-tpznc            1/1     Running             0          2d5h
pod/configmap-watcher-708df-766b45dc94-zfzv8                          1/1     Running             0          2d5h
pod/etcd-operator-5f96987979-bkv7m                                    3/3     Running             0          2d5h
pod/hive-operator-f87cb8795-9hbqq                                     1/1     Running             0          2d5h
pod/mcm-apiserver-564dfd455-44pqg                                     0/1     ContainerCreating   0          32h
pod/mcm-controller-5f875c7695-zpxxn                                   1/1     Running             0          2d5h
pod/mcm-webhook-5978645464-nwr4h                                      1/1     Running             0          2d5h
pod/multicluster-operators-application-7c8dbb89c5-8x65f               4/4     Running             0          2d5h
pod/multicluster-operators-hub-subscription-96c947f4-zlw8w            1/1     Running             0          2d5h
pod/multicluster-operators-standalone-subscription-657746c9d5-8g9vw   1/1     Running             4          2d5h
pod/multiclusterhub-operator-6b747fc95-lnxlw                          1/1     Running             0          32h
pod/multiclusterhub-repo-794c964dcf-l8hfl                             1/1     Running             0          2d5h

NAME                                                             TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
service/application-chart-5a30b-applicationui                    ClusterIP   172.30.59.166    <none>        3001/TCP            2d5h
service/etcd-restore-operator                                    ClusterIP   172.30.3.165     <none>        19999/TCP           2d5h
service/mcm-apiserver                                            ClusterIP   172.30.61.5      <none>        443/TCP             2d5h
service/mcm-webhook                                              ClusterIP   172.30.29.252    <none>        443/TCP             2d5h
service/multicluster-operators-application-metrics               ClusterIP   172.30.35.242    <none>        8386/TCP,8689/TCP   2d5h
service/multicluster-operators-channel-metrics                   ClusterIP   172.30.28.246    <none>        8384/TCP,8687/TCP   2d5h
service/multicluster-operators-deployable-metrics                ClusterIP   172.30.125.23    <none>        8385/TCP,8688/TCP   2d5h
service/multicluster-operators-hub-subscription-metrics          ClusterIP   172.30.55.92     <none>        8383/TCP,8686/TCP   2d5h
service/multicluster-operators-placementrule-metrics             ClusterIP   172.30.146.248   <none>        8383/TCP,8686/TCP   2d5h
service/multicluster-operators-standalone-subscription-metrics   ClusterIP   172.30.84.10     <none>        8383/TCP,8686/TCP   2d5h
service/multiclusterhub-operator-metrics                         ClusterIP   172.30.81.143    <none>        8383/TCP,8686/TCP   32h
service/multiclusterhub-operator-webhook                         ClusterIP   172.30.220.212   <none>        443/TCP             32h
service/multiclusterhub-repo                                     ClusterIP   172.30.40.158    <none>        3000/TCP            2d5h

NAME                                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/application-chart-5a30b-applicationui            1/1     1            1           2d5h
deployment.apps/configmap-watcher-708df                          1/1     1            1           2d5h
deployment.apps/etcd-operator                                    1/1     1            1           2d5h
deployment.apps/hive-operator                                    1/1     1            1           2d5h
deployment.apps/mcm-apiserver                                    0/1     1            0           2d5h
deployment.apps/mcm-controller                                   1/1     1            1           2d5h
deployment.apps/mcm-webhook                                      1/1     1            1           2d5h
deployment.apps/multicluster-operators-application               1/1     1            1           2d5h
deployment.apps/multicluster-operators-hub-subscription          1/1     1            1           2d5h
deployment.apps/multicluster-operators-standalone-subscription   1/1     1            1           2d5h
deployment.apps/multiclusterhub-operator                         1/1     1            1           2d5h
deployment.apps/multiclusterhub-repo                             1/1     1            1           2d5h

NAME                                                                        DESIRED   CURRENT   READY   AGE
replicaset.apps/application-chart-5a30b-applicationui-75b7656f8b            1         1         1       2d5h
replicaset.apps/configmap-watcher-708df-766b45dc94                          1         1         1       2d5h
replicaset.apps/etcd-operator-5f96987979                                    1         1         1       2d5h
replicaset.apps/hive-operator-f87cb8795                                     1         1         1       2d5h
replicaset.apps/mcm-apiserver-564dfd455                                     1         1         0       2d5h
replicaset.apps/mcm-controller-5f875c7695                                   1         1         1       2d5h
replicaset.apps/mcm-webhook-5978645464                                      1         1         1       2d5h
replicaset.apps/multicluster-operators-application-7c8dbb89c5               1         1         1       2d5h
replicaset.apps/multicluster-operators-hub-subscription-96c947f4            1         1         1       2d5h
replicaset.apps/multicluster-operators-standalone-subscription-657746c9d5   1         1         1       2d5h
replicaset.apps/multiclusterhub-operator-6b747fc95                          1         1         1       32h
replicaset.apps/multiclusterhub-repo-794c964dcf                             1         1         1       2d5h

Failing Pod Description:

[root@helpernode ~]# oc describe pod mcm-apiserver-564dfd455-44pqg
Name:               mcm-apiserver-564dfd455-44pqg
Namespace:          open-cluster-management
Priority:           0
PriorityClassName:  <none>
Node:               worker2.lab.local/10.150.176.212
Start Time:         Sat, 27 Jun 2020 07:13:02 +0100
Labels:             app=mcm-apiserver
                    pod-template-hash=564dfd455
Annotations:        openshift.io/scc: anyuid
Status:             Pending
IP:
Controlled By:      ReplicaSet/mcm-apiserver-564dfd455
Containers:
  mcm-apiserver:
    Container ID:
    Image:         registry.redhat.io/rhacm1-tech-preview/multicloud-manager-rhel8@sha256:a8fb58443c3177e5ac78933a35198a75b7746396d68f477d484b4e0c4bc7d295
    Image ID:
    Port:          <none>
    Host Port:     <none>
    Args:
      /mcm-apiserver
      --mongo-database=mcm
      --enable-admission-plugins=HCMUserIdentity,KlusterletCA,NamespaceLifecycle
      --secure-port=6443
      --tls-cert-file=/var/run/apiserver/tls.crt
      --tls-private-key-file=/var/run/apiserver/tls.key
      --klusterlet-cafile=/var/run/klusterlet/ca.crt
      --klusterlet-certfile=/var/run/klusterlet/tls.crt
      --klusterlet-keyfile=/var/run/klusterlet/tls.key
      --http2-max-streams-per-connection=1000
      --etcd-servers=http://etcd-cluster.open-cluster-management.svc.cluster.local:2379
      --mongo-host=multicluster-mongodb
      --mongo-replicaset=rs0
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Limits:
      memory:  2Gi
    Requests:
      cpu:      200m
      memory:   256Mi
    Liveness:   http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get https://:6443/healthz delay=2s timeout=1s period=10s #success=1 #failure=3
    Environment:
      MONGO_USERNAME:  <set to the key 'user' in secret 'mongodb-admin'>      Optional: false
      MONGO_PASSWORD:  <set to the key 'password' in secret 'mongodb-admin'>  Optional: false
      MONGO_SSLCA:     /certs/mongodb-ca/tls.crt
      MONGO_SSLCERT:   /certs/mongodb-client/tls.crt
      MONGO_SSLKEY:    /certs/mongodb-client/tls.key
    Mounts:
      /certs/mongodb-ca from mongodb-ca-cert (rw)
      /certs/mongodb-client from mongodb-client-cert (rw)
      /var/run/apiserver from apiserver-certs (rw)
      /var/run/klusterlet from klusterlet-certs (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from hub-sa-token-6xwh2 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  apiserver-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-apiserver-self-signed-secrets
    Optional:    false
  klusterlet-certs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  mcm-klusterlet-self-signed-secrets
    Optional:    false
  mongodb-ca-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicloud-ca-cert
    Optional:    false
  mongodb-client-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  multicluster-mongodb-client-cert
    Optional:    false
  hub-sa-token-6xwh2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  hub-sa-token-6xwh2
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason       Age                   From                        Message
  ----     ------       ----                  ----                        -------
  Warning  FailedMount  176m (x136 over 33h)  kubelet, worker2.lab.local  Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[klusterlet-certs mongodb-ca-cert mongodb-client-cert hub-sa-token-6xwh2 apiserver-certs]: timed out waiting for the condition
  Warning  FailedMount  152m (x148 over 33h)  kubelet, worker2.lab.local  Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[hub-sa-token-6xwh2 apiserver-certs klusterlet-certs mongodb-ca-cert mongodb-client-cert]: timed out waiting for the condition
  Warning  FailedMount  90m (x134 over 33h)   kubelet, worker2.lab.local  Unable to attach or mount volumes: unmounted volumes=[mongodb-client-cert mongodb-ca-cert], unattached volumes=[mongodb-client-cert hub-sa-token-6xwh2 apiserver-certs klusterlet-certs mongodb-ca-cert]: timed out waiting for the condition
  Warning  FailedMount  66m (x967 over 33h)   kubelet, worker2.lab.local  MountVolume.SetUp failed for volume "mongodb-client-cert" : secret "multicluster-mongodb-client-cert" not found
  Warning  FailedMount  61m (x286 over 33h)   kubelet, worker2.lab.local  Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[apiserver-certs klusterlet-certs mongodb-ca-cert mongodb-client-cert hub-sa-token-6xwh2]: timed out waiting for the condition
  Warning  FailedMount  56m (x145 over 33h)   kubelet, worker2.lab.local  Unable to attach or mount volumes: unmounted volumes=[mongodb-ca-cert mongodb-client-cert], unattached volumes=[mongodb-ca-cert mongodb-client-cert hub-sa-token-6xwh2 apiserver-certs klusterlet-certs]: timed out waiting for the condition
  Warning  FailedMount  52m (x974 over 33h)   kubelet, worker2.lab.local  MountVolume.SetUp failed for volume "mongodb-ca-cert" : secret "multicloud-ca-cert" not found
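
The two `FailedMount` messages show the apiserver is blocked on the same missing certificate secrets; a minimal check (again assuming the `open-cluster-management` namespace) is:

```shell
# Both secrets must exist before mcm-apiserver can mount its volumes; if
# either is missing, certificate issuance on the hub never completed.
oc get secret multicloud-ca-cert multicluster-mongodb-client-cert \
  -n open-cluster-management
```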

multiclusterhub YAML:

apiVersion: operators.open-cluster-management.io/v1beta1
kind: MultiClusterHub
metadata:
  creationTimestamp: '2020-06-26T09:47:41Z'
  finalizers:
    - finalizer.operators.open-cluster-management.io
  generation: 3
  name: multiclusterhub
  namespace: open-cluster-management
  resourceVersion: '105195441'
  selfLink: >-
    /apis/operators.open-cluster-management.io/v1beta1/namespaces/open-cluster-management/multiclusterhubs/multiclusterhub
  uid: 0c8894ef-476d-41e1-8051-d6ed50fea16e
spec:
  cloudPakCompatibility: false
  etcd:
    storage: 1Gi
    storageClass: thin
  failover: false
  hive:
    backup:
      velero: {}
    failedProvisionConfig: {}
  imagePullSecret: myclustersecret
  ipv6: false
  mongo:
    storage: 5Gi
    storageClass: thin
  overrides: {}
status:
  currentVersion: 1.0.1
  deployments:
    - name: hive-operator
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:22:04Z'
            lastUpdateTime: '2020-06-26T09:22:04Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:21:12Z'
            lastUpdateTime: '2020-06-26T09:22:04Z'
            message: ReplicaSet "hive-operator-f87cb8795" has successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 13
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: mcm-webhook
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:50:08Z'
            lastUpdateTime: '2020-06-26T09:50:08Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:47:45Z'
            lastUpdateTime: '2020-06-26T09:50:08Z'
            message: ReplicaSet "mcm-webhook-5978645464" has successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: multicluster-operators-hub-subscription
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:23:18Z'
            lastUpdateTime: '2020-06-26T09:23:18Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:21:13Z'
            lastUpdateTime: '2020-06-26T09:23:18Z'
            message: >-
              ReplicaSet "multicluster-operators-hub-subscription-96c947f4" has
              successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 13
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: multiclusterhub-repo
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:49:10Z'
            lastUpdateTime: '2020-06-26T09:49:10Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:47:42Z'
            lastUpdateTime: '2020-06-26T09:49:10Z'
            message: >-
              ReplicaSet "multiclusterhub-repo-794c964dcf" has successfully
              progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: application-chart-5a30b-applicationui
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T10:04:27Z'
            lastUpdateTime: '2020-06-26T10:04:27Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T10:04:27Z'
            lastUpdateTime: '2020-06-26T10:04:27Z'
            message: >-
              ReplicaSet "application-chart-5a30b-applicationui-75b7656f8b" has
              successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: mcm-controller
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:50:09Z'
            lastUpdateTime: '2020-06-26T09:50:09Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:47:45Z'
            lastUpdateTime: '2020-06-26T09:50:09Z'
            message: >-
              ReplicaSet "mcm-controller-5f875c7695" has successfully
              progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: etcd-operator
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:21:16Z'
            lastUpdateTime: '2020-06-26T09:21:16Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:21:10Z'
            lastUpdateTime: '2020-06-26T09:21:16Z'
            message: ReplicaSet "etcd-operator-5f96987979" has successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 2
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: multicluster-operators-standalone-subscription
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:21:12Z'
            lastUpdateTime: '2020-06-26T09:22:00Z'
            message: >-
              ReplicaSet
              "multicluster-operators-standalone-subscription-657746c9d5" has
              successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
          - lastTransitionTime: '2020-06-28T07:53:45Z'
            lastUpdateTime: '2020-06-28T07:53:45Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
        observedGeneration: 13
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: mcm-apiserver
      status:
        conditions:
          - lastTransitionTime: '2020-06-26T09:47:44Z'
            lastUpdateTime: '2020-06-26T09:47:44Z'
            message: Deployment does not have minimum availability.
            reason: MinimumReplicasUnavailable
            status: 'False'
            type: Available
          - lastTransitionTime: '2020-06-27T06:23:03Z'
            lastUpdateTime: '2020-06-27T06:23:03Z'
            message: ReplicaSet "mcm-apiserver-564dfd455" has timed out progressing.
            reason: ProgressDeadlineExceeded
            status: 'False'
            type: Progressing
        observedGeneration: 1
        replicas: 1
        unavailableReplicas: 1
        updatedReplicas: 1
    - name: configmap-watcher-708df
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:50:24Z'
            lastUpdateTime: '2020-06-26T09:50:24Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:49:33Z'
            lastUpdateTime: '2020-06-26T09:50:24Z'
            message: >-
              ReplicaSet "configmap-watcher-708df-766b45dc94" has successfully
              progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 1
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
    - name: multicluster-operators-application
      status:
        availableReplicas: 1
        conditions:
          - lastTransitionTime: '2020-06-26T09:22:51Z'
            lastUpdateTime: '2020-06-26T09:22:51Z'
            message: Deployment has minimum availability.
            reason: MinimumReplicasAvailable
            status: 'True'
            type: Available
          - lastTransitionTime: '2020-06-26T09:21:12Z'
            lastUpdateTime: '2020-06-26T09:22:51Z'
            message: >-
              ReplicaSet "multicluster-operators-application-7c8dbb89c5" has
              successfully progressed.
            reason: NewReplicaSetAvailable
            status: 'True'
            type: Progressing
        observedGeneration: 13
        readyReplicas: 1
        replicas: 1
        updatedReplicas: 1
  desiredVersion: 1.0.1
  phase: Pending
