
PostgreSQL in version 16.2 with image ghcr.io/cloudnative-pg/postgresql:16.1 #93

Open
pchovelon opened this issue Feb 27, 2024 · 3 comments


@pchovelon

Hi,

I want to create a PostgreSQL 16.1 cluster in a minikube environment:

cluster.yaml:

apiVersion: v1
data:
  username: ZGFsaWJv
  password: REBsaWJv
kind: Secret
metadata:
  name: dalibo-password
  labels:
    cnpg.io/reload: "true"
type: kubernetes.io/basic-auth

---

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-prod-161
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:16.1
  instances: 1
  storage:
    size: 1Gi
  managed:
    roles:
    - name: dalibo
      ensure: present
      comment: Utilisateur Dalibo
      login: true
      superuser: false
      passwordSecret:
        name: dalibo-password
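
As a quick sanity check, the base64-encoded fields of the Secret above can be decoded to confirm the credentials (a minimal sketch in Python, using the values from the manifest):

```python
import base64

# Kubernetes stores Secret `data` values base64-encoded;
# decode the fields from the dalibo-password Secret above.
username = base64.b64decode("ZGFsaWJv").decode()
password = base64.b64decode("REBsaWJv").decode()
print(username)  # -> dalibo
```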

Everything seems good:

$ k get cluster
NAME               AGE   INSTANCES   READY   STATUS                     PRIMARY
cluster-prod-161   11m   1           1       Cluster in healthy state   cluster-prod-161-1

$ k describe pod cluster-prod-161-1
Name:             cluster-prod-161-1
Namespace:        default
Priority:         0
Service Account:  cluster-prod-161
Node:             minikube/192.168.49.2
Start Time:       Tue, 27 Feb 2024 16:14:43 +0100
Labels:           cnpg.io/cluster=cluster-prod-161
                  cnpg.io/instanceName=cluster-prod-161-1
                  cnpg.io/instanceRole=primary
                  cnpg.io/podRole=instance
                  role=primary
Annotations:      cnpg.io/nodeSerial: 1
                  cnpg.io/operatorVersion: 1.22.0
                  cnpg.io/podEnvHash: 5b996998b8
                  cnpg.io/podSpec:
                    {"volumes":[{"name":"pgdata","persistentVolumeClaim":{"claimName":"cluster-prod-161-1"}},{"name":"scratch-data","emptyDir":{}},{"name":"sh...
Status:           Running
SeccompProfile:   RuntimeDefault
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  Cluster/cluster-prod-161
Init Containers:
  bootstrap-controller:
    Container ID:    docker://796d1b8c6ab5868085bc7a6d3abc1029a0c3536a60601dfad0ea76343f04fee6
    Image:           ghcr.io/cloudnative-pg/cloudnative-pg:1.22.0
    Image ID:        docker-pullable://ghcr.io/cloudnative-pg/cloudnative-pg@sha256:d9cff469d73f60df87f8f21c99e4c4e5f5f2b120b60de6500e5b74e5c45d332f
    Port:            <none>
    Host Port:       <none>
    SeccompProfile:  RuntimeDefault
    Command:
      /manager
      bootstrap
      /controller/manager
      --log-level=info
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Tue, 27 Feb 2024 16:14:44 +0100
      Finished:     Tue, 27 Feb 2024 16:14:44 +0100
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /controller from scratch-data (rw)
      /dev/shm from shm (rw)
      /etc/app-secret from app-secret (rw)
      /run from scratch-data (rw)
      /var/lib/postgresql/data from pgdata (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhv8g (ro)
Containers:
  postgres:
    Container ID:    docker://1e82e2624392d7a8181a3316b12f1d85dda8728cabfec53080c79c541a5a56ed
    Image:           ghcr.io/cloudnative-pg/postgresql:16.1
    Image ID:        docker-pullable://ghcr.io/cloudnative-pg/postgresql@sha256:6e0bdf657ef224339abc6cd7ad669b049fb26875e7d9456d46bf40ff38e26b1b
    Ports:           5432/TCP, 9187/TCP, 8000/TCP
    Host Ports:      0/TCP, 0/TCP, 0/TCP
    SeccompProfile:  RuntimeDefault
    Command:
      /controller/manager
      instance
      run
      --log-level=info
    State:          Running
      Started:      Tue, 27 Feb 2024 16:14:44 +0100
    Ready:          True
    Restart Count:  0
    Liveness:       http-get http://:8000/healthz delay=0s timeout=5s period=10s #success=1 #failure=3
    Readiness:      http-get http://:8000/readyz delay=0s timeout=5s period=10s #success=1 #failure=3
    Startup:        http-get http://:8000/healthz delay=0s timeout=5s period=10s #success=1 #failure=360
    Environment:
      PGDATA:        /var/lib/postgresql/data/pgdata
      POD_NAME:      cluster-prod-161-1
      NAMESPACE:     default
      CLUSTER_NAME:  cluster-prod-161
      PGPORT:        5432
      PGHOST:        /controller/run
    Mounts:
      /controller from scratch-data (rw)
      /dev/shm from shm (rw)
      /etc/app-secret from app-secret (rw)
      /run from scratch-data (rw)
      /var/lib/postgresql/data from pgdata (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-dhv8g (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             True 
  ContainersReady   True 
  PodScheduled      True 
Volumes:
  pgdata:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  cluster-prod-161-1
    ReadOnly:   false
  scratch-data:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     
    SizeLimit:  <unset>
  shm:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:     Memory
    SizeLimit:  <unset>
  app-secret:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  cluster-prod-161-app
    Optional:    false
  kube-api-access-dhv8g:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  11m   default-scheduler  Successfully assigned default/cluster-prod-161-1 to minikube
  Normal  Pulled     11m   kubelet            Container image "ghcr.io/cloudnative-pg/cloudnative-pg:1.22.0" already present on machine
  Normal  Created    11m   kubelet            Created container bootstrap-controller
  Normal  Started    11m   kubelet            Started container bootstrap-controller
  Normal  Pulled     11m   kubelet            Container image "ghcr.io/cloudnative-pg/postgresql:16.1" already present on machine
  Normal  Created    11m   kubelet            Created container postgres
  Normal  Started    11m   kubelet            Started container postgres

However, the PostgreSQL version isn't 16.1 but 16.2:

$ kubectl exec -it cluster-prod-161-1 -- psql -q -x -c "select version();"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
-[ RECORD 1 ]------------------------------------------------------------------------------------------------------------------------
version | PostgreSQL 16.2 (Debian 16.2-1.pgdg110+2) on x86_64-pc-linux-gnu, compiled by gcc (Debian 10.2.1-6) 10.2.1 20210110, 64-bit

What I did:

minikube start
kubectl apply -f https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.22/releases/cnpg-1.22.0.yaml
kubectl apply -f cluster.yaml

Any idea?

Thanks

@pchovelon (Author) commented Feb 28, 2024

After another try this morning, the issue seems to be related only to the latest build of the 16.1 image:

With:
imageName: ghcr.io/cloudnative-pg/postgresql:16.1@sha256:f5f919e6fb4a818d5544ac12e5ed6bdfa3fd1958ead2008a1e47df2ab1662403

Got:

k exec cluster-prod-1 -- psql -c "show server_version;"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
         server_version         
--------------------------------
 16.1 (Debian 16.1-1.pgdg110+1)
(1 row)

With:

imageName: ghcr.io/cloudnative-pg/postgresql:16.1@sha256:6e0bdf657ef224339abc6cd7ad669b049fb26875e7d9456d46bf40ff38e26b1b

Got:

k exec cluster-prod-1 -- psql -c "show server_version;"
Defaulted container "postgres" out of: postgres, bootstrap-controller (init)
         server_version         
--------------------------------
 16.2 (Debian 16.2-1.pgdg110+2)
(1 row)
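
Pinning the image by digest, as above, is a reliable workaround until the tag is fixed, since a digest can never drift to different package contents. In the Cluster spec it looks like this (a sketch reusing the digest of the build that really contains 16.1):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cluster-prod-161
spec:
  # Pin by digest so the floating 16.1 tag can't resolve to an image
  # that actually ships 16.2 packages.
  imageName: ghcr.io/cloudnative-pg/postgresql:16.1@sha256:f5f919e6fb4a818d5544ac12e5ed6bdfa3fd1958ead2008a1e47df2ab1662403
  instances: 1
  storage:
    size: 1Gi
```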

@NiccoloFei (Contributor) commented

It seems that's the case for all the "previous" minor versions: 16.1 / 15.5 / 14.10 / 13.13 / 12.17.
The CNPG images don't install the PostgreSQL packages themselves; that's done in the upstream https://hub.docker.com/_/postgres images they're based on. What could have happened is that we received a tag that didn't match the PostgreSQL package actually installed in the image (e.g. a 16.1 tag that in reality contained 16.2 packages).
In any case, even if something like that happened, all the "previous" minor-version images on https://hub.docker.com/_/postgres look correct now, so it was eventually fixed upstream.

Unfortunately we don't have any automation in place to rebuild older images, so that would have to be done manually.
Alternatively, to fix those images so that the tag properly reflects the server version, we could move the 16.1 tag back to the previous image build. We'd lose a few Python dependency updates that way, but at least the mismatch would be fixed.

For example:

  • Move 16.1 tag from 16.1-19 to 16.1-18 (and delete 16.1-19)
  • Move 15.5 tag from 15.5-18 to 15.5-17 (and delete 15.5-18)
  • and so on.
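
The tag move above could be scripted with a tool like crane from go-containerregistry (a sketch, assuming crane is installed and the caller has push access to ghcr.io; the `echo` only prints the command, so nothing is changed until you drop it):

```shell
# Sketch: repoint the floating 16.1 tag at the previous build (16.1-18),
# whose packages really were PostgreSQL 16.1.
IMAGE=ghcr.io/cloudnative-pg/postgresql
SRC_TAG=16.1-18
echo crane tag "${IMAGE}:${SRC_TAG}" 16.1  # drop the echo to actually apply it
```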

@pchovelon (Author) commented

I don't know how much time it would take you to rebuild all the images manually.

It would definitely be nice to have this fixed, or at least to add a warning to the documentation about the mismatched versions.
