Red Hat Quay (or Kwaaaay, as my US colleagues pronounce it) is a container registry originally from the folks at CoreOS, who were recently acquired by Red Hat. A container registry plays a pivotal role in a successful container strategy, making it simple for developers and administrators to store, manage, distribute and deploy container images across their container platforms, be that a laptop, a standalone server or a distributed solution like Kubernetes.

Quay has a number of additional features that make it an ideal choice for enterprises, including high availability, geo-replication, auditing, authentication and team-based collaboration.

Podman is a new CLI tool, based on libpod, that enables users to run standalone OCI containers, similar to the Docker CLI. Unlike Docker, however, Podman doesn't require a daemon to pull, push, inspect or run containers, making it a lighter-weight solution that is simpler to integrate with systemd for system services.
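As a quick, minimal sketch of what that daemonless workflow looks like (the image here is just an example, and on RHEL 7 you'll generally run Podman with sudo), the familiar Docker-style commands run directly as ordinary processes:

```shell
# No daemon in the picture: each command below is an ordinary process.
sudo podman pull registry.access.redhat.com/rhel7/rhel
sudo podman run --rm registry.access.redhat.com/rhel7/rhel cat /etc/redhat-release
sudo podman ps -a   # list containers, just like 'docker ps -a'
```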

Red Hat Quay ships exclusively as a container image, and the default documentation advises using Docker as the runtime. This works great; however, I've been wanting an excuse to get to grips with Podman and to use it to run system services, and this seemed the ideal opportunity.

This article describes setting up Quay using Podman. Note that this is a simple deployment of Quay for development purposes, not for production, which would require highly available services and object storage such as Ceph. If you need an HA Quay, this is not the article for you; head over to the official documentation instead.

Pre-Requisites

OK, so I'll presume you have an up-to-date and registered RHEL 7.6 host to start from. The Quay docs recommend 2 vCPUs, 4GB RAM and some disk space as a minimum. I'm using a t3.medium in AWS.

Podman is in the RHEL Extras repository, so enable the repo and install it using yum. The version at the time of writing is 0.12.1.2.

$ sudo subscription-manager repos --enable="rhel-7-server-extras-rpms"
$ sudo yum install podman

Quay Architecture

Architecturally, Quay is very simple: a core Quay service, an SQL database (PostgreSQL or MySQL) and a Redis key-value store. Optionally, there is also the Clair security scanning tool, which we are installing, and Quay builder images, which we are not. For this demo, we'll be using PostgreSQL for the database, as Clair requires a PostgreSQL database to store its data. Quay communicates with all of the dependent components; clients simply access the Quay service.

High Level Quay Components

Download the Container Images

You need a CoreOS login to download Quay and Clair. Red Hat customers can retrieve a login by following the instructions in this article: https://access.redhat.com/solutions/3533201

$ podman login -u="<your_username>" -p="<your_token>" quay.io
Login Succeeded!
$ sudo podman pull quay.io/coreos/quay:v2.9.3
$ sudo podman pull quay.io/coreos/clair-jwt:v2.0.7

Quay and Clair also require PostgreSQL and Redis container images. Here I'm using official Red Hat images from the Red Hat registry.

$ sudo podman pull registry.access.redhat.com/rhscl/postgresql-10-rhel7
$ sudo podman pull registry.access.redhat.com/rhscl/redis-32-rhel7

Prepare and Run the Container Images

The Quay documentation focuses on using Docker to run these container images, so a bit of detective work is required to map everything to Podman so that it can run reliably under systemd. In particular, we need to extract the user the container image runs as, any volumes that are required, and the basic usage instructions.

PostgreSQL

One advantage of using the RHSCL PostgreSQL image is, of course, that there is some documentation on how to use it :)

Using podman inspect, determine any embedded user, volume and basic usage information.

$ sudo podman inspect registry.access.redhat.com/rhscl/postgresql-10-rhel7 | grep User
            "User": "26",
$ sudo podman inspect registry.access.redhat.com/rhscl/postgresql-10-rhel7 | grep -A2 Volumes
            "Volumes": {
                "/var/lib/pgsql/data": {}
            },
$ sudo podman inspect registry.access.redhat.com/rhscl/postgresql-10-rhel7 | grep usage
            "usage": "docker run -d --name postgresql_database -e POSTGRESQL_USER=user -e POSTGRESQL_PASSWORD=pass -e POSTGRESQL_DATABASE=db -p 5432:5432 rhscl/postgresql-10-rhel7"

We need a directory structure to map persistent data into our containers. I've chosen to create this within /opt/containers/; choose wherever you feel is appropriate. In addition, we need to chown the directories and update ACLs to match the UID/GID defined within the container (UID and GID 26 in this case).

$ sudo mkdir -p /opt/containers/var/lib/pgsql/data
$ sudo chown 26:26 /opt/containers/var/lib/pgsql/data
$ sudo setfacl -m u:26:-wx /opt/containers/var/lib/pgsql/data
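Before starting the container, it can be worth sanity-checking that the ownership and ACLs landed as intended; something like the following works (this is just a verification step I find handy, not part of the official setup):

```shell
# Confirm the directory is owned by UID:GID 26:26 and carries the expected ACL entry.
stat -c '%u:%g %n' /opt/containers/var/lib/pgsql/data
getfacl /opt/containers/var/lib/pgsql/data
```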

Now let's create a systemd unit file that will manage the PostgreSQL container. Note that in the example below, I'm setting the PostgreSQL user, password and database to those I wish to create for use with Quay.

/etc/systemd/system/postgresql-service.service

[Unit]
Description=PostgreSQL Podman Container for Quay
After=network.target

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "postgresql-service"

ExecStart=/usr/bin/podman run --name postgresql-service -v /opt/containers/var/lib/pgsql/data:/var/lib/pgsql/data:Z -e POSTGRESQL_USER=quay -e POSTGRESQL_PASSWORD=quaysecret -e POSTGRESQL_ADMIN_PASSWORD=quayadmin -e POSTGRESQL_DATABASE=quay --net host registry.access.redhat.com/rhscl/postgresql-10-rhel7

ExecReload=-/usr/bin/podman stop "postgresql-service"
ExecReload=-/usr/bin/podman rm "postgresql-service"
ExecStop=-/usr/bin/podman stop "postgresql-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Finally, we can reload the systemd daemon so it sees our unit file, and then start the PostgreSQL service.

$ sudo systemctl daemon-reload
$ sudo systemctl start postgresql-service
$ sudo systemctl status postgresql-service
● postgresql-service.service - PostgreSQL Podman Container for Quay
   Loaded: loaded (/etc/systemd/system/postgresql-service.service; disabled; vendor preset: disabled)
   Active: active (running) since Mon 2019-01-21 17:33:10 UTC; 5s ago
  Process: 21654 ExecStartPre=/usr/bin/podman rm postgresql-service (code=exited, status=125)
 Main PID: 21675 (podman)
   CGroup: /system.slice/postgresql-service.service
           └─21675 /usr/bin/podman run --name postgresql-service -v /opt/containers/var/lib/pgsql/data:/var/lib/pgsql/data:Z -e POSTGRESQL_USER=quay -e POSTGRESQL_PASSWORD=qua...

Cool :)
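One caveat: systemctl start only runs the service for the current boot. If you want the container to come back after a reboot, enable the unit as well:

```shell
# Start the service on every subsequent boot, not just this one.
sudo systemctl enable postgresql-service
# The same applies to the redis-service, quay-service and clair-service units later on.
```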

Quay requires the pg_trgm (trigram) extension to be enabled within PostgreSQL. The RHSCL image includes the extension, but it is not enabled by default, so let's enable it.

$ sudo podman exec -it postgresql-service /bin/bash -c 'echo "SELECT * FROM pg_available_extensions" | /opt/rh/rh-postgresql10/root/usr/bin/psql'
        name        | default_version | installed_version |                               comment
--------------------+-----------------+-------------------+----------------------------------------------------------------------
 adminpack          | 1.1             |                   | administrative functions for PostgreSQL

$ sudo podman exec -it postgresql-service /bin/bash -c 'echo "CREATE EXTENSION pg_trgm" | /opt/rh/rh-postgresql10/root/usr/bin/psql'
 CREATE EXTENSION

$ sudo podman exec -it postgresql-service /bin/bash -c 'echo "SELECT * FROM pg_extension" | /opt/rh/rh-postgresql10/root/usr/bin/psql'
  extname | extowner | extnamespace | extrelocatable | extversion | extconfig | extcondition
 ---------+----------+--------------+----------------+------------+-----------+--------------
  plpgsql |       10 |           11 | f              | 1.0        |           |
  pg_trgm |       10 |         2200 | t              | 1.3        |           |
 (2 rows)

To enable the Quay installer to fully manage the database setup, you need to make the quay user a PostgreSQL SUPERUSER.

$ sudo podman exec -it postgresql-service /bin/bash -c 'echo "ALTER USER quay WITH SUPERUSER;" | /opt/rh/rh-postgresql10/root/usr/bin/psql'
ALTER ROLE

NOTE: You will probably wish to remove the SUPERUSER privilege after installation, but that is not covered in this article.

Last but not least, enable the appropriate firewalld service rule.

$ sudo firewall-cmd --zone=public --add-service=postgresql
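Note that firewall-cmd changes made without --permanent only last until firewalld reloads or the host reboots. If you want the rules in this article to persist, also add them to the permanent configuration, for example:

```shell
# Runtime rule (takes effect immediately) plus permanent rule (survives reboot).
sudo firewall-cmd --zone=public --add-service=postgresql
sudo firewall-cmd --zone=public --permanent --add-service=postgresql
# Alternatively, copy all current runtime rules into the permanent config in one go.
sudo firewall-cmd --runtime-to-permanent
```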

Redis

The approach taken above to discover the PostgreSQL resources can be used for the Redis container. For brevity, I've skipped the verbose steps and included just the commands and files required below.

Image: registry.access.redhat.com/rhscl/redis-32-rhel7
User: 1001
Volumes: /var/lib/redis/data
Usage: docker run -d --name redis_database -p 6379:6379 rhscl/redis-32-rhel7
Port/Service: 6379/redis

Filesystems

$ sudo mkdir -p /opt/containers/var/lib/redis/data
$ sudo chown 1001:1001 /opt/containers/var/lib/redis/data
$ sudo setfacl -m u:1001:-wx /opt/containers/var/lib/redis/data

/etc/systemd/system/redis-service.service

[Unit]
Description=Redis Podman Container for Quay
After=network.target

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "redis-service"

ExecStart=/usr/bin/podman run --name redis-service -v /opt/containers/var/lib/redis/data:/var/lib/redis/data:Z -e REDIS_PASSWORD=quaysecret --net host registry.access.redhat.com/rhscl/redis-32-rhel7

ExecReload=-/usr/bin/podman stop "redis-service"
ExecReload=-/usr/bin/podman rm "redis-service"
ExecStop=-/usr/bin/podman stop "redis-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Firewall

$ sudo firewall-cmd --zone=public --add-service=redis

Quay

The approach taken above to discover the PostgreSQL resources can be used for the Quay container. For brevity, I've skipped the verbose steps and included just the commands and files required below.

Image: quay.io/coreos/quay:v2.9.3
User: Not defined (root)
Volumes: /conf/stack, /datastorage
Usage: See the Quay documentation
Port/Service: 80/http, 443/https

Filesystems

$ sudo mkdir -p /opt/containers/var/lib/quay/datastorage
$ sudo mkdir -p /opt/containers/var/lib/quay/config
$ sudo setfacl -m u:0:-wx /opt/containers/var/lib/quay/config
$ sudo setfacl -m u:0:-wx /opt/containers/var/lib/quay/datastorage

/etc/systemd/system/quay-service.service

[Unit]
Description=Quay Service Podman Container
After=network.target
Wants=postgresql-service.service redis-service.service

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "quay-service"

ExecStart=/usr/bin/podman run --name quay-service -v /opt/containers/var/lib/quay/datastorage:/datastorage:Z -v /opt/containers/var/lib/quay/config:/conf/stack:Z --net host quay.io/coreos/quay:v2.9.3

ExecReload=-/usr/bin/podman stop "quay-service"
ExecReload=-/usr/bin/podman rm "quay-service"
ExecStop=-/usr/bin/podman stop "quay-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Firewall

$ sudo firewall-cmd --zone=public --add-service=http
$ sudo firewall-cmd --zone=public --add-service=https

Clair

The approach taken above to discover the PostgreSQL resources can be used for the Clair container. For brevity, I've skipped the verbose steps and included just the commands and files required below.

Image: quay.io/coreos/clair-jwt:v2.0.7
User: Not defined (root)
Volumes: /config
Usage: See the Clair documentation
Port/Service: 6060, 6061

Filesystems

$ sudo mkdir -p /opt/containers/var/lib/clair/config
$ sudo setfacl -m u:0:-wx /opt/containers/var/lib/clair/config

/etc/systemd/system/clair-service.service

[Unit]
Description=Clair Service Podman Container
After=network.target
Wants=postgresql-service.service

[Service]
Type=simple
TimeoutStartSec=5m
ExecStartPre=-/usr/bin/podman rm "clair-service"

ExecStart=/usr/bin/podman run --name clair-service -v /opt/containers/var/lib/clair/config:/config:Z --net host quay.io/coreos/clair-jwt:v2.0.7

ExecReload=-/usr/bin/podman stop "clair-service"
ExecReload=-/usr/bin/podman rm "clair-service"
ExecStop=-/usr/bin/podman stop "clair-service"
Restart=always
RestartSec=30

[Install]
WantedBy=multi-user.target

Firewall

$ sudo firewall-cmd --zone=public --add-port=6060/tcp
$ sudo firewall-cmd --zone=public --add-port=6061/tcp

Configure Quay and Clair

Quay and Clair require additional configuration: Quay uses a web UI based setup, while Clair uses a config file.

Configure Quay

First off, start the Quay service.

$ sudo systemctl start quay-service

Give it a minute or so (it will do some housekeeping), then navigate to http://<your_quay_fqdn>/setup and follow the guided setup.

Fill in the database information.
quay-setup-01

After a few moments, the database will be configured.
quay-setup-02

Quay will at this point require a restart of the container.
quay-setup-03
Although the container will stop, it cannot restart itself because it is driven by systemd, so manually restart the container from the command line:

$ sudo systemctl restart quay-service

Once the container has restarted, the guided setup will ask you to create a Quay superuser to administer Quay.
quay-setup-04

Once complete, Quay will again require a container restart (so do so manually as above), and finally Quay will be installed.
quay-setup-06

Navigate to the Quay UI by clicking the link to the superuser panel. You should be greeted with a message asking you to update the Redis configuration. Scroll down and fill in the appropriate Redis settings.
quay-setup-07

Quay setup is now complete.

Clair Setup

Clair requires a PostgreSQL database to store the security data it collates. As we already have a database container running for Quay, I'm using the same container to host the Clair DB too.

$ sudo podman exec -it postgresql-service /bin/bash
bash-4.2$ /opt/rh/rh-postgresql10/root/bin/psql
psql (10.6)
Type "help" for help.
postgres=# CREATE DATABASE clairdb;
CREATE DATABASE
postgres=# CREATE USER clair WITH ENCRYPTED PASSWORD 'clairsecret';
CREATE ROLE
postgres=# GRANT ALL PRIVILEGES ON DATABASE clairdb TO clair;
GRANT
postgres=# ALTER DATABASE clairdb OWNER TO clair;
ALTER DATABASE
postgres=# \l
                                 List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 clairdb   | clair    | UTF8     | en_US.utf8 | en_US.utf8 | =Tc/clair            +
           |          |          |            |            | clair=CTc/clair
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 quay      | quay     | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(5 rows)
postgres=# \q
bash-4.2$ exit
exit

To set up Clair, you need to create a config file and place it in the config filesystem path so that the container can use it. The Clair documentation covers this well, but a copy of the config used in this example is below.

/opt/containers/var/lib/clair/config/config.yaml

clair:
  database:
    type: pgsql
    options:
      # A PostgreSQL Connection string pointing to the Clair Postgres database.
      # Documentation on the format can be found at: http://www.postgresql.org/docs/9.4/static/libpq-connect.html
      source: postgresql://[email protected]:5432/clairdb?sslmode=disable
      cachesize: 16384
  api:
    # The port at which Clair will report its health status. For example, if Clair is running at
    # https://clair.mycompany.com, the health will be reported at
    # http://clair.mycompany.com:6061/health.
    healthport: 6061

    port: 6062
    timeout: 900s

    # paginationkey can be any random set of characters. *Must be the same across all Clair instances*.
    paginationkey:

  updater:
    # interval defines how often Clair will check for updates from its upstream vulnerability databases.
    interval: 6h
    notifier:
      attempts: 3
      renotifyinterval: 1h
      http:
        # QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
        # For example: https://myregistry.mycompany.com
        endpoint: http://kwaaaay.spectre.portalvein.io/secscan/notify
        proxy: http://localhost:6063

jwtproxy:
  signer_proxy:
    enabled: true
    listen_addr: :6063
    ca_key_file: /certificates/mitm.key # Generated internally, do not change.
    ca_crt_file: /certificates/mitm.crt # Generated internally, do not change.
    signer:
      issuer: security_scanner
      expiration_time: 5m
      max_skew: 1m
      nonce_length: 32
      private_key:
        type: autogenerated
        options:
          rotate_every: 12h
          key_folder: /config/
          key_server:
            type: keyregistry
            options:
              # QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
              # For example: https://myregistry.mycompany.com
              registry: http://kwaaaay.spectre.portalvein.io/keys/


  verifier_proxies:
  - enabled: true
    # The port at which Clair will listen.
    listen_addr: :6060

    # If Clair is to be served via TLS, uncomment these lines. See the "Running Clair under TLS"
    # section below for more information.
    # key_file: /config/clair.key
    # crt_file: /config/clair.crt

    verifier:
      # CLAIR_ENDPOINT is the endpoint at which this Clair will be accessible. Note that the port
      # specified here must match the listen_addr port a few lines above this.
      # Example: https://myclair.mycompany.com:6060
      audience: http://kwaaaay.spectre.portalvein.io:6060

      upstream: http://localhost:6062
      key_server:
        type: keyregistry
        options:
          # QUAY_ENDPOINT defines the endpoint at which Quay Enterprise is running.
          # Example: https://myregistry.mycompany.com
          registry: http://kwaaaay.spectre.portalvein.io/keys/
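The paginationkey in the config above is left blank in this example. It needs to be a random value (and the same across all Clair instances); a base64-encoded 32-byte key can be generated with openssl, for example:

```shell
# Generate a random 32-byte, base64-encoded key suitable for paginationkey.
openssl rand -base64 32
```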

Start the Clair service.

$ sudo systemctl start clair-service

Clair will take a couple of minutes to initialize, as it downloads security datasets such as CVEs from various sources to populate its database.
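To check whether Clair has finished coming up, you can poll the health endpoint on the healthport configured earlier (6061 in this example); this is just a handy check, not an official requirement:

```shell
# Prints the HTTP status code; 200 means Clair reports healthy.
curl -s -o /dev/null -w '%{http_code}\n' http://localhost:6061/health
```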

Once started, head back to the superuser admin panel. You should have a notification alert, as Clair will attempt to automatically create a service key for accessing Quay. Accept the service key creation.
clair-setup-01

Finally, turn on security scanning in the WebUI.
clair-setup-02

The Proof

OK, so that sounded long-winded; what did we end up with? Quay and Clair integrate by running security scans against images you upload to Quay, so let's do just that.

Here, I'm using Skopeo and a Quay access token to copy the default RHEL 7 container image from registry.access.redhat.com to the Quay server we've just spun up. Note that I don't need sudo, docker pull/push or a daemon to make this copy, as Skopeo copies directly between registries.

$ skopeo copy --dest-tls-verify=false --dest-creds=\$app:<QUAY-TOKEN>  docker://registry.access.redhat.com/rhel7/rhel docker://kwaaaay.spectre.portalvein.io:80/rhel7/rhel
Getting image source signatures
Copying blob sha256:cf9df0949547455093eff8889609c44291b52007ebb5f539ca58baa297b66e55
 72.31 MB / 72.31 MB [=====================================================] 25s
Copying blob sha256:de169e86e901aa79d8c519430a91e2858aa4ccf00583117a1f9fd5e9862d203b
 1.20 KB / 1.20 KB [========================================================] 0s
Copying config sha256:729d47af99c598ff34c2406fea47c9a0d373a537d98d772f8167456241e0edf3
 6.18 KB / 6.18 KB [========================================================] 0s
Writing manifest to image destination
Writing manifest to image destination
Storing signatures
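As a quick check that the copy landed, skopeo inspect can read the image metadata straight back out of Quay, again with no daemon involved. The hostname and token here are, of course, specific to my setup:

```shell
# Read back the manifest of the image we just pushed.
# --tls-verify=false matches the HTTP-only demo setup used above.
skopeo inspect --tls-verify=false --creds=\$app:<QUAY-TOKEN> \
    docker://kwaaaay.spectre.portalvein.io:80/rhel7/rhel
```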

If we now navigate back to the Quay UI, we'll find that the image has been pushed to Quay, and that Clair has already scanned it and it has passed all checks.

proof-01

Roundup

That's thankfully the end; thanks for taking the time to read.

All code snippets, systemd unit files, and the source text for this blog can be found on GitHub.