# The Singularity ecosystem

We've spent a lot of time on building and using your own containers so that you understand how Singularity works. Now let's talk more about the [Singularity Container Services](https://cloud.sylabs.io/home) and [Docker Hub](https://hub.docker.com/).

[Docker Hub](https://hub.docker.com/) hosts over 100,000 pre-built, ready-to-use containers, and the Container Library has a large and growing number of pre-built containers as well. We've already talked about pulling and building containers from Docker Hub and the Container Library, but there are more details you should be aware of.

## Tags and hashes

First, Docker Hub and the Container Library both have a concept of a tagged image. Tags make it convenient for developers to release several different versions of the same container. For instance, if you wanted to specify that you need Debian version 9, you could do so like this:

```
$ singularity pull library://debian:9
```

Or within a definition file:

```
Bootstrap: library
From: debian:9
```

The syntax for specifying a tagged container from Docker Hub is similar.
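
For example, pulling the same tagged Debian image from Docker Hub might look like this (a quick sketch; it assumes the `debian:9` tag is available on Docker Hub):

```
$ singularity pull docker://debian:9
```

Or within a definition file:

```
Bootstrap: docker
From: debian:9
```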

There is a _special_ tag in both the Container Library and Docker Hub called **latest**. If you omit the `:<tag>` suffix from your `pull` or `build` command or from within your definition file, you will get the container tagged with `latest` by default. This sometimes causes confusion when the `latest` tag doesn't exist for a particular container and an error is encountered; in that case a tag must be supplied.

Tags are not immutable and may change without warning. For instance, the `latest` tag is automatically assigned to the most recent build of a container on Docker Hub, so pulling by tag (or pulling `latest` by default) can deliver two different images from the same command run at different times. If you want to be sure you are pulling exactly the same container every time, you should pull by its hash. Continuing with our Debian 9 example, this will ensure that you get the same image even if the developers change the tag:

```
$ singularity pull library://debian:sha256.b92c7fdfcc6152b983deb9fde5a2d1083183998c11fb3ff3b89c0efc7b240448
```

## Default entities and collections

Let's think about this command:

```
$ singularity pull library://debian
```

When you run that command, Singularity supplies several default values to build a complete URI. This is what the full command actually looks like:

```
$ singularity pull library://library/default/debian:latest
```

The full URI follows the form `library://<entity>/<collection>/<container>:<tag>`: here the entity is `library`, the collection is `default`, the container is `debian`, and the tag is `latest`. If you try this shorthand version of the command with the `lolcow` container, you will find that it fails:

```
$ singularity pull library://lolcow
FATAL: While pulling library image: image lolcow:latest (amd64) does not exist in the library
```

There is no container called `lolcow` within the default entity (`library`) and default collection (`default`). For that container to work properly, you must supply the entity (`godlovedc`) and the collection (`funny`) like so:

```
$ singularity pull library://godlovedc/funny/lolcow
```

Similarly, when pulling from Docker Hub, some intelligent defaults are supplied. Consider the following command:

```
$ singularity pull docker://godlovedc/lolcow
```

When executed, this is the command that Singularity actually acts on:

```
$ singularity pull docker://index.docker.io/godlovedc/lolcow:latest
```

In this example the registry (`index.docker.io`) and the tag (`latest`) are implied. When downloading official images like Debian and Ubuntu, the user (`godlovedc` in the command above) can also be implied. These values may need to be supplied manually for some containers on Docker Hub or to download from different registries like Quay.io.
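
For example, a fully qualified pull from a different registry might look something like this (a sketch; `<user>`, `<container>`, and `<tag>` are placeholders, not a real repository):

```
$ singularity pull docker://quay.io/<user>/<container>:<tag>
```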

## Using trusted containers

When you build and/or run a container, you are running someone else's code on your system. Doing so comes with certain inherent security risks. The blog posts [here](https://medium.com/sylabs/cve-2019-5736-and-its-impact-on-singularity-containers-8c6272b4bce6) and [here](https://medium.com/sylabs/a-note-on-cve-2019-14271-running-untrusted-containers-as-root-is-still-a-bad-idea-245d227d4e02) provide some background on the kinds of security concerns containers can cause.

Container security is a large topic and we cannot cover all of its facets in this class, but here are a few general guidelines:

- Don't build containers from untrusted sources or run them as root
- Review the `runscript` before you run it
- Use the `--no-home` and `--containall` options when running an unfamiliar container (see the sketch after this list)
- Establish your level of trust with a container
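
For instance, a cautious first look at an unfamiliar image might go something like this (a sketch; `lolcow_latest.sif` is the image pulled in the earlier examples):

```
$ singularity inspect --runscript lolcow_latest.sif

$ singularity run --no-home --containall lolcow_latest.sif
```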

The last point, establishing your level of trust, is particularly important and can be accomplished in a few different ways.

### Docker Hub Official and Certified images

The Docker team works with upstream maintainers (like Canonical, CentOS, etc.) to create **Official** images. These images have been reviewed by humans, scanned for vulnerabilities, and approved. You can find more details [here](https://docs.docker.com/docker-hub/official_images/).

There is a series of steps that upstream maintainers can perform to produce **Certified** images. This includes following a standard of best practices and passing some baseline testing. You can find more details [here](https://docs.docker.com/docker-hub/publish/certify-images/).

### Signing and verifying Singularity images

Singularity gives image maintainers the ability to cryptographically sign images, and downstream users can use built-in tools to verify that these images are bit-for-bit reproductions of the originals. This removes any dependence on web infrastructure and prevents a specific type of time-of-check to time-of-use (TOCTOU) attack.

This model also differs from the Docker model of trust because the decision of whether or not to trust a particular image is left to the user and the maintainer. Sylabs does not "vouch" for a particular set of images the way that Docker does. It's up to users to obtain fingerprints from maintainers and to judge whether or not they trust a particular maintainer's image.

## Building and hosting your containers

Docker Hub allows you to save a Dockerfile (Docker's version of a Singularity definition file) to a GitHub repo and then link that repo to a Docker Hub repository. Every time a new commit is pushed to the GitHub repo, a new container is built on Docker Hub.

For instance, the [godlovedc/lolcow](https://hub.docker.com/repository/docker/godlovedc/lolcow) container is linked to the [GodloveD/lolcow](https://github.com/GodloveD/lolcow/blob/master/Dockerfile) repo on GitHub.

The [Singularity Remote Builder](https://cloud.sylabs.io/builder) offers a few different ways to build your containers. You can compose a definition file or drag-and-drop one using the web GUI. Or you can log in and create an access token, which allows you to do nifty things like searching the Cloud Library with the `search` command and building containers from the command line using the `--remote` option.

Here's a quick example. First, I'll use the `remote login` command to set up an access token:

```
$ singularity remote login SylabsCloud
INFO: Authenticating with remote: SylabsCloud
Generate an API Key at https://cloud.sylabs.io/auth/tokens, and paste here:
API Key:
INFO: API Key Verified!
```

I had to actually visit the website, create the token, and copy the text into the prompt (which does not echo to the screen).
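
If you want to check which remote endpoints are configured and which one is active, you can run `remote list` (just a quick sanity check; it isn't required for anything that follows):

```
$ singularity remote list
```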

Now I can search for users, collections, and containers like so:

```
$ singularity search wine
No users found for 'wine'

No collections found for 'wine'

Found 1 containers for 'wine'
  library://godloved/base/wine
    Tags: latest
```

And I can also use the `--remote` option to build my containers. Note that this **does not require root!**

```
$ cat alpine.def
Bootstrap: library
From: alpine

%post
    echo "Install stuff here"

$ singularity build --remote alpine.sif alpine.def
INFO: Remote "default" added.
INFO: Authenticating with remote: default
INFO: API Key Verified!
INFO: Remote "default" now in use.
INFO: Starting build...
INFO: Downloading library image
INFO: Running post scriptlet
Install stuff here
+ echo 'Install stuff here'
INFO: Creating SIF file...
INFO: Build complete: /tmp/image-302588342
WARNING: Skipping container verifying
 2.59 MiB / 2.59 MiB 100.00% 26.13 MiB/s 0s
INFO: Build complete: alpine.sif

$ ls alpine.sif
alpine.sif

$ singularity shell alpine.sif
Singularity> cat /etc/os-release
NAME="Alpine Linux"
ID=alpine
VERSION_ID=3.9.2
PRETTY_NAME="Alpine Linux v3.9"
HOME_URL="https://alpinelinux.org/"
BUG_REPORT_URL="https://bugs.alpinelinux.org/"
Singularity> exit
student@sing-class2:~$
```

The build happens transparently. Even though we are building in the cloud, it _looks_ like the container is built right here on our system, and it downloads automatically.

## Signing and sharing containers

You can generate a new PGP key pair with the `key newpair` command like so:

```
$ singularity key newpair
Enter your name (e.g., John Doe) : Class Admin
Enter your email address (e.g., john.doe@example.com) : class.admin@mymail.com
Enter optional comment (e.g., development keys) : This is an example key for a class
Enter a passphrase :
Retype your passphrase :
Would you like to push it to the keystore? [Y,n] y
Generating Entity and OpenPGP Key Pair... done
Key successfully pushed to: https://keys.sylabs.io
```

This lets you cryptographically sign the container you just created with the `sign` command:

```
$ singularity sign alpine.sif
Signing image: alpine.sif
Enter key passphrase :
Signature created and applied to alpine.sif
```

Then you can push it to the Container Library like so:

```
$ singularity push alpine.sif library://godloved/base/alpine:latest
INFO: Container is trusted - run 'singularity key list' to list your trusted keys
 2.59 MiB / 2.59 MiB [========================================================] 100.00% 10.72 MiB/s 0s
```

Then, when others `pull` the container, they can use the `verify` command to make sure that it has not been tampered with:

```
$ singularity verify alpine.sif
Container is signed by 1 key(s):

Verifying partition: FS:
73B905527AB1AA3929B6A736A47CBE85B37CB086
[LOCAL] Class Admin (This is an example key for a class) <class.admin@mymail.com>
[OK] Data integrity verified

INFO: Container verified: alpine.sif
```

---
**NOTE**

Anyone can sign a container, so just because a container is signed does not mean it should be trusted. Users must obtain the fingerprint associated with a given maintainer's key and compare it with the one displayed by the `verify` command to ensure that the container is authentic. After that, it is up to the user to decide whether they trust the maintainer.

---
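
For example, before trusting the image above, a user could look up the maintainer's public key on the keystore and compare its fingerprint with the one printed by `verify` (a sketch using the example key created earlier; the exact output will vary):

```
$ singularity key search class.admin@mymail.com
```

If the fingerprint returned by the keystore matches the one shown by `verify` (`73B905527AB1AA3929B6A736A47CBE85B37CB086` above), the container really was signed by that key; whether to trust the key's owner is still up to you.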