index.json · 106 lines (106 loc) · 85.5 KB
[
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/1-intro/",
"title": "The DevSecOps Workshop",
"tags": [],
"description": "",
"content": "Intro This is the storyline you'll follow:\n Create an application using the browser-based development environment Red Hat OpenShift Dev Spaces Set up the Inner Development Loop for the individual developer Use the CLI tool odo to create, push and change apps on the fly Set up the Outer Development Loop for the team CI/CD Learn to work with OpenShift Pipelines based on Tekton Use OpenShift GitOps based on ArgoCD Secure your app and OpenShift cluster with ACS Introduction to ACS Example use cases Add ACS scanning to the Tekton Pipeline What to Expect This workshop is for intermediate OpenShift users. A good understanding of how OpenShift works along with hands-on experience is expected. For example, we will not tell you how to log in to your cluster with oc, or tell you what it is… ;)\n We try to balance guided workshop steps and challenging you to use your knowledge to learn new skills. This means you'll get detailed step-by-step instructions for every new chapter/task; later on the guide will become less verbose and we'll weave in some challenges.\nWorkshop Environment To run this workshop you basically need a fresh and empty OpenShift 4.10 cluster with cluster-admin access. In addition you will be asked to use the oc command-line client for some tasks.\nAs Part of a Red Hat Workshop As part of the workshop you will be provided with freshly installed OpenShift 4.10 clusters. Depending on attendee numbers we might ask you to gather in teams. Some workshop tasks must be done only once per cluster (e.g. installing Operators), others, like deploying and securing the application, can be done by every team member separately in their own Project. This will be mentioned in the guide.\nYou'll get all access details for your lab cluster from the facilitators. This includes the URL to the OpenShift console and information about how to SSH into your bastion host to run oc if asked to.\nOn Your Own As there is no special setup for the OpenShift cluster, you should be able to run the workshop with any 4.10 cluster of your own. Just make sure you have cluster-admin privileges.\nThis workshop was tested with these versions:\n Red Hat OpenShift: 4.10.26 Red Hat Advanced Cluster Security for Kubernetes: 3.71.0 Red Hat OpenShift Dev Spaces: 3.1.0 Red Hat OpenShift Pipelines: 1.8.0 Red Hat OpenShift GitOps: 1.6.0 Red Hat Quay: 3.7.7 Red Hat Quay Bridge Operator: 3.7.7 Red Hat Data Foundation: 4.10.5 Gitea Operator: 1.3.0 Workshop Flow We'll tackle the topics at hand step by step, with an introduction covering the things worked on before every section.\nAnd finally a sprinkle of JavaScript magic You'll notice placeholders for cluster access details, mainly the part of the domain that is specific to your cluster. There are two options:\n Whenever you see the placeholder <DOMAIN>, replace it with the value for your environment. This is the part to the right of apps., e.g. for console-openshift-console.apps.cluster-t50z9.t50z9.sandbox4711.opentlc.com replace with cluster-t50z9.t50z9.sandbox4711.opentlc.com Use a query parameter in the URL of this lab guide to have all occurrences replaced automagically, e.g.: http://devsecops-workshop.github.io/?domain=cluster-t50z9.t50z9.sandbox4711.opentlc.com "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/2-prepare-cluster/",
"title": "Prepare Cluster",
"tags": [],
"description": "",
"content": "Cluster Preparation During this workshop you'll install and use a good number of software components. The first one is OpenShift Data Foundation for providing storage. We'll start with it because the install takes a fair amount of time. Number two is Gitea for providing Git services in your cluster, with more to follow in subsequent chapters.\nBut fear not, all are managed by Kubernetes Operators on OpenShift.\nInstall OpenShift Data Foundation Let's install OpenShift Data Foundation, which you might know under the old name OpenShift Container Storage. It is engineered as the data and storage services platform for OpenShift and provides software-defined storage for containers.\n Log in to the OpenShift web console with your cluster-admin credentials In the web console, go to Operators > OperatorHub and search for the OpenShift Data Foundation operator Click image to enlarge Install the operator with default settings After the operator has been installed it will inform you to install a StorageSystem. From the operator overview page click Create StorageSystem with the following settings:\n Backing storage: Leave Deployment type at Full deployment and for Backing storage type make sure gp2 is selected. Click Next Capacity and nodes: Leave the Requested capacity as is (2 TiB) and select all nodes. Click Next Security and network: Leave set to Default (SDN). Click Next You'll see a review of your settings, hit Create StorageSystem\nDon't worry if you see a 404 page. The ODF Operator has just extended the OpenShift console, which may not be available in your current view. Just reload the browser page once and you will see the System Overview\n Click image to enlarge As mentioned already this takes some time, so go ahead and install the other prerequisites. We'll come back later.\nInstall and Prepare Gitea We'll need Git repository services to keep our app and infrastructure source code, so let's just install trusted Gitea using an operator:\nGitea is an open-source Git server similar to GitHub. A team at Red Hat was so nice as to create an Operator for it. This is a good example of how you can integrate an operator into your catalog that is not part of the default OperatorHub already.\n To integrate the Gitea operator into your Operator catalog you need to access your cluster with the oc client. You can do this in two ways:\n If you don't already have the oc client installed, you can download the matching version for your operating system here Log in to the OpenShift web console with your cluster-admin credentials On the top right click on your username and then Copy login command to copy your login token On your local machine open a terminal and log in with the oc command you copied above Or, if working on a Red Hat RHPDS environment:\n Use the information provided to log in to your bastion host via SSH When logged in as lab-user you will be able to run oc commands without additional login. Now using oc add the Gitea Operator to your OpenShift OperatorHub catalog:\noc apply -f https://raw.githubusercontent.com/redhat-gpte-devopsautomation/gitea-operator/master/catalog_source.yaml In the web console, go to Operators > OperatorHub and search for Gitea (you may need to disable search filters) Install the Gitea Operator with default settings Create a new OpenShift project called git Go to Installed Operators > Gitea Operator and click on the Create Instance tile in the git project Click image to enlarge On the Create Gitea page switch to the YAML view and add the following spec values: spec: giteaAdminUser: gitea giteaAdminPassword: \"gitea\" giteaAdminEmail: opentlc-mgr@redhat.com Click Create After creation has finished:\n Access the route URL (you'll find it e.g. in Networking > Routes > repository > Location) This will take you to the Gitea web UI Sign in to Gitea with user gitea and password gitea If your Gitea UI appears in a language other than English (depending on your locale settings), switch it to English. Change the language in your Gitea UI, the screenshots below show the German UI: Click image to enlarge Click image to enlarge Now we will clone a Git repository of a sample application into our Gitea, so we have some code to work with\n Clone the example repo: Click the + dropdown and choose New Migration As type choose Git URL: https://github.com/devsecops-workshop/quarkus-build-options.git Click Migrate Repository In the cloned repository you'll find a devspaces_devfile.yml. We will need the URL to the file soon, so keep the tab open.\nCheck OpenShift Data Foundation (ODF) Storage Deployment Now it's time to check if the StorageSystem deployment from ODF completed successfully. In the OpenShift web console:\n Open Storage->Data Foundation On the overview page go to the Storage Systems tab Click ocs-storagecluster-storagesystem On the next page make sure the status indicators on the Block and File and Object tabs are green! Click image to enlarge Click image to enlarge Your container storage is ready to go, explore the information on the overview pages if you'd like.\nYour cluster is now prepared for the next step, proceed to the Inner Loop.\nArchitecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/3-inner-loop/",
"title": "Inner Loop",
"tags": [],
"description": "",
"content": "In this part of the workshop you'll experience how modern software development using the OpenShift tooling can be done in a fast, iterative way. Inner loop here means this is the way, sorry, process, for developers to try out new things and quickly change and test their code on OpenShift without having to build new images all the time or being a Kubernetes expert. Install and Prepare Red Hat OpenShift Dev Spaces OpenShift Dev Spaces is a browser-based IDE for cloud-native development. All the heavy lifting is done through a container running your workspace on OpenShift. All you really need is a laptop. You can easily set up and switch between customized environments, plugins, build tools and runtimes. So switching from one project context to another is as easy as switching websites. No more endless installation and configuration marathons on your dev laptop. It is already part of your OpenShift subscription. If you want to find out more, have a look here\n Install the Red Hat OpenShift Dev Spaces Operator from OperatorHub (not the previous CodeReady Workspaces versions!) with default settings Go to Installed Operators -> Red Hat OpenShift Dev Spaces and create a new instance (Red Hat OpenShift Dev Spaces instance Specification) using the default settings in the project openshift-operators Wait until the deployment has finished. This may take a couple of minutes as several components will be deployed. Once the instance status is ready (you can check the YAML of the instance: status > chePhase: Active), look up the devspaces Route in the openshift-workspaces namespace (you may need to toggle the Show default projects button). Open the link in a new browser tab, click on Log in with OpenShift and log in with your OCP credentials Allow selected permissions We could create a workspace from one of the templates that come with Dev Spaces, but we want to use a customized workspace with some additionally defined plugins in a v2 devfile in our Git repo. With devfiles you can share a complete workspace setup, and with the click of a link you will end up in a fully configured project in your browser.\n In the left menu click on Create Workspace Copy the raw URL of the devspaces_devfile.yml file in your Gitea repository by clicking on the file and then on the Raw button (or Originalversion in German). Paste the full URL into the Git Repo URL field and click Create & Open Click image to enlarge You'll get into the Creating a workspace … view, give the workspace containers some time to spin up. When your workspace has finally started, have a good look around in the UI. It should look familiar if you have ever worked with VSCode or similar IDEs.\nWhile working with Dev Spaces make sure you have ad blockers disabled, you are not on a VPN and you have a good internet connection to ensure a stable setup. If you are facing any issues try to reload the browser window. If that doesn't help, restart the workspace on the main Dev Spaces site under Workspaces via the menu entry Restart Workspace\n Clone the Quarkus Application Code As an example you'll create a new Java application. You don't need prior experience programming in Java as this will be kept really simple.\nWe will use a Java application based on the Quarkus stack. Quarkus enables you to create much smaller and faster containerized Java applications than ever before. You can even compile these apps to native Linux binaries that start blazingly fast. The app that we will use is just a starter sample created with the Quarkus generator, with a simple RESTful API that answers http requests. But at the end of the day this setup will work with any Java application. Fun fact: every OpenShift subscription already comes with a Quarkus subscription.\n Let's clone our project into our workspace:\n Bring up your OpenShift Dev Spaces in your browser In the center of the editor area click on Clone Git Repository …, then at the top enter the Git URL to your Gitea repo (you can copy the URL by clicking on the clipboard icon in Gitea) and press Enter In the following dialog Choose a folder to clone … navigate up 2 directories by clicking .. twice Select the folder projects Click the button OK In the following dialog, when asked how to open the code, click on Open The window will briefly reload and then you will be in the cloned project folder Access OpenShift and Create the Development Stage Project Now we want to create a new OpenShift project for our app:\n Open a terminal in your Dev Spaces IDE (in the top left 'hamburger' menu click on Terminal > New Terminal) The oc OpenShift CLI client is already installed and you are already logged in to the cluster So go ahead and create a new project workshop-dev oc new-project workshop-dev Use odo to Deploy and Update our Application odo or 'OpenShift do' is a CLI that enables developers to get started quickly with cloud-native app development without being a Kubernetes expert. It offers support for multiple runtimes and you can easily set up microservice components, push code changes into running containers and debug remotely with just a few simple commands. To find out more, have a look here\nodo is smart enough to figure out what programming language and frameworks you are using. So let's initialize our project\nodo init You can then opt in to telemetry (Y/n) A matching Quarkus devfile is found in the odo registry. Choose Y to download it You can select a container in which odo will be started. Hit Enter (None) As component name keep the suggestion. Hit Enter odo is now initialized for your app. Let's deploy the app to OpenShift in odo dev mode\nodo dev This will compile the app, start a pod in the OpenShift project and inject the app.\nThere will be a couple of popups in the bottom right corner\n \"A new process is listening …\" -> Choose Yes \"Redirect is not enabled …\" -> Click on Open in New Tab \"Do you want VS Code - Open Source to open an external website\" -> Choose Open A new tab will open and show the webpage of your app. You may have to wait and reload after a few seconds.\nTo test the app:\nYour app should show up as a simple web page. In the RESTEasy JAX-RS section click the @Path endpoint /hello to see the result.\nNow for the fun part: using odo you can just dynamically change your code and push it out again without doing a new image build! No dev magic involved:\n In your Dev Spaces workspace on the left, expand the file tree to open the file src/main/java/org/acme/GreetingResource.java and change the string \"Hello RESTEasy\" to \"Hello Workshop\" (Dev Spaces saves every edit directly. No need to save)\n And reload the app webpage.\n Bam! The change should be there in a matter of seconds\n Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/4-outer-loop/",
"title": "Outer Loop",
"tags": [],
"description": "",
"content": "Now that you have seen how a developer can quickly start to code using modern cloud-native tooling, it's time to learn how to proceed with the application towards production. The first step is to implement a CI/CD pipeline to automate new builds. Let's call this stage int for integration.\nInstall OpenShift Pipelines To create and run the build pipeline you'll use OpenShift Pipelines, based on project Tekton. The first step is to install it:\n Install the Red Hat OpenShift Pipelines Operator from OperatorHub with default settings Since the Pipelines assets are installed asynchronously, it is possible that the Pipeline Templates are not yet set up when proceeding immediately to the next step. So now is a good time to grab a coffee.\n Create App Deployment and Build Pipeline After installing the Operator create a new deployment of your game-changing application:\n Create a new OpenShift project workshop-int Switch to the OpenShift Developer Console Click the +Add menu entry to the left and choose Import from Git As Git Repo URL enter the clone URL for the quarkus-build-options repo in your Gitea instance (there might be a warning about the repo URL that you can ignore) Click Show advanced Git options and for Git reference enter master As Import Strategy select Builder Image As Builder Image select Java and openjdk-11-el7 / Red Hat OpenJDK 11 (RHEL 7) As Application Name enter workshop-app As Name enter workshop Check Add pipeline If you don't have the checkbox Add pipeline and get the message There are no pipeline templates available for Java and Deployment combination in the next step, then just give it a few more minutes and reload the page.\n Click Create In the main menu on the left, click on Pipelines and observe how the Tekton Pipeline is created and run. Install Red Hat Quay Container Registry The image that we have just deployed was pushed to the internal OpenShift registry, which is a great starting point for your cloud-native journey. But if you require more control over your image repos, a graphical UI, scalability, internal security scanning and the like, you may want to upgrade to Red Hat Quay. So as a next step we want to replace the internal registry with Quay.\nQuay installation is done through an operator, too:\n In Operators->OperatorHub filter for Quay Install the Red Hat Quay operator with default settings Create a new namespace quay In the namespace go to Administration->LimitRanges and delete the quay-core-resource-limits Click image to enlarge In the operator overview of the Quay Operator, on the Quay Registry tile click Create instance If the YAML view is shown, switch to Form view Make sure you are in the quay project Change the name to quay Click Create Click the new QuayRegistry, scroll down to Conditions and wait until the Available type changes to True Click image to enlarge Now that the registry is installed you have to configure a superuser:\n Make sure you are in the quay Project Go to Networking->Routes, access the Quay portal using the URL of the first route (quay-quay) Click Create Account As username put in quayadmin, a (fake) email address and a password. Click Create Account again In the OpenShift web console open Workloads->Secrets Search for quay-config-editor-credentials-..., open the secret and copy the values, you'll need them in a second. Go back to the Routes and open the quay-quay-config-editor route Log in with the values of the secret from above Click Sign in Scroll down to Access Settings As Super User put in quayadmin, click Validate Configuration Changes and after the validation click Reconfigure Quay Reconfiguring Quay takes some time. The easiest way to determine if it's finished is to open the Quay portal (using the quay-quay Route). At the upper right you'll see the username (quayadmin); if you click the username the drop-down should show a link Super User Admin Panel. When it shows up you can proceed.\n Click image to enlarge Integrate Quay as Registry into OpenShift To synchronize the internal default OpenShift registry with the Quay registry, Quay Bridge is used.\n In the OperatorHub of your cluster, search for the Quay Bridge Operator Install it with default settings While the Operator is installing, create a new Organization in Quay: Access the Quay portal In the top + menu click Create New Organization Name it openshift_integration Click Create Organization We need an OAuth Application in Quay for the integration:\n Again in the Quay portal, click the Applications icon in the menubar to the left Click Create New Application at the upper right Name it openshift, press Enter and open the new openshift item by clicking it In the menubar to the left click the Generate Token icon Check all boxes and click Generate Access Token Click image to enlarge In the next view click Authorize Application and confirm In the next view copy the Access Token and save it somewhere, we'll need it again Now we finally create a Quay Bridge instance. In the OpenShift web console make sure you are in the quay Project. Then:\n Create a new Secret\n Go to Workloads->Secrets and click Create->Key/value secret Secret name: quay-credentials Key: token Value: paste the Access Token you generated in the Quay portal before Click Create Go to the Red Hat Quay Bridge Operator overview (make sure you are in the quay namespace)\n On the Quay Integration tile click Create Instance\n Open Credentials secret Namespace containing the secret: quay Key within the secret: token Copy the Quay portal hostname (including https://) and paste it into the Quay Hostname field Set Insecure registry to true Click Create And you are done with the installation and integration of Quay as your registry! Test if the integration works:\n In the Quay portal you should see your OpenShift projects synced and represented as Quay Organizations, prefixed with openshift_ (you might have to reload the browser). E.g. there should be an openshift_git Quay Organization. In the OpenShift web console create a new test Project, make sure it's synced to Quay as an Organization and delete it again. Adjust the Pipeline to Deploy to Quay The current Pipeline deploys to the internal registry by default, so the image that was created by the first (automatic) run was pushed there.\nTo leverage our brand-new Quay registry we need to modify the Pipeline so it pushes images to the Quay registry. In addition the ImageStream must be modified to point to the Quay registry, too.\nCreate a new s2i-java ClusterTask The first thing is to create a new source-to-image Task that automatically updates the ImageStream to point to Quay. You could of course copy and modify the default s2i-java task using the built-in YAML editor of the web console. But to make this as painless as possible we have prepared the needed YAML object definition for you already.\n Log in to your bastion host via SSH, from here you can run oc commands Clone the Git repo with the YAML files you'll need: git clone https://github.com/devsecops-workshop/yaml.git Change into the yaml directory Apply the first YAML: oc create -f s2i-java-workshop.yml There is an issue with the delivered version of the Skopeo ClusterTask, so we will also import an updated version. This may not be necessary in the future. Apply the YAML: oc create -f skopeo-update.yml\n You can do the above steps from any Linux system where you set up the oc command.\n You should now have a new ClusterTask named s2i-java-workshop, go to the web console and check:\n Switch to the Administrator console Switch to the workshop-int Project Go to Pipelines->Tasks->ClusterTasks Search for the s2i-java-workshop ClusterTask and open it Switch to the YAML view Please take the time to review the additions to the default s2i-java task:\n In the params section are two new parameters: - default: '' description: The name of the ImageStream which should be updated name: IMAGESTREAM type: string - default: '' description: The Tag of the ImageStream which should be updated name: IMAGESTREAMTAG type: string At the end of the steps section is a new step: - env: - name: HOME value: /tekton/home image: 'image-registry.openshift-image-registry.svc:5000/openshift/cli:latest' name: update-image-stream resources: {} script: > #!/usr/bin/env bash oc tag --source=docker $(params.IMAGE) $(params.IMAGESTREAM):$(params.IMAGESTREAMTAG) --insecure securityContext: runAsNonRoot: true runAsUser: 65532 Modify the Pipeline After adding the new task we need to modify the pipeline to:\n Introduce the new parameters into the Pipeline configuration Use the new s2i-java-workshop task To make this easier we again provide you with a full YAML definition of the Pipeline. Do the following:\n Bring up the terminal and make sure you are in the yaml directory. If you use this lab guide with your domain as query parameter, you are good to go with the command; if not, you have to replace <DOMAIN> manually in the following command.\n To replace the DOMAIN placeholder with your lab domain, run: sed -i 's/DOMAIN/<DOMAIN>/g' workshop-pipeline-without-git-update.yml Apply the new definition: oc replace -f workshop-pipeline-without-git-update.yml Again take the time to review the changes in the web console:\n In the menu go to Pipelines->Pipelines Click the workshop Pipeline and switch to YAML These are the new parameters in the pipeline: - default: workshop name: IMAGESTREAM type: string - default: latest name: IMAGESTREAMTAG type: string The preexisting parameter IMAGE_NAME now points to your local Quay registry: - default: >- quay-quay-quay.apps.<DOMAIN>/openshift_workshop-int/workshop name: IMAGE_NAME type: string And finally the build task was modified:\n The new parameters in the params section of the build: tasks: - name: build params: [...] - name: IMAGESTREAM value: $(params.IMAGESTREAM) - name: IMAGESTREAMTAG value: $(params.IMAGESTREAMTAG) The name of the taskRef was changed to s2i-java-workshop: taskRef: kind: ClusterTask name: s2i-java-workshop You are done with adapting the Pipeline to use the Quay registry! Give it a try:\n First go to the Quay portal, to the openshift_workshop-int organization. In the openshift_workshop-int / workshop repository access Tags in the menu to the left. There should be no image (yet). Now it's time to configure and start the Pipeline. In the Pipelines view go to the top right menu and choose Actions -> Start. The Start Pipeline window opens. Before starting the actual pipeline we need to add a Secret so the pipeline can authenticate and push to the Quay repo:\n Switch to the Quay portal and click on the openshift_workshop-int / workshop repository On the left click on Settings Click on the openshift_workshop-int+builder robot account and copy the username and token Back in the Start Pipeline form, at the bottom, click on Show credential options and then Add secret Set these values Secret name: quay-workshop-int-token Access to: Image Registry Authentication type: Basic Authentication Server URL: quay-quay-quay.apps.<DOMAIN>/openshift_workshop-int (make sure to point to your cluster URL) Username: openshift_workshop-int+builder Password or token: the token you copied from the Quay robot account before … Then click on the checkmark below to add the secret The secret has just been added and will be mounted automatically every time the pipeline runs Hit Start Once the Pipeline run has finished, go to the Quay portal again and check the repository openshift_workshop-int/workshop again. Under Tags you should now see a new workshop image version that was just pushed by the pipeline.\nCongratulations: Quay is now a first-class citizen of your pipeline build strategy.\nCreate an ImageStream Tag with an Old Image Version Now your build pipeline has been set up and is ready. There is one more step in preparation of the security part of this workshop. We need a way to build and deploy from an older image with some security issues in it. For this we will add another ImageStream tag in the default Java ImageStream that points to an older version with a known issue in it.\n Using the Administrator view, switch to the project openshift and under Builds click on ImageStreams Search for and open the ImageStream java Switch to the YAML view and add the following snippet to the tags: section. Be careful to keep the needed indentation! - name: java-old-image annotations: description: Build and run Java applications using Maven and OpenJDK 8. iconClass: icon-rh-openjdk openshift.io/display-name: Red Hat OpenJDK 8 (UBI 8) sampleContextDir: undertow-servlet sampleRepo: \"https://github.com/jboss-openshift/openshift-quickstarts\" supports: \"java:8,java\" tags: \"builder,java,openjdk\" version: \"8\" from: kind: DockerImage name: \"registry.redhat.io/openjdk/openjdk-11-rhel7:1.10-1\" generation: 4 importPolicy: {} referencePolicy: type: Local This will add a tag java-old-image that points to an older version of the RHEL Java image. The image and its security vulnerabilities can be inspected in the Red Hat Software Catalog here\n Have a look at version 1.10-1 We will use this tag to test our security setup in a later chapter.\nCreate a new Project For the subsequent exercises we need a new project:\n Create a new OpenShift Project workshop-prod Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/5-gitops/",
"title": "Configure GitOps",
"tags": [],
"description": "",
"content": "Now that our CI/CD build and integration stage is ready we could promote the app version directly to a production stage. But with the help of the GitOps approach, we can leverage our Git System to handle promotion that is tracked through commits and can deploy and configure the whole production environment. This stage is just too critical to configure manually and without audit.\nInstall OpenShift GitOps So let\u0026rsquo;s start be installing the OpenShift GitOps Operator based on project ArgoCD.\n Install the Red Hat OpenShift GitOps Operator from OperatorHub with default settings The installation of the GitOps Operator will give you a clusterwide ArgoCD instance available at the link in the top right menu, but since we want to have an instance to manage just our prod namespaces we will create another ArgoCD in that specific namespace.\n You should already have created an OpenShift Project workshop-prod In the project workshop-prod click on Installed Operators and then Red Hat OpenShift GitOps. On the ArgoCD \u0026ldquo;tile\u0026rdquo; click on Create instance to create an ArgoCD instance in the workshop-prod project. Click image to enlarge Keep the settings as they are and click Create Prepare the GitOps Config Repository In Gitea create a New Migration and clone the Config GitOps Repo which will be the repository that contains our GitOps infrastructure components and state The URL is https://github.com/devsecops-workshop/openshift-gitops-getting-started.git Have quick look at the structure of this project :\napp - contains yamls for the deployment, service and route resources needed by our application. These will be applied to the cluster. There is also a kustomization.yaml defining that kustomize layers will be applied to all yamls\nenvironments/dev - contains the kustomization.yaml which will be modified by our builds with new Image versions. 
ArgoCD will pick up these changes and trigger new deployments.\nSetup GitOps Project Let\u0026rsquo;s set up the project that tells ArgoCD to watch our config repo and update resources in the workshop-prod project accordingly.\n Give namespace workshop-prod permissions to pull images from workshop-int oc policy add-role-to-user \\ system:image-puller system:serviceaccount:workshop-prod:default \\ --namespace=workshop-int Find the local ArgoCD URL (not the global instance) by going to Networking \u0026gt; Routes in namespace workshop-prod Open the ArgoCD website ignoring the certificate warning Don\u0026rsquo;t log in with OpenShift but with username and password User is admin and the password will be in Secret argocd-cluster ArgoCD works with the concept of Apps. We will create an App and point it to the Config Git Repo. ArgoCD will look for k8s yaml files in the repo and path and deploy them to the defined namespace. Additionally ArgoCD will also react to changes to the repo and reflect these to the namespace. You can also enable self-healing to prevent configuration drift. If you want to find out more about OpenShift GitOps, have a look here:\n Create App Click the Manage your applications icon on the left Click Create Application Application Name: workshop Project: default SYNC POLICY: Automatic Repository URL: Copy the URL of your config repo from Gitea Path: environments/dev Cluster URL: https://kubernetes.default.svc Namespace: workshop-prod Click Create Click on Sync and then Synchronize to manually trigger the first sync Click on the workshop application to show the deployment graph Watch the resources (Deployment, Service, Route) get rolled out to the namespace workshop-prod. Notice we have also scaled our app to 2 pods in the prod stage as we want some HA.\nOur complete prod stage is now configured and controlled through GitOps. But how do we tell ArgoCD that there is a new version of our app to deploy? 
Well, we will add a step to our build pipeline updating the config repo.\nAs we do not want to modify our original repo file we will use a tool called Kustomize that can add incremental change layers to YAML files. Since ArgoCD permanently watches this repo it will pick up these Kustomize changes.\nIt is also possible to update the repo with a Pull request. Then you have an approval process for your prod deployment.\n Add Kustomize and Git Push Tekton Task Let\u0026rsquo;s add a new custom Tekton task that can update the Image tag via Kustomize after the build and then push the change to our git config repo.\n In the namespace workshop-int switch to the Administrator Perspective and go to Pipelines \u0026gt; Tasks \u0026gt; Create Task Replace the YAML definition with the following and click Create: apiVersion: tekton.dev/v1beta1 kind: Task metadata: annotations: tekton.dev/pipelines.minVersion: 0.12.1 tekton.dev/tags: git name: git-update-deployment namespace: workshop-int labels: app.kubernetes.io/version: \u0026#34;0.1\u0026#34; operator.tekton.dev/provider-type: community spec: description: This Task can be used to update image digest in a Git repo using kustomize params: - name: GIT_REPOSITORY type: string - name: CURRENT_IMAGE type: string - name: NEW_IMAGE type: string - name: NEW_DIGEST type: string - name: KUSTOMIZATION_PATH type: string results: - description: The commit SHA name: commit steps: - image: \u0026#34;docker.io/alpine/git:v2.26.2\u0026#34; name: git-clone resources: {} script: | rm -rf git-update-digest-workdir git clone $(params.GIT_REPOSITORY) git-update-digest-workdir workingDir: $(workspaces.workspace.path) - image: \u0026#34;quay.io/wpernath/kustomize-ubi:latest\u0026#34; name: update-digest resources: {} script: \u0026gt; #!/usr/bin/env bash echo \u0026#34;Start\u0026#34; pwd cd git-update-digest-workdir/$(params.KUSTOMIZATION_PATH) pwd #echo \u0026#34;kustomize edit set image 
#$(params.CURRENT_IMAGE)=$(params.NEW_IMAGE)@$(params.NEW_DIGEST)\u0026#34; kustomize version kustomize edit set image $(params.CURRENT_IMAGE)=$(params.NEW_IMAGE)@$(params.NEW_DIGEST) echo \u0026#34;##########################\u0026#34; echo \u0026#34;### kustomization.yaml ###\u0026#34; echo \u0026#34;##########################\u0026#34; ls cat kustomization.yaml workingDir: $(workspaces.workspace.path) - image: \u0026#34;docker.io/alpine/git:v2.26.2\u0026#34; name: git-commit resources: {} script: \u0026gt; pwd cd git-update-digest-workdir git config user.email \u0026#34;tekton-pipelines-ci@redhat.com\u0026#34; git config user.name \u0026#34;tekton-pipelines-ci\u0026#34; git status git add $(params.KUSTOMIZATION_PATH)/kustomization.yaml # git commit -m \u0026#34;[$(context.pipelineRun.name)] Image digest updated\u0026#34; git status git commit -m \u0026#34;[ci] Image digest updated\u0026#34; git status git push RESULT_SHA=\u0026#34;$(git rev-parse HEAD | tr -d \u0026#39;\\n\u0026#39;)\u0026#34; EXIT_CODE=\u0026#34;$?\u0026#34; if [ \u0026#34;$EXIT_CODE\u0026#34; != 0 ] then exit $EXIT_CODE fi # Make sure we don\u0026#39;t add a trailing newline to the result! echo -n \u0026#34;$RESULT_SHA\u0026#34; \u0026gt; $(results.commit.path) workingDir: $(workspaces.workspace.path) workspaces: - description: The workspace consisting of maven project. name: workspace Add Tekton Tasks to your Pipeline to Promote your Image to workshop-prod So now we have a new Tekton Task in our catalog to update a GitOps Git repo, but we still need to promote the actual Image from our workshop-int to the workshop-prod project. Otherwise the image will not be available for our Deployment.\n Go to Pipelines \u0026gt; Pipelines \u0026gt; workshop and then YAML You can edit pipelines either directly in YAML or in the visual Pipeline Builder. 
We will see how to use the Builder later on so let\u0026rsquo;s edit the YAML for now.\n Add the new Task to your Pipeline by adding it to the YAML like this:\n First we will add a new Pipeline Parameter \u0026lsquo;GIT_CONFIG_REPO\u0026rsquo; at the beginning of the Pipeline and set it by default to our GitOps Config Repository (This will be updated by the Pipeline and then trigger ArgoCD to deploy to Prod) So in the YAML view at the end of the spec \u0026gt; params section add the following (if the \u0026lt;DOMAIN\u0026gt; placeholder hasn\u0026rsquo;t been replaced automatically, do it manually): - default: \u0026gt;- https://repository-git.apps.\u0026lt;DOMAIN\u0026gt;/gitea/openshift-gitops-getting-started.git name: GIT_CONFIG_REPO type: string Next insert the new tasks at the tasks level right after the deploy task We will map the Pipeline parameter GIT_CONFIG_REPO to the Task parameter GIT_REPOSITORY Make sure to fix indentation after pasting into the YAML! In the OpenShift YAML viewer/editor you can mark multiple lines and use Tab to indent these lines by one step.\n - name: skopeo-copy params: - name: srcImageURL value: \u0026#39;docker://$(params.QUAY_URL)/openshift_workshop-int/workshop:latest\u0026#39; - name: destImageURL value: \u0026#39;docker://$(params.QUAY_URL)/openshift_workshop-prod/workshop:latest\u0026#39; - name: srcTLSverify value: \u0026#39;false\u0026#39; - name: destTLSverify value: \u0026#39;false\u0026#39; runAfter: - build taskRef: kind: ClusterTask name: skopeo-copy-updated workspaces: - name: images-url workspace: workspace - name: git-update-deployment params: - name: GIT_REPOSITORY value: $(params.GIT_CONFIG_REPO) - name: CURRENT_IMAGE value: \u0026gt;- image-registry.openshift-image-registry.svc:5000/workshop-int/workshop:latest - name: NEW_IMAGE value: \u0026gt;- image-registry.openshift-image-registry.svc:5000/workshop-int/workshop - name: NEW_DIGEST value: $(tasks.build.results.IMAGE_DIGEST) - name: KUSTOMIZATION_PATH value: 
environments/dev runAfter: - skopeo-copy taskRef: kind: Task name: git-update-deployment workspaces: - name: workspace workspace: workspace The Pipeline should now look like this. Notice that the new tasks run in parallel to the deploy task\n Click image to enlarge Now the pipeline is set. The last thing we need is authentication against the Gitea repo and the workshop-prod Quay org. We will add those from the start pipeline form next. Make sure to replace the placeholder if required.\nUpdate our Prod Stage via Pipeline and GitOps Click on Pipeline Start\n In the form go down and expand Show credential options Click Add Secret, then enter Secret name : quay-workshop-prod-token Access to: Image Registry Authentication type: Basic Authentication Server URL: quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-prod Username: openshift_workshop-prod+builder Password : (Retrieve this from the Quay org openshift_workshop-int as before) Click the checkmark Then click Add Secret again Secret name : gitea-secret Access to: Git Server Authentication type: Basic Authentication Server URL: https://repository-git.apps.\u0026lt;DOMAIN\u0026gt;/gitea/openshift-gitops-getting-started.git Username: gitea Password : gitea Click the checkmark Run the pipeline by clicking Start and see that in your Gitea repo /environments/dev/kustomization.yaml is updated with the new image version Notice that the deploy and the git-update steps now run in parallel. This is one of the powers of Tekton. It can scale natively with pods on OpenShift.\n This will tell ArgoCD to update the Deployment with this new image version\n Check that the new image is rolled out (you may need to sync manually in ArgoCD to speed things up)\n Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/10-rhacs-setup/",
"title": "Install and Configure ACS",
"tags": [],
"description": "",
"content": "During the workshop you went through the OpenShift developer experience starting from software development using Quarkus and odo, moving on to automating build and deployment using Tekton pipelines and finally using GitOps for production deployments.\nNow it\u0026rsquo;s time to add another extremely important piece to the setup: enhancing application security in a containerized world, using a recent addition to the OpenShift portfolio: Red Hat Advanced Cluster Security for Kubernetes!\nInstall RHACS Install the Operator Install the \u0026ldquo;Advanced Cluster Security for Kubernetes\u0026rdquo; operator from OperatorHub:\n Switch Update approval to Manual Apart from this use the default settings Approve the installation when asked Red Hat recommends installing the Red Hat Advanced Cluster Security for Kubernetes Operator in the rhacs-operator namespace. This will happen by default.\n Install the main component Central You must install the ACS Central instance in its own project and not in the rhacs-operator and openshift-operator projects, or in any project in which you have installed the ACS Operator!\n Navigate to Operators → Installed Operators Select the ACS operator You should now be in the rhacs-operator project the Operator created. Create a new OpenShift Project for the Central instance: Select Project: rhacs-operator → Create project Create a new project called stackrox (Red Hat recommends using stackrox as the project name.) In the Operator view under Provided APIs on the tile Central click Create Instance Accept the name stackrox-central-services Adjust the memory limit of the central instance to 6Gi (Central Component Settings-\u0026gt;Resources-\u0026gt;Limits-\u0026gt;Memory). Click Create After deployment has finished (Status Conditions: Deployed, Initialized in the Operator view on the tab Central) it can take some time until the application is completely up and running. 
One easy way to check the state is to switch to the Developer console view at the upper left. Then make sure you are in the stackrox project and open the Topology map. You\u0026rsquo;ll see the three deployments of a Central instance:\n scanner-db scanner central Wait until all Pods have been scaled up properly.\nVerify the Installation\nSwitch to the Administrator console view again. Now to check the installation of your Central instance, access the ACS Portal:\n Look up the central-htpasswd secret that was created to get the password If you access the details of your Central instance in the Operator page you\u0026rsquo;ll find the complete commandline using oc to retrieve the password from the secret under Admin Credentials Info. Just sayin\u0026hellip; ;)\n Look up and access the route central which was also generated automatically. This will get you to the ACS Portal, accept the self-signed certificate and log in as user admin with the password from the secret.\nNow you have a Central instance that provides the following services in an RHACS setup:\n Central, the application management interface and services. It handles data persistence, API interactions, and user interface access. You can use the same Central instance to secure multiple OpenShift or Kubernetes clusters.\n Scanner, which is a vulnerability scanner for scanning container images. It analyzes all image layers to check known vulnerabilities from the Common Vulnerabilities and Exposures (CVEs) list. Scanner also identifies vulnerabilities in packages installed by package managers and in dependencies for multiple programming languages.\n To actually do and see anything you need to add a SecuredCluster (be it the same or another OpenShift cluster). 
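As mentioned above, the admin password lives in the central-htpasswd secret. Retrieving it on the command line can be sketched like this (the oc call needs a live cluster, so it is shown commented out; the base64 decode step is the part you can try anywhere):

```shell
# Retrieve the ACS admin password from the central-htpasswd secret (needs a cluster):
# oc -n stackrox get secret central-htpasswd -o jsonpath='{.data.password}' | base64 -d

# Secret values in Kubernetes are base64-encoded; the decode step works on any value:
decoded=$(echo 'cGFzc3dvcmQ=' | base64 -d)
echo "$decoded"   # -> password
```

The same pattern works for any key in any Secret; only the jsonpath expression changes.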
To see the effect go to the ACS Portal: the Dashboard should be pretty empty. Click on the Compliance link in the menu to the left; lots of zeros and empty panels, too.\nThis is because you don\u0026rsquo;t have a monitored and secured OpenShift cluster yet.\nPrepare to add Secured Clusters First you have to generate an init bundle which contains certificates and is used to authenticate a SecuredCluster to the Central instance, again regardless of whether it\u0026rsquo;s the same cluster as the Central instance or a remote/other cluster.\nIn the ACS Portal:\n Navigate to Platform Configuration → Integrations. Under the Authentication Tokens section, click on Cluster Init Bundle. Click Generate bundle Enter a name for the cluster init bundle and click Generate. Click Download Kubernetes Secret File to download the generated bundle. The init bundle needs to be applied on all OpenShift clusters you want to secure \u0026amp; monitor.\nPrepare the Secured Cluster For this workshop we run Central and SecuredCluster on one OpenShift cluster. I.e. we monitor and secure the same cluster the central services live on.\nApply the init bundle\n Use the oc command to log in to the OpenShift cluster as cluster-admin. The easiest way might be to use the Copy login command link from the UI Switch to the Project you installed ACS Central in, it should be stackrox. Run oc create -f \u0026lt;init_bundle\u0026gt;.yaml -n stackrox pointing to the init bundle you downloaded from the Central instance and the Project you created. 
This will create a number of secrets: secret/collector-tls created secret/sensor-tls created secret/admission-control-tls created Add the Cluster as SecuredCluster to ACS Central You are ready to install the SecuredCluster instance, this will deploy the secured cluster services:\n In the OpenShift Web Console go to the ACS Operator in Operators-\u0026gt;Installed Operators Using the Operator create an instance of the Secured Cluster type in the Project you created (should be stackrox) Change the Cluster Name for the cluster if you want, it\u0026rsquo;ll appear under this name in the ACS Portal And most importantly for Central Endpoint enter the address and port number of your Central instance, this is the same as the ACS Portal. If your ACS Portal is available at https://central-stackrox.apps.\u0026lt;DOMAIN\u0026gt; the endpoint is central-stackrox.apps.\u0026lt;DOMAIN\u0026gt;:443. Under Admission Control Settings make sure listenOnCreates, listenOnUpdates and listenOnEvents are enabled Set Contact Image Scanners to ScanIfMissing Under Collector Settings change the value for Collection from EBPF to KernelModule. This is a workaround for a known issue. Click Create Now go to your ACS Portal again, after a couple of minutes you should see your secured cluster under Platform Configuration-\u0026gt;Clusters. Wait until all Cluster Status indicators become green.\nConfigure Quay Integrations in ACS For ACS to be able to access images in your local Quay registry, one additional step has to be taken.\nAccess the ACS Portal and go to Platform Configuration -\u0026gt; Integrations -\u0026gt; Generic Docker Registry. 
You should see a number of autogenerated (from existing pull-secrets) entries.\nYou have to change the entry pointing to the local Quay registry, it should look like:\nAutogenerated https://quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt; ....\nOpen and edit the integration using the three dots at the right:\n Username: quayadmin Password: the password you entered when creating the quayadmin user Make sure Update stored credentials is checked Press the Test button to validate the connection Press Save when the test is successful. Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/11-rhacs-warmup/",
"title": "Getting to know ACS",
"tags": [],
"description": "",
"content": "Before we start to integrate Red Hat Advanced Cluster Security in our setup, you should become familiar with the basic concepts.\nACS Features ACS delivers on these security use cases:\n Vulnerability Management: Protect the software supply chain and prevent known vulnerabilities from being used as an entry point in your applications. Configuration Management: Leverage the OpenShift platform for declarative security to prevent or limit attacks, even in the presence of exploitable vulnerabilities. Network Segmentation: Using Kubernetes network policies in OpenShift, restrict open network paths for isolation and to prevent lateral movement by attackers. Risk Profiling: Prioritize applications and security risks automatically to focus investigation and mitigation efforts. Threat detection and incident response: Continuous observation and response in order to take action on attack-related activity, and to use observed behavior to inform mitigation efforts to harden security. Compliance: Making sure that industry and regulatory standards are being met in your OpenShift environments. UI Overview Click image to enlarge Dashboard: The dashboard serves as the security overview - helping the security team understand what the sources of risk are, categories of violations, and gaps in compliance. All of the elements are clickable for more information and categories are customizable.\n Top bar: Near the top, we see a condensed overview of the status. It provides insight into the status of clusters, nodes, violations and so on. 
The top bar provides links to Search, Command-line tools, Cluster Health, Documentation, API Reference, and the logged-in user account\n Left menus: The left-hand side menus provide navigation into each of the security use-cases, as well as product configuration to integrate with your existing tooling.\n Global Search: On every page throughout the UI, the global search allows you to search for any data that ACS tracks.\n Exploring the Security Use Cases Now start to explore the Security Use Cases ACS targets as provided in the left side menu.\n Network Graph:\n The Network Graph is a flow diagram, firewall diagram, and firewall rule builder in one. In the default view Active, the actual traffic for the Past Hour between the deployments in all namespaces is shown. Violations:\n Violations record all times where a policy criterion was triggered by any of the objects in your cluster - images, components, deployments, runtime activity. Compliance:\n The compliance reports gather information for configuration, industry standards, and best practices for container-based workloads running in OpenShift. Vulnerability Management:\n Vulnerability Management provides several important reports - where the vulnerabilities are, which are the most widespread or the most recent, where my images are coming from. In the upper right are buttons to link to all policies, CVEs, and images, and a menu to bring you to reports by cluster, namespace, deployment, and so on. Configuration Management:\n Configuration management provides visibility into a number of infrastructure components: clusters and nodes, namespaces and deployments, and Kubernetes systems like RBAC and secrets. Risk:\n The Risk view goes beyond the basics of vulnerabilities. It helps to understand how deployment configuration and runtime activity impact the likelihood of an exploit occurring and how successful those exploits will be. This list view shows all deployments, in all clusters and namespaces, ordered by Risk priority. 
Filters Most UI pages have a filters section at the top that allows you to narrow the view to matching or non-matching criteria. Almost all of the attributes that ACS gathers are filterable, try it out:\n Go to the Risk view Click in the Filters Bar Start typing Process Name and select the Process Name key Type java and press enter; click away to get the filters dropdown to clear You should see your deployment that has been “seen” running Java since it started Try another one: limit the filter to your Project namespace only Note the Create Policy button. It can be used to create a policy from the search filter to automatically identify these criteria. System Policies The system policies are the foundation of ACS, so have a good look around:\n Navigate to the Policy Management section from Platform Configuration in the left side menu. You will get an overview of the Built-in Policies All of the policies that ship with the product are designed with the goal of providing targeted remediation that improves security hardening. You’ll see this list contains many Build and Deploy time policies to catch misconfigurations early in the pipeline, but also Runtime policies. These policies come from us at Red Hat - our expertise, our interpretation of industry best practice, and our interpretation of common compliance standards, but you can modify them or create your own. By default only some policies are enforced. If you want to get an overview of which ones, you can use the filter view introduced above. Use Enforcement as filter key and FAIL_BUILD_ENFORCEMENT as value.\n Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/12-create-policy/",
"title": "Create a Custom Security Policy",
"tags": [],
"description": "",
"content": "Objective You should have one or more pipelines to build your application from the first workshop part; now we want to secure the build and deployment of it. For the sake of this workshop we\u0026rsquo;ll take a somewhat simplified use case:\nWe want to scan our application image for the Red Hat Security Advisory RHSA-2021:4904 concerning openssl-lib.\nIf this RHSA is found in an image we don\u0026rsquo;t want to deploy the application using it.\nThese are the steps you will go through:\n Create a custom Security Policy to check for the advisory Test if the policy is triggered in non-enforcing mode with an older image version that contains the issue and then with a newer version with the issue fixed. The final goal is to integrate the policy into the build pipeline Create a Custom System Policy First create the system policy. In the ACS Portal do the following:\n Platform Configuration-\u0026gt;Policy Management-\u0026gt;Create policy Policy Details Name: Workshop RHSA-2021:4904 Severity: Critical Categories: Workshop This will create a new Category if it doesn\u0026rsquo;t exist Click Next Policy Behaviour Lifecycle Stages: Build, Deploy Response method: Inform Click Next Policy Criteria Find the CVE policy criterion under Drag out policy fields in Image contents Drag \u0026amp; drop it on the drop zone of Policy Section 1 Put RHSA-2021:4904 into the CVE identifier field Click Next Policy Scope You could limit the scope the policy is applied in; do nothing for now. Review Policy Have a quick look around, if the policy would create a violation you get a preview here. Click Save Click image to enlarge Test the Policy Start the pipeline with the affected image version:\n In the OpenShift Web Console go to the Pipeline in your workshop-int project, start it and set Version to java-old-image (Remember how we set up this ImageStream tag to point to an old and vulnerable version of the image?) 
In the ACS Portal follow the Violations view To make it easier to spot the violations for this deployment you can filter the list by entering namespace and then workshop-int in the filter bar.\n Expected result: You\u0026rsquo;ll see the build deployments (Quarkus-Build-Options-Git-Gsklhg-Build-...) come and go when they are finished. When the final build is deployed you\u0026rsquo;ll see a violation in the ACS Portal for policy Workshop RHSA-2021:4904 (Check the Time of the violation) There will be other policy violations listed, triggered by default policies, have a look around. Note that none of the policies is enforced yet (which would stop the pipeline build)!\n Now start the pipeline with the fixed image version that doesn\u0026rsquo;t contain the CVE anymore:\n Start the pipeline again but this time leave the Java Version as is (openjdk-11-el7). Follow the Violations in the ACS Portal Expected result: You\u0026rsquo;ll see the build deployments come up and go When the final build is deployed you\u0026rsquo;ll see the policy violation for Workshop RHSA-2021:4904 for your deployment is gone because the image no longer contains it. This shows how ACS automatically scans images against all enabled policies when they become active. But we don\u0026rsquo;t want to just admire a violation after the image has been deployed, we want to prevent the deployment at build time! So the next step is to integrate the check into the build pipeline and enforce it (don\u0026rsquo;t deploy the application).\nArchitecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/13-rhacs-pipeline/",
"title": "Integrating ACS into the Pipeline",
"tags": [],
"description": "",
"content": "Finally: Putting the Sec in DevSecOps! There are basically two ways to interface with ACS. The UI, which focuses on the needs of the security team, and a separate \u0026ldquo;interface\u0026rdquo; for developers to integrate into their existing toolset (CI/CD pipeline, consoles, ticketing systems etc): The roxctl commandline tool. This way ACS provides a familiar interface to understand and address issues that the security team considers important.\n ACS policies can act during the CI/CD pipeline to identify security risk in images before they are run as a container.\nIntegrate Image Scan into the Pipeline You should have created and built a custom policy in ACS and tested it for triggering violations. Now you will integrate it into the build pipeline.\nLet\u0026rsquo;s go: Prepare roxctl Build-time policies require the use of the roxctl command-line tool which is available for download from the ACS Central UI, in the upper right corner of the dashboard. Roxctl needs to authenticate to ACS Central to do anything. It can use either username and password or API tokens to authenticate against Central. It\u0026rsquo;s good practice to use a token so that\u0026rsquo;s what we\u0026rsquo;ll do.\nCreate the roxctl token In the ACS portal:\n Navigate to Platform Configuration \u0026gt; Integrations. Scroll down to the Authentication Tokens category, and select API Token. Click Generate Token. Enter the name pipeline for the token and select the role Admin. Select Generate Save the contents of the token somewhere! 
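The token will be consumed from an OpenShift Secret in the next step; as a sketch, the equivalent manifest would look like this (a hypothetical example, values are placeholders you must replace):

```yaml
# Hypothetical manifest equivalent of the roxsecrets Secret used by the scan task
apiVersion: v1
kind: Secret
metadata:
  name: roxsecrets
  namespace: workshop-int
type: Opaque
stringData:
  rox_central_endpoint: central-stackrox.apps.<DOMAIN>:443
  rox_api_token: <the API token you generated>
```
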
Create OCP secret with token Change to the OpenShift Web Console and create a secret with the API token in the project your pipeline lives in:\n In the UI switch to your workshop-int Project Create a new key/value Secret named roxsecrets Enter these key/value pairs into the secret: rox_central_endpoint: \u0026lt;the URL to your ACS Portal\u0026gt; If the DOMAIN placeholder was automatically replaced it should be: central-stackrox.apps.\u0026lt;DOMAIN\u0026gt;:443 If not, replace it manually with your DOMAIN rox_api_token: \u0026lt;the API token you generated\u0026gt; Even if the form says Drag and drop file with your value here\u0026hellip; you can just paste the text.\n Remove ImageStream Change Trigger There is one more thing you have to do before integrating the image scanning into your build pipeline: When you created your deployment, a trigger was automatically added that will deploy a new version when the image referenced by the ImageStream changes.\nThis is not what we want, because this way a newly built image would be deployed into a running container even if the roxctl scan finds a policy violation and terminates the pipeline.\nHave a look for yourself:\n In the OCP console go to Workloads-\u0026gt;Deployments and open the workshop deployment Switch to the YAML view Near the top under annotations (around lines 11-12) you\u0026rsquo;ll find an annotation image.openshift.io/triggers. 
Remove exactly these two lines and click Save:\nimage.openshift.io/triggers: \u0026gt;- [{\u0026#34;from\u0026#34;:{\u0026#34;kind\u0026#34;:\u0026#34;ImageStreamTag\u0026#34;,\u0026#34;name\u0026#34;:\u0026#34;workshop2:latest\u0026#34;,\u0026#34;namespace\u0026#34;:\u0026#34;workshop-int\u0026#34;},\u0026#34;fieldPath\u0026#34;:\u0026#34;spec.template.spec.containers[?(@.name==\\\u0026#34;workshop2\\\u0026#34;)].image\u0026#34;,\u0026#34;pause\u0026#34;:\u0026#34;false\u0026#34;}] This way we made sure that a new image won\u0026rsquo;t be deployed automatically \u0026ldquo;outside\u0026rdquo; of a pipeline run.\nCreate a Scan Task You are now ready to create a new pipeline task that will use roxctl to scan the image built in your pipeline before the deploy step:\n In the OCP UI, make sure you are still in the project with your pipeline and the secret roxsecrets Go to Pipelines-\u0026gt;Tasks Click Create-\u0026gt; ClusterTask Replace the YAML displayed with this: apiVersion: tekton.dev/v1beta1 kind: ClusterTask metadata: name: rox-image-check spec: params: - description: \u0026gt;- Secret containing the address:port tuple for StackRox Central (example - rox.stackrox.io:443) name: rox_central_endpoint type: string - description: Secret containing the StackRox API token with CI permissions name: rox_api_token type: string - description: \u0026#34;Full name of image to scan (example -- gcr.io/rox/sample:5.0-rc1)\u0026#34; name: image type: string - description: Use image digest result from s2i-java build task name: image_digest type: string results: - description: Output of `roxctl image check` name: check_output steps: - env: - name: ROX_API_TOKEN valueFrom: secretKeyRef: key: rox_api_token name: $(params.rox_api_token) - name: ROX_CENTRAL_ENDPOINT valueFrom: secretKeyRef: key: rox_central_endpoint name: $(params.rox_central_endpoint) image: registry.access.redhat.com/ubi8/ubi-minimal:latest name: rox-image-check resources: {} script: \u0026gt; #!/usr/bin/env bash 
set +x curl -k -L -H \u0026#34;Authorization: Bearer $ROX_API_TOKEN\u0026#34; https://$ROX_CENTRAL_ENDPOINT/api/cli/download/roxctl-linux --output ./roxctl \u0026gt; /dev/null; echo \u0026#34;Getting roxctl\u0026#34; chmod +x ./roxctl \u0026gt; /dev/null ./roxctl image check -c Workshop --insecure-skip-tls-verify -e $ROX_CENTRAL_ENDPOINT --image $(params.image)@$(params.image_digest) Take your time to understand the Tekton task definition:\n First some parameters are defined, it\u0026rsquo;s important to understand some of these are taken from or depend on the build task that ran before. The script action pulls the roxctl binary into the pipeline workspace so you\u0026rsquo;ll always have a version compatible with your ACS version. The most important bit is the roxctl execution, of course: it executes the image check command, checks only against policies from the category Workshop that was created above (this way you can check against a subset of policies!) and defines the image to check and its digest Add the Task to the Pipeline Now add the rox-image-check task to your pipeline between the build and deploy steps.\n In the Pipelines view of your project click the three dots to the right and then Edit Pipeline Remember how we edited the pipeline directly in yaml before? 
OpenShift comes with a graphical Pipeline editor that we will use this time.\n Hover your mouse over the build task and click the + at the right side of it, to add a task\n This will open a task selector where you can choose your rox-image-check task and double-click it to add it to the pipeline\n To add the required parameters from the pipeline for the task, click the rox-image-check task.\n A form with the parameters will open, fill it in:\n rox_central_endpoint: roxsecrets rox_api_token: roxsecrets image: quay-quay-quay.apps.\u0026lt;DOMAIN\u0026gt;/openshift_workshop-int/workshop (if the DOMAIN placeholder hasn\u0026rsquo;t been replaced automatically, do it manually) Adapt the Project name if you changed it image_digest: $(tasks.build.results.IMAGE_DIGEST)\n This variable takes the result of the build task and uses it in the scan task. Click Save\n Click image to enlarge Test the Scan Task With our custom System Policy still not set to enforce, we are first going to test the pipeline integration. Go to Pipelines and next to your pipeline click on the three dots and then Start. Now in the pipeline start form enter java-old-image in the Version field.\n Expected Result: The rox-image-check task should succeed, but if you have a look at the output (click the task in the visual representation) you should see that the build violated our policy! Enforce the Policy The last step is to enforce the System Policy. If the policy is violated the pipeline should be stopped and the application should not be deployed.\n Edit your custom System Policy Workshop RHSA-2021:4904 in the ACS Portal, set Response Method to Inform and enforce and then switch on Build and Deploy below. Run the pipeline again, first with Version java-old-image and then with Version openjdk-11-el7 (default) Expected results: We are sure you know by now what to expect! The pipeline should fail with the old image version and succeed with the latest image version! 
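The parameter chaining above is the crux of the integration: because image_digest is taken from the build task's result, the scan is pinned to the exact image that was just built rather than to a mutable tag. A minimal sketch of the reference the task hands to roxctl (the registry host and digest below are made-up placeholder values, not taken from your cluster):

```shell
# Hypothetical stand-ins for $(params.image) and $(params.image_digest) --
# in the pipeline these values come from the build task's result.
IMAGE="quay-quay-quay.apps.example.com/openshift_workshop-int/workshop"
DIGEST="sha256:0123456789abcdef"

# Checking IMAGE@DIGEST guarantees roxctl inspects exactly the artifact
# the build step produced, even if the tag is moved later:
echo "roxctl image check -c Workshop -e \$ROX_CENTRAL_ENDPOINT --image ${IMAGE}@${DIGEST}"
```

The -c Workshop flag is what restricts the check to the policy category created earlier.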
Make sure you run the pipeline once, otherwise your application will not have a valid image tag when you kill the running pod in the next chapter Click image to enlarge Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/15-runtime-security/",
"title": "Securing Runtime Events",
"tags": [],
"description": "",
"content": "So far you\u0026rsquo;ve seen how ACS can handle security issues concerning the Build and Deploy stages. But ACS is also able to detect and secure container runtime behaviour. Let\u0026rsquo;s have a look\u0026hellip;\nHandling Security Issues at Runtime As a scenario, let\u0026rsquo;s assume you want to protect container workloads against attackers who are trying to install software. ACS comes with pre-configured policies for Ubuntu and Red Hat-based containers to detect if a package management tool is installed; this can be used in the Build and Deploy stages:\n Red Hat Package Manager in Image And, more importantly for this section about runtime security, a policy to detect the execution of a package manager as a runtime violation, using Kernel instrumentation:\n Red Hat Package Manager Execution In the ACS Portal, go to Platform Configuration-\u0026gt;Policy Management, search for the policies by e.g. typing policy and then red hat into the filter. Open the policy detail view by clicking a policy and have a look at what they do.\nYou can use the included policies as they are, but you can always e.g. clone and adapt them to your needs or write completely new ones.\n As you can see, the Red Hat Package Manager Execution policy will alert as soon as a process named rpm, dnf or yum is executed.\nLike most included policies it is not set to enforce!\n Test the Runtime Policy To see what the alert looks like, we have to trigger the condition:\n You should have a namespace with your Quarkus application running In the OpenShift Web Console navigate to the pod and open a terminal into the container Run yum search test Go to the Violations view in the ACS Portal. You should see a violation of the policy; if you click it, you\u0026rsquo;ll get the details. 
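If you prefer the CLI over the web terminal, the same condition can be triggered with oc exec — a sketch only, and the project and deployment names here are examples, so substitute your own:

```shell
# Example names -- replace project and deployment with your own
oc -n my-quarkus-project exec deploy/my-quarkus-app -- yum search test
```

Either way, the process execution is picked up by the runtime policy and shows up under Violations.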
Run several yum commands in the terminal and check back with the Violations view: As long as you stay in the same deployment, there won\u0026rsquo;t be a new violation, but you will see every new violation of the same type listed in the details. Enforce Runtime Protection But the real fun starts when you enforce the policy. Using the included policy, it\u0026rsquo;s easy to just \u0026ldquo;switch it on\u0026rdquo;:\n In the ACS Portal bring up the Red Hat Package Manager Execution Policy again. Click the Edit Policy button in the Actions drop-down to the upper right. Click Next until you arrive at the Policy behaviour page. Under Response Method select Inform and enforce Set Configure enforcement behaviour for Runtime to Enforce on Runtime Click Next until you arrive at the last page and click Save Now trigger the policy again by opening a terminal into the pod in the OpenShift Web Console and executing yum. See what happens:\n Runtime enforcement will kill the pod immediately (via k8s). OpenShift will scale it up again automatically. This is to be expected and allows you to contain a potential compromise while not causing a production outage. Architecture recap Click image to enlarge "
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/16-acm/",
"title": "Advanced Cluster Management",
"tags": [],
"description": "",
"content": "Advanced Cluster Management Overview Red Hat Advanced Cluster Management for Kubernetes (ACM) provides management, visibility and control for your OpenShift and Kubernetes environments. It provides management capabilities for:\n cluster creation application lifecycle security and compliance All across hybrid cloud environments.\nClusters and applications are visible and managed from a single console, with built-in security policies. Run your operations from anywhere that Red Hat OpenShift runs, and manage any Kubernetes cluster in your fleet.\nInstall Advanced Cluster Management Before you can start using ACM, you have to install it using an Operator on your OpenShift cluster.\n Log in to the OpenShift Web Console with your cluster admin credentials In the Web Console, go to Operators \u0026gt; OperatorHub and search for the Advanced Cluster Management for Kubernetes operator. Install the operator with default settings It will install into a new Project open-cluster-management by default. After the operator has been installed it will inform you to create a MultiClusterHub, the central component of ACM.\n Click image to enlarge Click the Create MultiClusterHub button and have a look at the available installation parameters, but don\u0026rsquo;t change anything.\nClick Create.\nAt some point you will be asked to refresh the web console. Do this and you\u0026rsquo;ll notice a new drop-down menu at the top of the left menu bar. 
If left set to local-cluster you get the standard console view, switching to All Clusters takes you to a view provided by ACM covering all your clusters.\nOkay, right now you\u0026rsquo;ll only see one, your local-cluster, listed here.\nA first look at Advanced Cluster Management Now let\u0026rsquo;s change to the full ACM console:\n Switch back to the local-cluster view Go to Operators-\u0026gt;Installed operators and click the Advanced Cluster Management for Kubernetes operator In the operator overview page choose the MultiClusterHub tab. The multiclusterhub instance you deployed should be in Status Running by now. Look up the route for the multicloud-console and access it. Click the Log in with OpenShift button and log in with your OpenShift account. You are now in your ACM dashboard!\n Click image to enlarge Have a look around:\n Go to Infrastructure-\u0026gt;Clusters You\u0026rsquo;ll see your lab OpenShift cluster here, the infrastructure it\u0026rsquo;s running on and the version. There might be a version update available, don\u0026rsquo;t run it please\u0026hellip; ;) If you click the cluster name, you\u0026rsquo;ll get even more information, explore! Manage Cluster Lifecycle One of the main features of Advanced Cluster Management is cluster lifecycle management. 
ACM can help to:\n manage credentials deploy clusters to different cloud providers and on-premises import existing clusters use labels on clusters for management purposes Let\u0026rsquo;s give this a try!\nDeploy an OpenShift Cluster Okay, to not overstress our cloud resources and for the fun of it we\u0026rsquo;ll deploy a Single Node OpenShift to the same AWS account your lab cluster is running in.\nCreate Cloud Credentials The first step is to create credentials in ACM to deploy to the Amazon Web Services account.\nYou\u0026rsquo;ll get the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY needed to deploy to AWS from your facilitators.\n On your OpenShift cluster, create a new namespace sno In the ACM web console, navigate to Credentials and click Add credential: As Credential type select AWS Credential name: sno Namespace: Choose the sno namespace Base DNS domain: sandbox\u0026lt;NNNN\u0026gt;.opentlc.com, replace \u0026lt;NNNN\u0026gt; with your id, you can find it e.g. in the URL Click Next Now you need to enter the AWS credentials, enter the Access key ID and Secret access key as provided. Click Next Click Next again for proxy settings Now you need to enter an OpenShift Pull Secret, copy it from your OCP cluster: Switch to the project openshift-config and copy the content of the secret pull-secret To connect to the managed SNO you need to enter an SSH private key ($HOME/.ssh/\u0026lt;LABID\u0026gt;key.pem) and public key ($HOME/.ssh/\u0026lt;LABID\u0026gt;key.pub). Use the respective keys from your lab environment\u0026rsquo;s bastion host, the access details will be provided. The \u0026lt;LABID\u0026gt; can be found in the URL, e.g. 
multicloud-console.apps.cluster-z48z9.z48z9.sandbox910.opentlc.com Click Next Click Add You have created a new set of credentials to deploy to the AWS account you are using.\nDeploy Single Node OpenShift Now you\u0026rsquo;ll deploy a new OpenShift instance:\n In the ACM console, navigate to Infrastructure -\u0026gt; Clusters and click Create cluster. As provider choose Amazon Web Services Infrastructure provider credential: Select the sno credential you created. Cluster name: aws-sno Cluster set: Leave empty \u0026hellip; Base DNS Domain: Set automatically from the credentials Release name: Use the latest 4.10 release available Additional Label: sno=true Click Next On the Node pools view leave the Region set to us-east-1 Architecture: amd64 Expand Control plane pool read the information for Zones (and leave the setting empty) change Instance Type to m5.2xlarge. Expand Worker pool 1: Set Node count to 0 (we want a single node OCP\u0026hellip;). Click Next Have a look at the network screen but don\u0026rsquo;t change anything Now click Next until you arrive at the Review. Do the following:\n Set YAML: On In the cluster YAML editor select the install-config tab In the controlPlane section change the replicas field to 1. It\u0026rsquo;s time to deploy your cluster, click Create!\nACM monitors the installation of the new cluster and finally imports it. Click View logs under Cluster install to follow the installation log.\nInstallation of an SNO takes around 30 minutes in our lab environment.\n After installation has finished, access the Clusters section in the ACM portal again.\n Click image to enlarge Explore the information ACM is providing, including the Console URL and the access credentials of your shiny new SNO instance. Use them to log in to the SNO console.\nApplication Lifecycle Management In the previous lab, you explored the Cluster Lifecycle functionality of RHACM by deploying a new OpenShift single-node instance to AWS. 
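The two sizing changes made during cluster creation (the worker Node count in the form and the replicas edit in the YAML editor) correspond to this install-config fragment — an excerpt only, with all other generated fields left untouched:

```yaml
# install-config excerpt for Single Node OpenShift
controlPlane:
  replicas: 1      # edited in the YAML editor: one combined control-plane/worker node
compute:
- name: worker
  replicas: 0      # "Node count: 0" from the Worker pool form
```

With no separate workers, the single control-plane node is also schedulable for workloads.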
Now let\u0026rsquo;s have a look at another capability, Application Lifecycle management.\nApplication Lifecycle management is used to manage applications on your clusters. This allows you to define a single or multi-cluster application using Kubernetes specifications, but with additional automation of the deployment and lifecycle management of resources to individual clusters. An application designed to run on a single cluster is straightforward and something you ought to be familiar with from working with OpenShift fundamentals. A multi-cluster application allows you to orchestrate the deployment of these same resources to multiple clusters, based on a set of rules you define for which clusters run the application components.\nThe naming of the different components of the Application Lifecycle model in RHACM is as follows:\n Channel: Defines a place where deployable resources are stored, such as an object store, Kubernetes namespace, Helm repository, or GitHub repository. Subscription: Definitions that identify deployable resources available in a Channel resource that are to be deployed to a target cluster. PlacementRule: Defines the target clusters where subscriptions deploy and maintain the application. It is composed of Kubernetes resources identified by the Subscription resource and pulled from the location defined in the Channel resource. Application: A way to group the components here into a more easily viewable single resource. An Application resource typically references a Subscription resource. Creating a Simple Application with ACM Start with adding labels to your two OpenShift clusters in your ACM console:\n On the local cluster add a label: environment=prod On the new SNO deployment add label: environment=dev Click image to enlarge Now it\u0026rsquo;s time to actually deploy the application. 
But first have a look at the manifest definitions ACM will use as deployables at https://github.com/devsecops-workshop/book-import/tree/master/book-import.\nThen in the ACM console navigate to Applications:\n Click Create application, select Subscription Make sure the view is set to YAML Name: book-import Namespace: book-import Under Repository location for resources -\u0026gt; Repository types, select GIT URL: https://github.com/devsecops-workshop/book-import.git Branch: master Path: book-import Select Deploy application resources only on clusters matching specified labels Label: environment Value: dev Click image to enlarge Click Create, after a few minutes you will see the application available in ACM. Click the application and have a look at the topology view:\n Click image to enlarge Select Cluster, the application should have been deployed to the SNO cluster because of the label environment=dev Select the Route and click on the URL, this should take you to the Book Import application Explore the other objects Now edit the application in the ACM console and change the label to environment=prod. What happens?\nIn this simple example you have seen how to deploy an application to an OpenShift cluster using ACM. All manifests defining the application were kept in a Git repo; ACM then used the manifests to deploy the required objects into the target cluster.\nPre/Post Tasks with Ansible Automation Platform 2 You can integrate Ansible Automation Platform and the Automation Controller (formerly known as Ansible Tower) with ACM to perform pre / post tasks within the application lifecycle engine. The prehook and posthook tasks allow you to trigger an Ansible playbook before and after the application is deployed, respectively.\nInstall Automation Controller To give this a try you need an Automation Controller instance. 
So let\u0026rsquo;s deploy one on your cluster using the AAP Operator:\n In OperatorHub search for the Ansible Automation Platform operator and install it using the default settings. After installation has finished create an Automation Controller instance using the Operator, name it automationcontroller When the instance is ready, look up the automationcontroller-admin-password secret Then look up the automationcontroller route, access it and log in as user admin using the password from the secret Apply a manifest or use the username/password login to the Red Hat Customer Portal and add a subscription You are now set with a shiny new Ansible Automation Platform Controller!\nAdd Auth Token In the Automation Controller web UI, generate a token for the admin user:\n Go to Users Click admin and select Tokens Click the Add button As description add Token for use by ACM Update the Scope to Write Click Save Save the token value to a text file, you will need this token later!\nConfigure Template in Automation Controller For Automation Controller to run something we must configure a Project and a Template first.\nCreate an Ansible Project:\n Select Projects in the left menu Click Add Name: ACM Test Organization: Default SCM Type: Git SCM URL: https://github.com/devsecops-workshop/ansible-acm.git Click Save Create an Ansible Job Template:\n Select Templates in the left menu. Click Add then Add Job Template Name: acm-test Inventory: Demo Inventory Project: ACM Test Playbook: message.yml Check Prompt on launch for Variables Click Save Click Launch Verify that the Job ran by going to Jobs and looking for an acm-test job showing a successful Playbook run.\nCreate AAP credentials in ACM Set up the credential which is going to allow ACM to interact with your AAP instance in your ACM Portal:\n Click on Credentials on the left menu and select the Add Credential button. 
Credential type: Red Hat Ansible Automation Platform Credential name: appaccess Namespace: open-cluster-management Click Next Ansible Tower Host: Ansible Tower token: Click Next and Add Use the ACM - Ansible integration And now let\u0026rsquo;s configure the ACM integration with Ansible Automation Platform to kick off a job in Automation Controller. In this case the Ansible job will just run our simple playbook that will only output a message.\nIn the ACM Portal:\n Go to the Applications menu on the left and click Create application → Subscription Enter the following information: Name: book-import2 Namespace: book-import2 Under repository types, select GIT repository URL: https://github.com/devsecops-workshop/book-import.git Branch: prehook Path: book-import Expand the Configure automation for prehook and posthook dropdown menu Ansible Automation Platform credential: appaccess Select Deploy application resources only on clusters matching specified labels Label: environment Value: dev Click Create Give this a few minutes. The application will complete and in the application topology view you will see the Ansible prehook. In Automation Controller go to Jobs and verify the Automation Job run.\n"
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/20-appendix/",
"title": "Appendix",
"tags": [],
"description": "",
"content": "Create a serviceaccount to scan the internal OpenShift registry The integrations to the internal registry were created automatically. But to enable scanning of images in the internal registry, you\u0026rsquo;ll have to configure valid credentials, so this is what you\u0026rsquo;ll do:\n add a serviceaccount assign it the needed privileges configure the Integrations in ACS with the new credentials But the first step is to disable the auto-generate mechanism, otherwise your updated credentials would be set back automatically:\n In the OpenShift Web Console, switch to the project stackrox, go to Installed Operators-\u0026gt;Advanced Cluster Security for Kubernetes Open your Central instance stackrox-central-services Switch to the YAML view, under spec: add the following YAML snippet (one indent): customize: envVars: - name: ROX_DISABLE_AUTOGENERATED_REGISTRIES value: 'true' Click Save Create ServiceAccount to read images from Registry\n In the OpenShift Web Console make sure you are still in the stackrox Project User Management -\u0026gt; ServiceAccounts -\u0026gt; Create ServiceAccount Replace the example name in the YAML with acs-registry-reader and click Create In the new ServiceAccount, under Secrets click one of the acs-registry-reader-token-... secrets Under Data copy the Token Using oc give the ServiceAccount the right to read images from all projects: oc adm policy add-cluster-role-to-user 'system:image-puller' system:serviceaccount:stackrox:acs-registry-reader -n stackrox Configure Registry Integrations in ACS\nAccess the ACS Portal and configure the already existing integrations of type Generic Docker Registry. Go to Platform Configuration -\u0026gt; Integrations -\u0026gt; Generic Docker Registry. 
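As an alternative to copying the token from the console, it can be read with oc — a sketch only: the generated secret name suffix varies, so list the ServiceAccount's secrets first and substitute the real name for the placeholder:

```shell
# Find the token secret attached to the ServiceAccount (name suffix varies)
oc -n stackrox get sa acs-registry-reader -o jsonpath='{.secrets[*].name}'
# Decode the token from the matching acs-registry-reader-token-... secret
# ("xxxxx" below is a placeholder for the generated suffix)
oc -n stackrox get secret acs-registry-reader-token-xxxxx \
  -o jsonpath='{.data.token}' | base64 -d
```

The decoded value is what goes into the Password field of the registry integrations.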
You should see a number of autogenerated (from existing pull-secrets) entries.\nYou have to change four entries pointing to the internal registry, you can easily recognize them by the placeholder Username serviceaccount.\nFor each of the four local registry integrations click Edit integration using the three dots at the right:\n Put in acs-registry-reader as Username Paste the token you copied from the secret into the Password field Select Disable TLS certificate validation Press the Test button to validate the connection and press Save when the test is successful. ACS is now able to scan images in the internal registry!\n"
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/",
"title": "",
"tags": [],
"description": "",
"content": "DevSecOps Workshop What is it about This workshop will introduce you to the application development cycle leveraging OpenShift\u0026rsquo;s tooling \u0026amp; features with a special focus on securing your environment using Advanced Cluster Security for Kubernetes (ACS). You will get a brief introduction to several OpenShift features like OpenShift Pipelines, OpenShift GitOps and OpenShift Dev Spaces. And all in a fun way.\nArchitecture overview Click image to enlarge Collaboration This workshop was created by Daniel Brintzinger, Goetz Rieger and Sebastian Dehn. Feel free to create a pull request or an issue on GitHub\n"
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/categories/",
"title": "Categories",
"tags": [],
"description": "",
"content": ""
},
{
"uri": "https://devsecops-workshop.github.io/devsecops-workshop.github.io-dev/tags/",
"title": "Tags",
"tags": [],
"description": "",
"content": ""
}]