
Monday 14 June 2021

Basic Pipeline using Snyk Container, OCI Images, Azure DevOps all part of Cloud Native Application Security

Snyk Container finds vulnerabilities in containers and Kubernetes workloads throughout the SDLC by scanning any compliant OCI image, including those created by Cloud Native Buildpacks or any other build tool that produces OCI images.

So what could an Azure DevOps Pipeline look like that incorporates the following using Snyk?

Running a Snyk Scan against the project repository

Here we run a "snyk test" from the root folder of the repository itself, and the results of that scan are then available from the pipeline run output.
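For reference, the same check can be run locally with the Snyk CLI from the repository root. This is just a quick sketch and assumes the Snyk CLI is already installed and authenticated against your Snyk account:

$ snyk auth   # authenticate the CLI once (or set the SNYK_TOKEN environment variable instead)
$ snyk test   # test the open source dependencies declared in the project (pom.xml in this repo)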



Building your Artifact

Here we use a Maven task which packages the application artifact as a JAR file, ready to run.
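Outside of Azure DevOps, the equivalent local build is a plain Maven package. A minimal sketch, assuming Maven and a JDK 11 are installed:

$ mvn package -DskipTests
# The runnable JAR is produced under ./target, e.g. target/springbootemployee-0.0.1-SNAPSHOT.jar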




Creating an OCI compliant container image from the Artifact itself

There are various ways to create an OCI compliant image, but by far the simplest is using Cloud Native Buildpacks. For this we use the pack CLI, which in turn uses the Java Buildpack to build from our JAR file directly, avoiding a compilation step from the source code given we already did that in the step above.
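Run locally, the pack build looks roughly like the sketch below; it assumes the pack CLI and Docker are installed and that you are already logged in to the target registry:

$ pack build pasapples/springbootemployee:cnb-paketo-base \
    --builder paketobuildpacks/builder:base \
    --path ./target/springbootemployee-0.0.1-SNAPSHOT.jar \
    --publish
# --path points at the pre-built JAR so the Java Buildpack skips compiling the source
# --publish pushes the finished OCI image straight to the registry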



Running a Snyk Scan against the container image directly on the Container Registry

With our container image now in our Container Registry, we can use "snyk container" to check for issues directly from the registry, and also check for application security issues in the open source dependencies as well.
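Locally, the equivalent check with the Snyk CLI would look something like this sketch (assuming the CLI is authenticated and can pull the image):

$ snyk container test pasapples/springbootemployee:cnb-paketo-base --app-vulns
# --app-vulns also scans the application's open source dependencies inside the image
$ snyk container monitor pasapples/springbootemployee:cnb-paketo-base
# optional: keep monitoring the image for newly disclosed vulnerabilities from the Snyk UI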





The finished Pipeline ...







azure-pipeline.yml Pipeline used in Azure DevOps

# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- master

pool:
  vmImage: ubuntu-latest

steps:

# Build the application JAR first so the container build step can consume it
- task: Maven@3
  inputs:
    mavenPomFile: 'pom.xml'
    mavenOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    options: "-DskipTests -Dsnyk.skip"
    goals: 'package'
  displayName: "Build artifact JAR"

# Scan the open source dependencies declared in the repository
- task: SnykSecurityScan@0
  inputs:
    serviceConnectionEndpoint: 'snyk-token'
    testType: 'app'
    monitorOnBuild: false
    failOnIssues: false
  displayName: "snyk test from source"

# Log in to the container registry so pack can publish the image
- task: Docker@2
  inputs:
    containerRegistry: 'docker-pasapples-connection'
    command: 'login'
  displayName: "Login to DockerHub"

# Download the pack CLI and build/publish an OCI image from the JAR
- script: |
    curl -sSL "https://github.com/buildpacks/pack/releases/download/v0.19.0/pack-v0.19.0-linux.tgz" | tar -C ./ --no-same-owner -xzv pack
    ./pack build pasapples/springbootemployee:cnb-paketo-base --builder paketobuildpacks/builder:base --publish --path ./target/springbootemployee-0.0.1-SNAPSHOT.jar
  displayName: 'Build Container with Pack'

# Scan the published container image, including its application dependencies
- task: SnykSecurityScan@0
  inputs:
    serviceConnectionEndpoint: 'snyk-token'
    testType: 'container'
    dockerImageName: 'pasapples/springbootemployee:cnb-paketo-base'
    severityThreshold: 'low'
    monitorOnBuild: false
    failOnIssues: false
    additionalArguments: "--app-vulns"
  displayName: "snyk container scan from image"


More Information

For container and Kubernetes security designed to help developers find and fix vulnerabilities in cloud native applications, click the links below to learn more and get started today.

Snyk Container

Snyk Platform

Thursday 3 June 2021

Installing Snyk Controller into a k3d kubernetes cluster to enable runtime container scanning with the Snyk Platform

Snyk integrates with Kubernetes, enabling you to import and test your running workloads and identify vulnerabilities in their associated images and configurations that might make those workloads less secure. Once imported, Snyk continues to monitor those workloads, identifying additional security issues as new images are deployed and the workload configuration changes.

In the example below we show how easy it is to integrate the Snyk Platform with any Kubernetes distribution, in this case k3d running on my laptop.

Steps 

1. Install k3d using the instructions from the link below.

https://k3d.io/
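As one option (my own shortcut, not part of the official steps), k3d can also be installed via Homebrew on macOS:

$ brew install k3d   # install the k3d CLI
$ k3d version        # confirm the install worked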

2. Create a cluster as shown below.

pasapicella@192-168-1-113:~/snyk/demos/kubernetes/k3d$ k3d cluster create snyk-k3d --servers 1 --agents 2
INFO[0000] Prep: Network
INFO[0003] Created network 'k3d-snyk-k3d'
INFO[0003] Created volume 'k3d-snyk-k3d-images'
INFO[0004] Creating node 'k3d-snyk-k3d-server-0'
INFO[0005] Creating node 'k3d-snyk-k3d-agent-0'
INFO[0005] Creating node 'k3d-snyk-k3d-agent-1'
INFO[0005] Creating LoadBalancer 'k3d-snyk-k3d-serverlb'
INFO[0005] Starting cluster 'snyk-k3d'
INFO[0005] Starting servers...
INFO[0005] Starting Node 'k3d-snyk-k3d-server-0'
INFO[0012] Starting agents...
INFO[0012] Starting Node 'k3d-snyk-k3d-agent-0'
INFO[0023] Starting Node 'k3d-snyk-k3d-agent-1'
INFO[0031] Starting helpers...
INFO[0031] Starting Node 'k3d-snyk-k3d-serverlb'
INFO[0033] (Optional) Trying to get IP of the docker host and inject it into the cluster as 'host.k3d.internal' for easy access
INFO[0036] Successfully added host record to /etc/hosts in 4/4 nodes and to the CoreDNS ConfigMap
INFO[0036] Cluster 'snyk-k3d' created successfully!
INFO[0036] --kubeconfig-update-default=false --> sets --kubeconfig-switch-context=false
INFO[0036] You can now use it like this:
kubectl config use-context k3d-snyk-k3d
kubectl cluster-info

3. View the Kubernetes nodes.

$ kubectl get nodes
NAME                    STATUS   ROLES                  AGE   VERSION
k3d-snyk-k3d-server-0   Ready    control-plane,master   21h   v1.20.5+k3s1
k3d-snyk-k3d-agent-0    Ready    <none>                 21h   v1.20.5+k3s1
k3d-snyk-k3d-agent-1    Ready    <none>                 21h   v1.20.5+k3s1

4. Run the following command to add the Snyk Charts repository to Helm.

$ helm repo add snyk-charts https://snyk.github.io/kubernetes-monitor/
"snyk-charts" already exists with the same configuration, skipping

5. Once the repository is added, create a unique namespace for the Snyk controller:

$ kubectl create namespace snyk-monitor

6. Now, log in to your Snyk account and navigate to Integrations. Search for and click Kubernetes. Click Connect on the page that loads and copy the Integration ID. The Snyk Integration ID is a UUID, similar to this format: abcd1234-abcd-1234-abcd-1234abcd1234. Save it for use from your Kubernetes environment in the next step.

Instructions link : https://support.snyk.io/hc/en-us/articles/360006368657-Viewing-your-Kubernetes-integration-settings

7. The Snyk monitor runs using your Snyk Integration ID and a dockercfg file. Since we are not using any private registries in this demo, create a Kubernetes secret called snyk-monitor containing the Snyk Integration ID from the previous step by running the following command:

$ kubectl create secret generic snyk-monitor -n snyk-monitor \
         --from-literal=dockercfg.json={} \
         --from-literal=integrationId=INTEGRATION_TOKEN_FROM_STEP_6
secret/snyk-monitor created

8. Install the Snyk Helm chart as follows:

$ helm upgrade --install snyk-monitor snyk-charts/snyk-monitor \
                          --namespace snyk-monitor \
                          --set clusterName="k3d Dev cluster"
Release "snyk-monitor" does not exist. Installing it now.
NAME: snyk-monitor
LAST DEPLOYED: Wed Jun  2 17:47:13 2021
NAMESPACE: snyk-monitor
STATUS: deployed
REVISION: 1
TEST SUITE: None

9. Verify the Snyk Controller is running:

$ kubectl get pods -n snyk-monitor
NAME                           READY   STATUS    RESTARTS   AGE
snyk-monitor-64c94685b-fwpvx   1/1     Running   3          21h

10. At this point we can create some workloads; let's just add a single Pod to the cluster running a basic Spring Boot application.

$ kubectl run springboot-app --image=pasapples/spring-boot-jib --port=8080
pod/springboot-app created

11. Head back to the Snyk Dashboard and click on your Kubernetes integration tile. You should see a list of applicable workloads to monitor; in our case we just have the single app called "springboot-app".



12. Add the selected workload and you're done!



More Information


Install the Snyk controller with Helm