In my previous blog I wrote about continuous integration (CI), continuous delivery (CD), continuous deployment (CD), and the challenges organizations may face while adopting a DevOps practice. The shift toward agile development has forced business owners to be more exploratory and innovative, emphasizing speed in application development workflows.

These new applications are typically built as microservices and run as cloud-native applications.


Testing code in an iterative manner improves code quality by identifying bugs in the early stages. Multiple instances of the code can be developed, built, and deployed in containers. CloudBees Enterprise Jenkins is one of the most popular CI tools among developers. Customers choose to run different services, including Jenkins, in containers, which provides homogeneity across development and deployment environments along with horizontal scalability: applications developed on one platform should run on another. Containers are ephemeral in nature but still require persistent storage for resiliency, data recovery, and scalability. While CloudBees Jenkins is also widely used for CD, in this blog we focus on the CI pipeline with CloudBees Enterprise Jenkins and ONTAP 9.

During the code development and deployment process, data is generated, stored, processed, and managed on NetApp storage solutions. NetApp offerings provide persistent storage for Docker containers with the NetApp Docker Volume Plug-In (nDVP). NetApp also worked jointly with CloudBees to develop a plug-in that reduces developer code checkout time from source code repositories such as Git and Perforce, shortens continuous build and test cycles, and speeds up developer workspace creation, while at the same time improving storage space efficiency and reducing storage costs. Native NetApp® technologies such as thin-provisioned FlexVol® volumes, FlexClone® volumes, and Snapshot® copies seamlessly integrate with CloudBees Enterprise Jenkins builder templates using RESTful APIs. A CI pipeline with CloudBees Enterprise Jenkins and NetApp improves the overall customer and user experience through automation, iterative testing, and data resiliency.
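
To make the container side of this concrete, the snippet below is a minimal sketch, assuming the Docker SDK for Python and an nDVP driver registered under the name "netapp" (the volume, image, and size values are illustrative, not from the original setup), of how a build container can be given persistent, NetApp-backed storage:

```python
import docker

client = docker.from_env()

# Create a Docker volume backed by an ONTAP FlexVol through the nDVP driver.
ws_volume = client.volumes.create(
    name="ci-code-branch-1",
    driver="netapp",                    # assumed nDVP driver name
    driver_opts={"size": "10g"},        # assumed driver option
)

# Run a throwaway build container with the persistent volume mounted at /ws.
client.containers.run(
    "maven:3-jdk-8",
    command="mvn -f /ws/pom.xml package",
    volumes={ws_volume.name: {"bind": "/ws", "mode": "rw"}},
    remove=True,
)
```

Because the volume lives on ONTAP rather than in the container's writable layer, the data survives the container being removed.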


The primary goals of the NetApp and CloudBees joint effort are to abstract NetApp technologies and empower CI admins and developers to integrate them seamlessly into the CI workflow using RESTful APIs. The CI team and developers no longer have to depend on storage admins to configure and expose the functionality that accelerates the development process. This integration also brings additional benefits to business and application owners in development environments. For more information, refer to TR-4547. Following are some of the benefits of this approach:

  • Improve developer productivity. This allows instantaneous user workspaces and dev/test environments for databases. These database environments may be used for patch testing, database changes, unit testing by developers, and QA during staging without risking the source codebase or the production database.
  • Provide faster time to market. Reduced checkout/build times and iterative testing (fail fast, fix fast) reduce errors and thus technical debt. This also improves code quality.
  • Improve storage space efficiency. User workspaces and databases created for dev/test do not take any additional storage space from their parent production volumes. This reduces the storage costs in cloud/platform 3 environments and yet provides total ownership and control over the data.
  • Enable developers to use native NetApp technologies in the development workflow by using APIs in a consumable model.

Figure 1) CI pipeline with Jenkins and Docker using ONTAP APIs.


The CI and development environments should adhere to some best practices for better code quality and manageability. As illustrated in Figure 1, having a local SCM repository on NetApp storage is recommended. The source code can be cloned from a private or public repository, or new code can be created for development.
Separate development branches or CI code branches can be created on different NetApp volumes. If the code branch is small, the entire source code is pulled into a single development branch or CI code branch volume. This volume is used as the location to sync all the dependencies, such as tools, RPMs, libraries, and compilers, needed to perform a full build.
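
As a rough illustration of provisioning a branch volume, the sketch below uses Python's requests library against an ONTAP REST-style endpoint; the URL, credentials, SVM, and aggregate names are assumptions for illustration rather than details from this setup:

```python
import requests

ONTAP = "https://cluster.example.com"    # assumed management endpoint
AUTH = ("admin", "password")             # assumed credentials

def create_branch_volume(name, size_bytes=50 * 1024**3):
    """Create a thin-provisioned FlexVol to hold one CI code branch."""
    body = {
        "name": name,
        "svm": {"name": "ci_svm"},           # assumed SVM
        "aggregates": [{"name": "aggr1"}],   # assumed aggregate
        "size": size_bytes,
        "guarantee": {"type": "none"},       # thin provisioning
    }
    r = requests.post(f"{ONTAP}/api/storage/volumes",
                      json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

create_branch_volume("ci_branch_release_1_0")
```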

After a successful full build in the CI code branch volume, a NetApp Snapshot copy is taken on the volume. The CI code branch volume now consists of source code, all the dependencies, .jar files (if this is Java code), and all the prebuild artifacts. Now the CI environment is complete. This process reduces a considerable amount of traffic to the SCM volume. Only code changes are submitted or checked into the SCM volume. The builds (developer, CI, or nightly) are performed in the CI code branch volumes.
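
A minimal sketch of that Snapshot step, again with assumed endpoint paths, credentials, and volume identifier, might look like this:

```python
import requests

ONTAP = "https://cluster.example.com"    # assumed management endpoint
AUTH = ("admin", "password")             # assumed credentials

def snapshot_after_build(volume_uuid, build_id):
    """Label the CI code branch volume with a Snapshot copy for this build."""
    r = requests.post(
        f"{ONTAP}/api/storage/volumes/{volume_uuid}/snapshots",
        json={"name": f"ci_build_{build_id}"},
        auth=AUTH, verify=False,
    )
    r.raise_for_status()
    return r.json()
```

Each successful full build therefore leaves behind a named, point-in-time copy of the complete CI environment from which clones can be created.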


The developer logs in, identifies the latest NetApp Snapshot copy, and creates an instant clone of the CI code branch volume. This clone is storage space efficient and is prepackaged with everything the developer needs to write and change code. The clone is used as a workspace for the developer. After the code changes are submitted, reviewed, and checked by Gerrit, unit tests, or some other pre-check-in analysis tool, the changes are pushed and committed to the SCM volume.
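
The clone step could look roughly like the following, where the clone fields, SVM, and naming convention are assumptions used only to illustrate a FlexClone-style request:

```python
import requests

ONTAP = "https://cluster.example.com"    # assumed management endpoint
AUTH = ("admin", "password")             # assumed credentials

def clone_workspace(parent_volume, snapshot, user):
    """Create a FlexClone volume as an instant developer workspace."""
    body = {
        "name": f"ws_{user}",
        "svm": {"name": "ci_svm"},                     # assumed SVM
        "clone": {
            "is_flexclone": True,
            "parent_volume": {"name": parent_volume},
            "parent_snapshot": {"name": snapshot},
        },
    }
    r = requests.post(f"{ONTAP}/api/storage/volumes",
                      json=body, auth=AUTH, verify=False)
    r.raise_for_status()
    return r.json()

clone_workspace("ci_branch_release_1_0", "ci_build_42", "developer1")
```

Because the clone shares blocks with its parent Snapshot copy, it is created in seconds and consumes space only for the developer's changes.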


The changes submitted in the SCM are propagated into the respective CI code branch volume, and an incremental build is performed followed by a NetApp Snapshot copy. Every Snapshot copy taken after an update to the CI environment provides the developer with the most recent cloned copy of the code changes. This is an iterative and important phase of the CI pipeline.


A predefined set of scheduled CI tests is performed on successful developer builds to further harden the code changes by identifying any errors in the code. Depending on the requirements and the development scenario, a nightly build may be scheduled at the end of the day. Upon successful completion of the CI or nightly build, the contents of the CI code branch volume are zipped and copied into the build artifact volume. The copy of the build can now be promoted to QA for additional testing and then deployed into production from the build artifact volume.
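
The promotion step itself is simple; a sketch (with assumed mount points for the two NetApp volumes inside the build container) could be:

```python
import shutil
import time

CI_BRANCH_MOUNT = "/mnt/ci_branch_release_1_0"   # assumed mount point
ARTIFACT_MOUNT = "/mnt/build_artifacts"          # assumed mount point

def promote_build(build_id):
    """Zip the CI code branch volume contents into the build artifact volume."""
    archive = f"{ARTIFACT_MOUNT}/build_{build_id}_{int(time.time())}"
    return shutil.make_archive(archive, "zip", CI_BRANCH_MOUNT)

promote_build("nightly_42")
```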


In the CI pipeline setup, the Jenkins master runs in a container. All the components illustrated in Figure 1, such as the local SCM repository (Git), the development branches or CI code branches, the user workspaces, and the build artifact volume, are mounted on Docker containers. These components run as Jenkins slaves tied to the Jenkins master. The Docker containers use nDVP to mount the NetApp volumes to provide persistent storage.
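
A hedged sketch of attaching such a slave, using the Docker SDK for Python with assumed image, network, and volume names (the JNLP secret is a placeholder), is shown below:

```python
import docker

client = docker.from_env()

# Start a Jenkins slave container with the Figure 1 volumes mounted via nDVP.
client.containers.run(
    "jenkinsci/jnlp-slave",                         # assumed slave image
    command=["<agent-secret>", "ci-build-agent"],   # placeholder JNLP args
    environment={"JENKINS_URL": "http://jenkins-master:8080"},
    volumes={
        "scm_repo":      {"bind": "/var/scm", "mode": "ro"},
        "ci_branch_1":   {"bind": "/var/ci_branch", "mode": "rw"},
        "ws_developer1": {"bind": "/var/workspace", "mode": "rw"},
    },
    network="ci_net",                               # assumed Docker network
    detach=True,
)
```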


This entire workflow, which uses NetApp volumes, Snapshot copies, and FlexClone volumes, is stitched into the CloudBees Jenkins CI pipeline using Docker containers and ONTAP® RESTful APIs. The Jenkins master runs in a Docker container on a physical host or a VM. The Jenkins slaves also run in Docker containers in sibling mode.


Figure 2) Docker on Docker: Jenkins master-slave in a sibling setup.


Figure 2 shows the architecture of the NetApp and Jenkins plug-in using Docker containers. The Docker engine runs on the VM or physical host and passes the Docker socket from the host into the Jenkins master container; nDVP runs on the host VM as well as in the Jenkins master container. The main purpose of nDVP is to attach NetApp volumes to containers as they are spun up, so that they can leverage NetApp features such as storage efficiency and resiliency. For more information, refer to TR-4547. To download the Jenkins plug-in, visit https://github.com/netapp.
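
As a sketch of the sibling pattern in Figure 2 (image names and paths are assumptions), the Jenkins master container can be started with the host's Docker socket passed through, so that the slave containers it launches run beside it on the same Docker engine where nDVP can attach NetApp volumes:

```python
import docker

client = docker.from_env()

client.containers.run(
    "cloudbees/jenkins-enterprise",                 # assumed master image
    ports={"8080/tcp": 8080, "50000/tcp": 50000},
    volumes={
        # Pass the host Docker socket into the master container (sibling mode).
        "/var/run/docker.sock": {"bind": "/var/run/docker.sock", "mode": "rw"},
        # Persist Jenkins home on an nDVP-provisioned NetApp volume.
        "jenkins_home": {"bind": "/var/jenkins_home", "mode": "rw"},
    },
    detach=True,
)
```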


For an opportunity to chat with NetApp subject matter experts, stop by our booth K8 at Jenkins World, Santa Clara Convention Center, September 14-15. To see a demo of the Jenkins plug-in, join me at a Birds of a Feather session on Wednesday, September 14, 5:00 PM – 6:00 PM, Room Great America J.


Bikash Roy Choudhury

Bikash Roy Choudhury is a Principal Architect at NetApp. He is responsible for designing and architecting solutions for DevOps workflows relevant across industry verticals, including high tech, financial services, gaming, social media, and web-based development organizations, that address customer business requirements in these markets. He also works on validating solutions with Red Hat (RHOSP-IaaS), Apprenda (PaaS), Docker containers, CloudBees Jenkins, IBM Bluemix PaaS, and Perforce Helix using RESTful APIs and integrating them with NetApp ONTAP software in private, hybrid, and public clouds. In his current role, Bikash drives integrations with strategic DevOps partners, including Red Hat, CloudBees, Perforce, Apprenda, JFrog Artifactory, IBM, and Iron.io.

Bikash has over 16 years of data management platform experience. Before joining NetApp, he worked for eight years at key accounts as a professional services engineer. For three years he was a systems administrator working on various UNIX platforms. Bikash received an MSCIS from the University of Phoenix, San Jose, and a BSc in computer science engineering from a distinguished engineering college in India.