The future of software releases is clear: continuous delivery is here to stay. But does that mean that traditional build, release, and continuous integration systems and infrastructure must be abandoned altogether? Traditional applications carry with them databases and back ends that are built on technologies that don't lend themselves to instant deployment models, but there are tools and technologies that can bridge those gaps.

For example, to transform your existing software build system into a modern DevOps model, you can combine NetApp® FlexClone® technology with popular CI/CD tools such as Jenkins Pipeline. And for disaster recovery (DR) and build replication, NetApp SnapMirror® technology gives you superior data protection during DR, letting you replicate your build and test artifacts to different locations without having to re-create them.

The Challenges of a Traditional Continuous Integration (CI) System

The following diagram shows the process on a traditional system. It has many pitfalls along the way that complicate and significantly slow down the process.

To test a traditional system, we ran an experimental build. For our build size of 500GB, we had a build time of 10 or 11 hours. This build time involved synchronization of the source from the repository, source compilation, code coverage, build optimization, build size calculation, locking the build after completion, and updating build results in the database.

Long Poles in a Traditional CI Build System

In our experimental build on a legacy system, we encountered the following hurdles:

  • Sequential build stages. The existing build infrastructure didn’t allow us to experiment on the build flow.
  • Software Configuration Management (SCM) checkout from scratch. We had to perform SCM checkout from scratch for the latest change in every build.
  • Time-consuming UNIX commands for build size calculation and locking. We had to use UNIX commands to calculate the build size (du) and to lock the build (chmod) to read-only after completion. These tasks were time-consuming for our build size of 500GB.
  • Lengthy promote process for a nightly build. This step was another daunting task. The entire build had to be copied to another location for the promote configuration. The task of copying itself took about 7 hours.
  • Load-balancing scripts that use traditional UNIX commands. We had to use traditional UNIX commands to move and to delete build workspaces (mv, cp, rm) as required by the purge policies.
  • Developer time to rebuild. Developers had to spend an equal amount of time to build and to re-create the sandbox in a local environment to reproduce bugs reported by internal QA teams or customers.
  • Extensive UI pages. We had to maintain numerous UI pages for the build and release process.
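For illustration, the size calculation, locking, and promote steps above boil down to plain UNIX commands like the following sketch (paths are hypothetical stand-ins; at our 500GB build size, each of these steps took hours because every file in the tree must be walked or copied):

```shell
# Scratch "build workspace" standing in for a 500GB build tree (hypothetical).
BUILD_DIR=$(mktemp -d)
mkdir -p "$BUILD_DIR/objs"
echo "binary" > "$BUILD_DIR/objs/app.o"

# Build size calculation: du walks the entire tree.
du -sh "$BUILD_DIR"

# Promote: copy the whole build to another location (~7 hours at 500GB).
PROMOTE_DIR=$(mktemp -d)/promoted
cp -r "$BUILD_DIR" "$PROMOTE_DIR"

# Lock the build read-only after completion: chmod touches every inode.
chmod -R a-w "$BUILD_DIR"
```

The cost of all three commands scales linearly with the number of files and bytes in the workspace, which is exactly the bottleneck that storage-level cloning removes.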

Efficient Continuous Integration with NetApp FlexClone Volumes and Jenkins Pipeline

How FlexClone Technology Is Integrated into a Continuous Integration Pipeline

To compare with the traditional system, we then performed another build by using NetApp FlexClone cloning and Jenkins Pipeline. To start, we used NetApp FlexVol® volumes to replace the directories that had acted as build workspaces in the legacy system.

FlexVol volumes give you the flexibility to clone workspaces instantly (the clones are called FlexClone volumes) without relying on manual copying. This feature is of great use for promoting a build and for replicating a build workspace in your developer environment instantly. The following figure shows the relationship between the various NetApp components in a build system.
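As a sketch of the idea, cloning a build workspace is a single ONTAP command; the SVM, volume, and junction-path names below are illustrative, not taken from our environment:

```
# Create an instant, space-efficient clone of a build volume.
# Names here are hypothetical examples.
volume clone create -vserver nightly -flexclone build_dev_clone \
    -parent-volume rtpbuild_devN -junction-path /share/builds/dev_clone
```

The clone shares unchanged blocks with its parent, so it is created in seconds and consumes space only for subsequent changes.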

Orchestrating Stages by Using Jenkins Pipeline and Parallelism

Jenkins Pipeline (or simply Pipeline, with a capital "P") is a suite of plug-ins that supports implementing and integrating continuous delivery pipelines into Jenkins. For our comparison, we rearranged the steps in the CI stages to run in parallel and take advantage of the Jenkins parallelism feature. The following diagram shows our CI process with FlexClone cloning and Jenkins Pipeline parallelism.
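A minimal Declarative Pipeline sketch of this kind of parallelism follows; the stage names and scripts are illustrative, not our exact build flow:

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            // Compile the source first; later stages depend on its output.
            steps { sh 'make -j all' }
        }
        stage('Post-build checks') {
            // Independent stages run concurrently instead of sequentially.
            parallel {
                stage('Code coverage')    { steps { sh './run_coverage.sh' } }
                stage('Size calculation') { steps { sh 'du -sh build/' } }
                stage('Update results')   { steps { sh './update_results.sh' } }
            }
        }
    }
}
```

Rearranging stages this way is what removes the "sequential build stages" long pole described earlier: only truly dependent steps remain serialized.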

The combination of NetApp FlexClone volumes and Jenkins parallelism in Pipeline reduced our complete end-to-end scratch CI pipeline from 10 or 11 hours to 5 or 6 hours. That's an impressive 50% reduction!

The Advantages of FlexClone Volumes and Jenkins Pipeline

In comparing the two builds, we found that FlexClone technology combined with Pipeline provided many benefits over the legacy system, including:

  • Shorter build time (combination of FlexClone volumes and Jenkins parallelism). Our build time was reduced from 10 or 11 hours to just 5 or 6 hours.
  • Easy-to-identify build failures (Jenkins Pipeline). The build flow is modularized and is fit into Pipeline for a better view of build flow and for easy debugging. In the preceding Jenkins Workflow figure, the highlighted red square on the top left shows clearly where a build failed.
  • 50% space savings (FlexClone volumes on NetApp AFF nodes). FlexClone volumes benefit from the highly effective deduplication and compression on AFF nodes. Each build volume's size is reduced by 50% compared with builds on legacy systems.

In our build, we deployed AFF A700s nodes to take advantage of the effective deduplication and compression. Our deployment included two NetApp AFF A700s systems:

– RTPCBR01/02, with forty-eight 3.8TB solid-state drives ([SSDs] 24 internal, 24 external in a shelf)

– RTPCBR05/06, with forty-eight 3.8TB SSDs (24 internal, 24 external in a shelf)

  • Easy-to-replicate nightly builds (FlexClone volumes). Developers can replicate nightly builds in 2 to 3 minutes, compared with rerunning the entire nightly build in a local workspace on a legacy system.
  • Faster promote process (FlexClone volumes). With the simplified promote process, our latest promote took 87 minutes in the new build system, compared with 240 minutes in the traditional system.
  • Efficient load balancing (FlexClone volumes). It's easy to move FlexClone volumes across aggregates instead of using the traditional approach of copying and deleting directories. This significantly helps with tiering the storage and with preserving the releases shipped to customers.
  • Simplified build and release management process. We had to use just a “one-stop” Jenkins page for release and management jobs. Numerous UI pages in the legacy system were replaced by a Jenkins page integrated with NetApp Storage features for special_build, preserve_build, promote_build, and other management jobs.
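For example, rebalancing a workspace onto a different aggregate for tiering is a single nondisruptive ONTAP command rather than a copy-and-delete cycle; the names below are hypothetical:

```
# Nondisruptively relocate a build volume to another aggregate.
# SVM, volume, and aggregate names are illustrative examples.
volume move start -vserver nightly -volume rtpbuild_devN_clone \
    -destination-aggregate cbr10sas2
```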

Disaster Recovery and Build Artifact Replication by Using SnapMirror

With the SnapMirror feature of NetApp ONTAP® software, you can replicate data from specified source volumes to specified destination (mirrored) volumes. The following figure shows the SnapMirror replication process. Recent ONTAP releases include significant improvements to this feature, making data replication faster, more consistent, and more reliable.

In this blog post, I talk about SnapMirror in the context of the build process. For more information about SnapMirror replication for other uses, check out our NetApp Tech OnTap® article.

Incremental SnapMirror Mirroring

Incremental SnapMirror mirroring is an easy way for you to replicate build artifacts at a second destination in near real time alongside your primary build. You don't have to re-create your build at the destination site to run the tests.
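You can check how closely the destination is tracking the primary build by inspecting the relationship's state and lag time; the destination path below matches the example used later in this post:

```
# Show replication health and how far the mirror lags the source.
snapmirror show -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748 \
    -fields state,lag-time,status
```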

 

To showcase the benefits that you get with SnapMirror, I present the following use case:

 

  • Build size: 500GB
  • Source location: Research Triangle Park, North Carolina (RTP)
  • Destination: Sunnyvale, California (SVL)

Key Configurations

Policy type: async-mirror

In asynchronous mode, Snapshot copies of the volume are created periodically on the source. Only blocks that have changed or that have been newly created since the last replication cycle are transferred to the target, making this method very efficient in terms of storage system overhead and network bandwidth.
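For reference, an async-mirror policy like the one used in this setup can be created with a command along these lines (the policy name matches the one referenced in Step 2 below; the SVM name is from the same example):

```
# Create the asynchronous mirroring policy used by the relationship.
snapmirror policy create -vserver svlnightly -policy svlnightly_policy \
    -type async-mirror
```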

 

Transfer priority: Normal

If dedicated bandwidth is available for SnapMirror mirroring, the priority can be set to high.

 

Schedule: 5 minutes

The schedule determines how often Snapshot copies must be created for SnapMirror mirroring.

 

The following diagram shows the SnapMirror mirroring process from RTP to SVL.

SnapMirror Steps

The following steps and their respective commands correspond to the steps that are shown in the preceding figure.

 

Note that all the commands must be executed at the destination site (SVL), except Step 7, which is executed at the source.

 

Step 1. Create a mirrored volume at the destination site

vol create -vserver svlnightly -volume rtpbuild_devN_180515_0746_1805150748 -aggregate cbr10sas2 -size 600GB -space-guarantee none -type DP

Step 2. Establish a SnapMirror relationship at the destination site

snapmirror create -vserver svlnightly -source-cluster rtpcbr -source-vserver nightly -source-volume rtpbuild_devN_180515_0746_1805150748 -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748 -schedule 5min -policy svlnightly_policy -type XDP

Step 3. Initialize scheduled SnapMirror mirroring

snapmirror initialize -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748

Then, after the build completes:

 

Step 4. Modify the schedule to NULL

snapmirror modify -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748 -schedule ""

Step 5. Abort the current SnapMirror mirroring

snapmirror abort -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748

Step 6. Final push after build completion: run a SnapMirror update

snapmirror update -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748

Step 7. Release the SnapMirror relationship at the source location

snapmirror release -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748

Step 8. Delete the relationship at the destination site

snapmirror delete -destination-path svlnightly:rtpbuild_devN_180515_0746_1805150748

Step 9. Mount the volume at the destination site

vol mount -vserver svlnightly -volume rtpbuild_devN_180515_0746_1805150748 -junction-path /share/bammbamm/svl/builds/DOT/devNightly/devN_180515_0746

Start Your Continuous Integration Transformation Today

One of the goals of DevOps is to establish an environment that enables developers like you to release reliable applications faster and more frequently. NetApp FlexClone technology promotes such productivity by making your build system easier to use and by speeding up the build process itself.

 

And with NetApp SnapMirror technology, you get an important DR and general-purpose replication tool. You can use SnapMirror to establish a centralized build farm and to replicate builds to another location across a WAN, without the bottleneck of long transfer times.

 

Start building better, smarter, and faster today. Learn more about how NetApp FlexClone and SnapMirror technologies can help transform your continuous integration environment to increase innovation and business agility.

Sudhir Sonnad

Sudhir is an experienced senior technical staff member with a demonstrated history of creating fully automated CI build and deployment infrastructure and processes for multiple projects. In his current role, he designed and developed a continuous deployment pipeline, integrating NetApp technologies for transformative DevOps. His current passion is using a wide variety of open-source technologies and tools and promoting the benefits of DevOps by identifying and quantifying the business benefits that come from the greater agility DevOps delivers. Previously, he worked in the automotive industry as a modularity tools engineer, where he published a patent on a wired remote switch for smartphones.