In modern IT, both the amount and the type of data organizations receive differ from the past. Historically, data was structured, transaction-oriented, and constantly updated, and applications interacted with it through stateful, atomic database transactions. The data growth many companies see today, however, is of a different kind: unstructured data that sprawls, is kept forever, is seldom updated, and is meant to be accessed by multiple application services across geographical boundaries.

Increased Appetite for Object Storage 

While growth is happening across all kinds of data, NetApp IT has witnessed an increased appetite for object storage. Our consumption of StorageGRID storage has grown 400% over the past 18 months, and it is on track to become one of the primary storage solutions within our organization. StorageGRID is not a replacement for traditional storage protocols such as NFS/iSCSI/FC, but it is a great complement to them: it can be replicated across different data centers and offers simple web services interfaces.

Why I Like StorageGRID

I am a big fan of object storage. I recently worked on a project to streamline the way customers upload core dump files to NetApp technical support for troubleshooting. Since the core files contain images of what was in memory at the time of failure, the size of the files has grown exponentially over time. It is not uncommon to have a 200GB+ core file.


The growing size of core files, combined with the growing number of NetApp systems in the field, results in a large pool of unstructured data. This made the new core file upload process an ideal candidate for StorageGRID. The new process:

  • Consolidates multiple file upload methods into a single StorageGRID platform
  • Uses the StorageGRID multipart upload API for performance and for restartability of large files. If transmission of any part fails, we retransmit only that part, without affecting other parts and without involving the customer
  • Uploads large objects through a URL (http/https in a browser, with no plug-ins or other programs required)
  • Handles large files of static (unchanging) data with minimal data management via StorageGRID ILM rules (e.g., data retention policies)
  • Uses metadata to automatically clean up any orphaned parts inadvertently lost during transmission
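The multipart step above can be sketched in a few lines. This is a minimal, stdlib-only illustration of the part-splitting and per-part retry logic, not our production code: the 64 MiB part size is a hypothetical choice, and `send_part` stands in for the actual call to StorageGRID's S3-compatible UploadPart operation.

```python
import math

PART_SIZE = 64 * 1024 * 1024  # hypothetical 64 MiB part size


def part_ranges(total_size: int, part_size: int = PART_SIZE):
    """Split an object of total_size bytes into (part_number, offset, length) tuples."""
    parts = []
    for i in range(math.ceil(total_size / part_size)):
        offset = i * part_size
        parts.append((i + 1, offset, min(part_size, total_size - offset)))
    return parts


def upload_with_retry(parts, send_part, max_retries: int = 3):
    """Send each part; if one fails, retransmit only that part.

    Other parts are unaffected, which is what makes large transfers restartable.
    Returns the part-number -> ETag map used to complete the multipart upload.
    """
    etags = {}
    for number, offset, length in parts:
        for attempt in range(max_retries):
            try:
                etags[number] = send_part(number, offset, length)
                break
            except IOError:
                if attempt == max_retries - 1:
                    raise
    return etags
```

A 200GB+ core file splits into a few thousand such parts, so a dropped connection costs at most one part's worth of retransmission.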

Other Use Cases for StorageGRID

Another important reason for the rise of object storage within NetApp is the rise of cloud-aware applications. Cloud-aware applications tend to require large repositories of unstructured data accessed via stateless protocols (e.g., HTTPS). As our application teams develop cloud-aware applications, they code them to use object storage natively by default. StorageGRID provides the platform with which cloud-aware apps can interact most naturally, making it an important player in NetApp IT's journey to the cloud.
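Part of what "stateless" buys cloud-aware apps is that each object access is a self-contained HTTPS request: no session or connection state is carried between calls, so any app instance can issue any request. A minimal stdlib sketch, with a hypothetical endpoint, bucket, and key (real requests to an S3-compatible grid would also carry authentication headers):

```python
from urllib.request import Request

# Hypothetical StorageGRID S3 endpoint used for illustration only.
ENDPOINT = "https://storagegrid.example.com"


def object_request(bucket: str, key: str, method: str = "GET") -> Request:
    """Build a stateless HTTPS request for a single object.

    Everything needed to serve the request is in the URL and method;
    no prior handshake or session is assumed.
    """
    return Request(f"{ENDPOINT}/{bucket}/{key}", method=method)


req = object_request("core-files", "case-1234/core.7z")
```

Because the request carries all of its own context, it can be issued by any replica of the application against any site in a replicated grid.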


Data archiving is another driver for object storage. Today we use StorageGRID as the repository for artifacts such as Docker image repositories, application backup data, and archived data analysis results. StorageGRID's information lifecycle management (ILM) policies let administrators set different retention policies for different types of data; these policies also dictate how the data is replicated across a global StorageGRID deployment.
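Per-data-type retention can also be expressed through the standard S3 bucket-lifecycle interface that StorageGRID's S3 implementation supports. The sketch below builds rules in the shape accepted by the S3 PutBucketLifecycleConfiguration operation; the prefixes and retention periods are made up for illustration and are not our actual policies.

```python
# Hypothetical lifecycle configuration: different retention for different
# data types, keyed by object-key prefix.
def expiration_rule(rule_id: str, prefix: str, days: int) -> dict:
    """One S3 lifecycle rule that expires objects under `prefix` after `days` days."""
    return {
        "ID": rule_id,
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Expiration": {"Days": days},
    }


lifecycle = {
    "Rules": [
        expiration_rule("backups", "app-backups/", 90),         # example: 90 days
        expiration_rule("analysis", "analysis-results/", 365),  # example: 1 year
    ]
}
```

Grid-wide placement and replication, by contrast, is governed by StorageGRID ILM rules rather than by bucket lifecycle configuration.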


It seems like every day we discover new use cases for object storage inside NetApp IT. As we move into the future, the focus on developing cloud applications and services will continue to demand stateless access to large unstructured data repositories. For these reasons, StorageGRID is an attractive solution.


Want to know more about StorageGRID? Discover why IDC named StorageGRID a leader in the 2018 IDC MarketScape: Worldwide Object-Based Storage (OBS) Vendor Assessment.


Ken Lee

As one of the senior storage engineers on the Customer-1 team inside IT, Ken plans, engineers, builds and runs NetApp products and services in support of the corporation’s enterprise applications. Ken has over 20 years of experience across a wide range of disciplines including DBA, SAP Basis Admin, AIX System Admin, and Enterprise Data Protection and Disaster Recovery.