AI POD architecture – AI Platform as a Service

AI development and production: services from the public cloud, the private cloud, or in hybrid mode?

More and more companies are launching AI initiatives to exploit previously undreamt-of opportunities for innovation and optimization. The technology enables them to process huge data sets at high speed and to analyze images, audio, and text, thereby improving processes or enabling new services and products. While AI projects usually start as small test scenarios, turning a successful test into a scalable production model can be costly, because AI is a resource-hungry technology.

 

More and more companies are therefore turning to ‘AI Platform as a Service’ (AI PaaS) offerings from the cloud to keep the expenditure of IT resources and time to a minimum. Such services combine data collection, data preparation, and model training on a single platform. They support the deployment of mature AI models in a holistic approach, mostly based on container technology. The Gartner Hype Cycle for Artificial Intelligence 2019 lists AI PaaS among its emerging trends. The three most in-demand AI services and applications are:

 

  • Machine learning: ready-made algorithms, model training, pre-labeled data
  • Computer vision: object, face, motion, emotion, and text recognition
  • Speech processing: automatic speech recognition, text-to-speech conversion, etc.
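To make the first item concrete: "machine learning on pre-labeled data" ultimately means fitting a model to examples whose correct answers are already known. The sketch below implements this idea from scratch as a 1-nearest-neighbour classifier; AI PaaS offerings expose far richer, ready-made algorithms behind a similar interface, and the data points and labels here are purely illustrative.

```python
import math

# Pre-labeled training data: each example pairs a feature vector with a
# known label. (Values are illustrative, not from any real service.)
labeled_data = [
    ((1.0, 1.0), "cat"),
    ((1.2, 0.8), "cat"),
    ((8.0, 9.0), "dog"),
    ((9.0, 8.5), "dog"),
]

def predict(point):
    """Return the label of the closest pre-labeled example (1-NN)."""
    _, label = min(labeled_data, key=lambda item: math.dist(point, item[0]))
    return label

print(predict((1.1, 0.9)))  # lies in the "cat" cluster
```

A managed AI PaaS replaces this hand-rolled loop with hosted training jobs and model registries, but the fit-on-labeled-data principle is the same.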

Hybrid architectures are trending

 

In countries with stringent data protection requirements, the trend is toward integrating on-premises architectures. This is where the hybrid architecture of NetApp POD technology comes into play: initial test scenarios can be trained and refined in the cloud, and companies can then switch seamlessly between NetApp file and volume services at public cloud providers and local, software-defined NetApp data services. In this way, AI models trained on customer, research, and production data always retain the highest possible flexibility, performance reserves, and data sovereignty.

The technology platform

 

For example, the on-premises platform FlexPod AI has built-in support for connecting multi-cloud scenarios. Delivering the AI/ML/DL software platforms via container architectures allows maximum flexibility. NetApp, Cisco, and Nvidia thus combine the advantages of an on-premises AI architecture with those of AI PaaS:

 

  • Data sovereignty and security, even in sensitive data environments
  • Exponentially scalable analytics performance through Nvidia GPUs and Nvidia's container-based software integration
  • Low operating costs, even for large data volumes
  • Dynamic flexibility, depending on the AI's starting point or production scenario
  • Extremely short setup times for data science projects thanks to a pre-validated architecture for well-known AI platforms and tools such as PyTorch, TensorFlow, Caffe2, MXNet, Theano, etc.
  • Quick and easy integration of various AI frameworks and services of the Nvidia GPU Cloud into the local environment via the NetApp Cloud Kubernetes services

 

In addition to FlexPod AI, four public cloud providers offer AI toolchains with similar services that can be considered integrated platforms: Amazon, IBM, Microsoft, and Google. Combining them with FlexPod AI in a hybrid approach promises maximum benefit.

Factors to pay attention to during implementation

 

  • Data quality: The success of an AI project always depends on the quality of the data and its sources. The NetApp data fabric enables an unimpeded flow of data and helps reduce the preparation work involved.
  • Compatibility: The container-based set of Nvidia-validated tools, services, frameworks, and programming languages makes the work of data scientists and architects easier.
  • APIs: AI service providers should always offer standardized APIs so that AI capabilities can be integrated into applications quickly.
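The API point typically comes down to plain HTTPS plus JSON: the application builds a standard request and the provider's endpoint does the inference. The sketch below shows such a request being assembled with Python's standard library; the endpoint URL, header usage, and payload schema are illustrative assumptions, as each real provider documents its own format.

```python
import json
import urllib.request

def build_vision_request(api_url, api_key, image_b64):
    """Build an HTTP request for a hypothetical image-analysis API.

    The URL, bearer-token auth, and JSON schema are assumptions for
    illustration only; consult the provider's API reference for the
    actual contract.
    """
    payload = json.dumps(
        {"image": image_b64, "features": ["objects", "text"]}
    ).encode("utf-8")
    return urllib.request.Request(
        api_url,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

# Assemble (but do not send) a request against a placeholder endpoint.
req = build_vision_request("https://api.example.com/v1/analyze", "demo-key", "aGVsbG8=")
print(req.get_method(), req.get_full_url())
```

Because the interface is just HTTP, the same application code can target a public cloud AI service or an equivalent service running on-premises, which is what makes standardized APIs so valuable in a hybrid setup.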

Conclusion

 

Many companies need to drive digitization within short project timelines but lack the financial flexibility to do so. For them, AI PaaS is an alternative to conventional operating models.

 

NetApp/Nvidia POD architectures such as FlexPod enable a hybrid infrastructure that combines the advantages of the public cloud with the security of an on-premises architecture. Thanks to the integrated connection to public cloud platforms, companies can quickly and easily access a wide range of modern AI services offered by public cloud providers and their repositories, e.g. Nvidia NGC (Nvidia GPU Cloud).

Hermann Wedlich

Hermann focuses on developing IT business solutions and models for the digital transformation, built on NetApp and eco-partner offerings, within the EMEA platform and solutions group. His mission is to generate and enable momentum for positive change among NetApp customers, partners, and their own sales force, toward new IT architectures and smart data services. Hermann has over 20 years of experience in IT, engineering, and business development and is part of the NetApp EMEA AI and ML expert team, where he passionately drives technologies and use cases for these new IT domains.
