There are multiple dimensions to co-innovation. If you follow my COIL blogs, you know that I touch on a number of them: tacit knowledge exchange, knowledge brokering, intellectual property management, multi-partner solution development, and the practical value derived from recombinant innovation. We observe varying degrees of these dimensions across a spectrum of projects focused on cloud, mobility, big data analytics, security, sustainability, and UI. There is also a variety of focal points: feature and function development, performance characterization, integration, interoperability, validation, and proofs of concept. All of this plays out every day in our COIL labs running in 10+ locations around the world.
It is in our Palo Alto lab, however, with its immediate proximity and persistent connection to Silicon Valley, that I sometimes think the projects run with SAP partners who are also longtime residents of the valley are stitched together with a type of thread that binds our co-innovation practitioners in a way unique to the region. It's a quixotic mix of history, legacy, dreams, and aspirations, underscored by a steady resilience and the attitude that a failure is nothing more than another step forward. I love the fact that despite being a German-born company, SAP is deeply threaded into what makes this place so incredible.
I'm an innovation fanboy to the core and love all our projects, but flipping through past blog posts suggests I have a bias toward data analytics. I'll go ahead and admit it, fueled all the more by the fact that SAP HANA flipped a bit that has forever changed the world, and in a bold way. Silicon Valley Bold.
So while I try to fairly cover as many of our COIL projects as I can, I cannot help but come back to what we've had going on with SAPNS2 and the incredible work the team has done here to develop a number of engineered solutions built "inside" SAP HANA, taking advantage of all the processing engines (geospatial, graph, text analysis, predictive analytics, XS, and event stream) in the areas of geospatial situational awareness, geospatial immersive analysis and visualization, and multi-source data fusion and analysis. When the team first took up residence in our lab, the original implementations of its engineered solutions were limited to a single 256GB server as the core HANA development environment, which we needed to "split" into two 128GB SAP HANA instances.
Beginning last year, this inside-the-Beltway team got its chance to dive headfirst into the Silicon Valley vibe when it teamed up with SGI, taking its first steps to break away from its original hardware moorings and move its situational awareness solutions onto a scale-up architecture.
With the SGI UV-300 environment, purpose-built for SAP HANA, many of the scaling limitations the team faced have now been resolved. As many of you know, the SAP HANA design is based on the concept of Massively Parallel Processing (MPP), which takes advantage of memory, CPUs, and CPU cores to provide a high-performance computing platform for massive data sets. The SGI UV-300 provides tremendous vertical scalability without the need to cluster smaller commodity servers.
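The MPP idea above can be sketched in a few lines: partition the data, scan the partitions in parallel workers, and merge the partial results. This is a toy illustration only, not HANA internals; the partition count and workload are arbitrary.

```python
# Toy sketch of the MPP concept: split a column across workers,
# scan partitions concurrently, then combine partial results.
# (Illustrative only; this is not how SAP HANA is implemented.)
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # Each worker scans only its own partition of the column.
    return sum(chunk)

column = list(range(1_000_000))
partitions = [column[i::4] for i in range(4)]  # 4-way partition

with ThreadPoolExecutor(max_workers=4) as pool:
    total = sum(pool.map(partial_sum, partitions))

print(total)  # 499999500000, same answer as a serial scan
```

The point is that each engine (and each core) can work its own slice of memory independently, with a cheap merge at the end.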
With the UV-300 configured with 24TB of memory, SAPNS2 can now enjoy an environment with an effective data capacity of at least 96TB, assuming a minimum 4X data compression. This vertical scalability allows for more simultaneous development environments, multiple instances of HANA (one for each of the four engineered solutions), and a Hadoop/HDFS configuration to support "cold" data storage and data tiering across each HANA instance. Vertical scalability also means the team can work with large single-node data sets without having to cluster multiple nodes, unlike before on commodity blade servers. This type of scalability enables a Smart Content Exploration solution to grow from a 10 million article repository to a 1.2 billion article repository using only half of the 24TB physical memory of a maximum UV-300 configuration.
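The capacity math above is simple enough to show as a back-of-the-envelope sketch. The 4X ratio is the minimum compression figure cited in this project; real columnar compression ratios vary with data shape.

```python
# Back-of-the-envelope capacity math from the paragraph above.
# 24TB physical memory at a minimum 4X columnar compression
# yields roughly 96TB of effective data capacity.

def effective_capacity_tb(physical_tb: float, compression_ratio: float) -> float:
    """Effective in-memory data capacity given a compression ratio."""
    return physical_tb * compression_ratio

physical = 24.0   # UV-300 configured memory, in TB
ratio = 4.0       # minimum assumed compression; actual varies by data

print(effective_capacity_tb(physical, ratio))  # 96.0
```

Anything above the 4X floor only widens the headroom for the team's multiple HANA instances.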
Performance is outstanding considering a maximally configured UV-300 accommodates 32 sockets and 480 cores. The result for this project is HPC performance in a single 96-inch rack: one single node.
I chatted with Brian Freed, SGI VP of In-Memory Architecture, to gain some further perspective on this. He was quick to mention SGI's long history with geospatial and text analysis, from the early days when nearly all analytics and processes were separate. What is so cool today is how the analytics engines (geospatial, graph, XS, text analytics, streaming, etc.) have converged into a single unified process.
A single node has proven superior to a scale-out cluster largely because the latter relies on a partitioning schema that creates barriers to multi-engine analysis. As one example, graph data likes to lay out across the cluster differently than text analysis, so for each processing engine the multi-node cluster architecture must be configured and tuned to meet that engine's unique requirements in order to deliver the desired performance. Similarly, transactional traffic between cluster nodes will undermine or limit performance. For instance, in OLAP a user triggers a query, gets a result, queries again, and again; if the data updates mid-query, performance is clipped by the need to update nodes across the cluster. Single-node systems using a large data set are not impeded in the same way. We are talking about a single-node system running multiple applications against the same data set, where users do not experience the lag of multi-node updating. Latency is under 500 nanoseconds between any of the UV-300's 4-socket chassis. Together with SAP HANA, that's serious HPC for commercial applications.
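The latency argument above can be made concrete with a toy model: a query that must touch every partition pays a network round trip per remote partition on a cluster, but only a sub-microsecond memory hop on a single shared-memory node. The 500ns figure comes from the text; the network round-trip value is a hypothetical placeholder, not a measured SGI or SAP number.

```python
# Toy latency model contrasting single-node NUMA hops with
# cluster network round trips. (Illustrative only; NETWORK_RTT_NS
# is a hypothetical figure, not a vendor benchmark.)

NUMA_HOP_NS = 500          # cited upper bound between UV-300 chassis
NETWORK_RTT_NS = 50_000    # assumed cluster interconnect round trip

def fanout_latency_ns(partitions: int, single_node: bool) -> int:
    """Worst-case added latency to touch every partition once."""
    per_hop = NUMA_HOP_NS if single_node else NETWORK_RTT_NS
    return partitions * per_hop

print(fanout_latency_ns(8, single_node=True))   # 4000
print(fanout_latency_ns(8, single_node=False))  # 400000
```

Even with generous assumptions for the cluster, the per-partition cost gap compounds with every engine that needs its own partitioning layout.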
Some firms will easily boast that hardware and software engineered together by them alone is what works best. Yet what is possible to forge through diligent co-innovation done through partnerships (especially here in the valley) offers a degree of openness not so present in engineered solutions spawned from a single source. SGI made a decision to engineer its UV-300 from the ground up to fully exploit SAP HANA's defining capabilities. In turn, SAP HANA has been tuned to take full advantage of the SGI single-node architecture. Two companies working in the heart of Silicon Valley, recognizing purposeful collaboration and the exploitation of tribal knowledge, best amplifies the innovation signal that, in the end, resonates with customers looking for a more evolved way to make informed decisions: real-time insights from the multiple gigabytes, and soon multiple zettabytes, of data coming not only from systems of record but from data now streaming from scores of devices and sensors connected to the cloud.
Engaging in real co-innovation work unleashes an incredible force multiplier when leading innovation companies and the exceptional human talent found here in Silicon Valley bring their resources, ideas and their innovation spirit together to do something exceptional. It further accelerates the art of the possible to the delight of customers who can only benefit from such a dynamic mix of competition and collaboration not so easily found anywhere else.