WARNING: IoT Is Not a Buzzword


Photo credit: www.control4.com/

Unless you have been living at the bottom of the ocean enjoying everything Atlantis has to offer, you are probably at least mildly aware of something called the Internet of Things (IoT). While it may seem easy to dismiss IoT as just another tech buzzword, we warn you: DON'T. IoT, once fully evolved, will be a paradigm shift not just in technology but a complete change in the reality in which we live. It represents one of the most exciting breakthroughs of our lifetime. The forward-thinking companies and entrepreneurs who capitalize on this opportunity will be on the ground floor of one of the biggest shifts in modern technology, bigger than when telephones connected us, airplanes lifted us, and the internet put information at our fingertips. In a way, IoT is the complete and awesome evolution of all the technological advances of the last century toward a grand singularity: an interconnected, insight-driven and highly automated world of which science fiction has dared to dream.

But before we get too far, let's start with the basics.

What is IoT?

The Internet of Things describes consumer and industrial objects, often embedded with sensors, that use an internet connection to transmit data to cloud-based infrastructure for real-time insights and actions.
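
To make that sensor-to-cloud pattern concrete, here is a minimal sketch of a device publishing telemetry to a cloud broker over MQTT, a protocol commonly used for this kind of traffic. It assumes the paho-mqtt Python library; the broker address, topic name and payload fields are hypothetical placeholders.

    # Minimal sketch of an IoT sensor publishing telemetry to a cloud broker.
    # Assumes the paho-mqtt client library (pip install paho-mqtt); the broker
    # address, topic name and payload fields are hypothetical placeholders.
    import json
    import random
    import time

    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"        # hypothetical cloud MQTT endpoint
    TOPIC = "plant/line1/temperature"    # hypothetical topic for this sensor

    client = mqtt.Client()
    client.connect(BROKER, 1883)         # standard unencrypted MQTT port
    client.loop_start()                  # handle network I/O in the background

    for _ in range(10):                  # publish a handful of sample readings
        reading = {
            "sensor_id": "temp-001",
            "celsius": round(20.0 + random.uniform(-0.5, 0.5), 2),
            "timestamp": time.time(),
        }
        client.publish(TOPIC, json.dumps(reading), qos=1)
        time.sleep(5)                    # one reading every five seconds

    client.loop_stop()
    client.disconnect()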

When people think of IoT devices, they typically think of:

  • Smart devices

  • Wearables

  • Smart Hubs (like Echo by Amazon or Google Home)

  • Industrial components (like jet engines, delivery trucks or farming equipment embedded with sensors)

But the full potential of IoT is an exercise of the imagination. Think of smart cities with a smart traffic grid where the lights can respond and adjust to optimize traffic flow in real time. Add in autonomous cars that can interact with the sensors of the traffic grid to get you to your destination in the quickest time possible, while avoiding accidents and roadblocks as they happen. Think about athletes training using immersive VR simulations that put them in game situations with extreme analytics of their mechanics and performance. Think about predictive health monitoring where your clothes and gadgets can track your daily activities to build a real-time medical record and preemptively alert you of injury or disease. Think about walking by a store in a mall and rather than mannequins, you see a digital representation of what you would look like in clothes that are your exact style and fit.

Because here's the crazy thing: IoT = Everything Is Possible. And if you are reading this article, you're lucky; there's still time to get in the game.

IoT is a whole new landscape of opportunity, creating new industries, products, services and jobs that don't even exist today. It will bring an entirely new dimension to the way we live, limited only by vision and imagination. The true potential of IoT lies in how we can connect data from any and every source, integrate it multi-dimensionally, and gain actionable, real-time insight in a way we've never had before. The more data sources, the more dynamic the data integration, the more dimensional and actionable the insights.

The IoT market is currently estimated to reach $14.4 TRILLION by 2022, a bullish estimate driven by its pervasive market implications. While most of the publicity has centered on sensors and devices, the real heart and soul of IoT is how the cloud and data analytics functions work together seamlessly. In essence, IoT in its maturity will only be as strong and as smart as its cloud backbone.

The key question, then, is this: with the number of devices and analytics applications set to explode, will the infrastructure be ready?

Currently, the majority of data/metadata generated by IoT devices comes in the form of numbers, words, characters, etc., which are not large files (think a few kilobytes), but this is poised to change quickly. To give you an idea of future scope and scale, take the aviation industry as a case study. At last year's Paris Air Show, Bombardier showcased its C Series jetliner featuring Pratt & Whitney's Geared Turbofan (GTF) engine. The state-of-the-art GTF engine is fitted with 5,000 sensors that can produce up to 844TB of data per 12-hour flight. In comparison, according to Aviation Week Network, at the end of 2014 Facebook generated approximately 600TB of data per day. With all the GTF engines scheduled to be deployed into the field, multiplied by thousands of flights per day, the sensor data generated by the aviation industry alone could easily surpass the data accumulated by social media. And we're not even counting IoT applications such as automotive, manufacturing, precision agriculture, smart cities, smart buildings and smart appliances, and all the data these innovations will generate that requires real-time cloud-based analytics and storage.
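
To put those figures in perspective, here is a quick back-of-envelope calculation using only the numbers quoted above (and treating 1 TB as 10^12 bytes), showing what that sensor stream looks like as a sustained data rate:

    # Back-of-envelope arithmetic on the figures quoted above: 5,000 sensors,
    # up to 844TB per 12-hour flight, and Facebook at roughly 600TB per day.
    TB = 10**12                          # treating 1 TB as 10^12 bytes

    sensors = 5_000
    flight_bytes = 844 * TB
    flight_seconds = 12 * 3600

    engine_rate = flight_bytes / flight_seconds   # bytes per second, whole engine
    sensor_rate = engine_rate / sensors           # bytes per second, per sensor

    print(f"Per-engine data rate: {engine_rate / 10**9:.1f} GB/s")
    print(f"Per-sensor data rate: {sensor_rate / 10**6:.1f} MB/s")

    # A single 12-hour flight already exceeds Facebook's quoted daily volume.
    facebook_daily = 600 * TB
    print(f"One flight vs. Facebook per day: {flight_bytes / facebook_daily:.2f}x")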

There are 3 major growth factors to take into account when looking at the future of IoT and the infrastructure needed to support it:

  1. The growing pool of connected devices will generate an aggressive uptick in data/metadata that must be transmitted, analyzed and stored.

  2. Software companies are building more powerful tools that execute a growing number of increasingly complex analytics processes, demanding more compute resources.

  3. Larger datasets and media files, such as photos, videos and, soon, VR, will challenge real-time analytics and efficient storage in a cloud environment.

These factors place the bottleneck squarely in the realm of hardware infrastructure, which must be high-performance, highly available and highly efficient, and must scale sensibly in terms of footprint, cost and complexity. The truth is, companies that find they can't efficiently support, process and store the data they are responsible for may simply find themselves out of business once IoT becomes the interconnected reality in which we live. So for the best chance of survival, and better yet, success in the upcoming insight-driven future, planning to modernize data center/IT infrastructure must start TODAY.

With the scale of data centers rapidly increasing, certain resources such as geographical footprint and power/cooling (i.e., energy and water) become constrained. Led by hyperscale companies like Facebook and Google, IT organizations are looking for ways to make their data center footprint more efficient, for example through the Open Compute initiative to build modular and hyper-efficient data centers.

To decrease overhead costs, forward-looking companies planning their IoT strategy should consider moving away from legacy, application-specific hardware to a more streamlined, efficient and virtualized infrastructure built on modular platforms. These platforms scale sensibly and can be easily configured and/or repurposed for efficiency or high-performance requirements, allowing for unprecedented flexibility and utilization of resources. Companies currently using the public cloud should calculate what IoT growth will mean in terms of data volume and the cost of compute and storage, and consider shifting to an on-premises data center platform, or at least a hybrid approach: keeping metadata in the cloud while leaving the heavy computation and secure storage to on-premises hardware may be a better cost-optimization model.
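
As a starting point for that calculation, below is a deliberately simplified sketch of a cloud-versus-on-premises cost comparison. Every price in it is a hypothetical placeholder rather than real vendor pricing; the point is only to show how the trade-off can shift as data volume grows.

    # Deliberately simplified cloud-vs-on-premises cost sketch. Every price
    # below is a hypothetical placeholder, not real vendor pricing; plug in
    # actual quotes before drawing any conclusions.

    def cloud_monthly_cost(stored_tb, egress_tb,
                           storage_per_tb=25.0, egress_per_tb=90.0):
        """Recurring cost: pay per TB stored plus per TB moved out each month."""
        return stored_tb * storage_per_tb + egress_tb * egress_per_tb

    def on_prem_monthly_cost(stored_tb, capex_per_tb=150.0,
                             amortization_months=36, opex_per_tb=5.0):
        """Hardware amortized over its service life, plus power/cooling/admin."""
        return stored_tb * (capex_per_tb / amortization_months + opex_per_tb)

    # How the comparison shifts as the IoT dataset grows.
    for stored_tb in (50, 500, 5_000):
        egress_tb = stored_tb * 0.10     # assume 10% of the data leaves the cloud monthly
        cloud = cloud_monthly_cost(stored_tb, egress_tb)
        onprem = on_prem_monthly_cost(stored_tb)
        print(f"{stored_tb:>5} TB stored: cloud ${cloud:,.0f}/mo vs. on-prem ${onprem:,.0f}/mo")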

Hardware providers also have to keep improving. Companies like Intel, ARM, IBM's OpenPOWER Foundation, NVIDIA and the server manufacturers have created powerful chips, GPU cards and server platforms that achieve greater processing speed and performance at higher efficiency and lower cost. Open-architecture solution providers like AMAX, which draw on all of these leading technologies to build rack-to-cluster solutions optimized for performance and efficiency, can help companies plan an IoT strategy that supports powerful analytics while scaling with optimal flexibility and efficiency, so their infrastructure is never the bottleneck.

From there, everything is possible.


The Knight Has Landed


Enterprises looking to integrate GPU-like parallel computing into their HPC hardware without having to make significant changes to their IT infrastructure should look no further: the Knight has landed. Intel just released the highly anticipated next-generation Xeon Phi processor, previously code-named Knights Landing. The most notable feature of the new Xeon Phi compared to PCIe GPU cards is that it can self-boot and run the operating system as the native processor. Additionally, each core can handle up to four threads, with a 3x per-thread performance improvement over earlier Phi products, and the chip features 16GB of onboard MCDRAM. It can also be configured in multiple memory modes, using both the onboard memory and the available DDR4 memory channels.
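
As a quick illustration of what that memory layout looks like from software, here is a small sketch that assumes a Linux host with the Phi configured in the common "flat" memory mode, where the MCDRAM typically appears as its own NUMA node alongside the DDR4 node(s). It only reads standard sysfs paths; node numbering and sizes will vary by system.

    # Quick look at how a Linux host exposes a Knights Landing part: the
    # logical CPU count (cores x 4 hardware threads) and the NUMA layout.
    # In the common "flat" memory mode, the 16GB of MCDRAM typically shows
    # up as its own NUMA node alongside the DDR4 node(s). Only standard
    # sysfs paths are read; node numbering and sizes vary by system.
    import os
    import re

    print(f"Logical CPUs (hardware threads): {os.cpu_count()}")

    node_root = "/sys/devices/system/node"
    nodes = sorted(n for n in os.listdir(node_root) if re.fullmatch(r"node\d+", n))
    for node in nodes:
        with open(os.path.join(node_root, node, "meminfo")) as f:
            mem_total_kb = int(f.readline().split()[3])   # "Node N MemTotal: X kB"
        print(f"{node}: {mem_total_kb / 2**20:.1f} GB")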

But what makes the Phi such a big deal? While on the surface it seems like a business-as-usual approach, with the standard hardware upgrades attributed to Moore's Law, the Phi actually delivers the beginnings of supercomputing performance in an integrated, smaller package.

This allows your business to immediately scale up without having to completely redesign your hardware architecture. Meaning, if you want to continuously run theoretical models, create large-scale predictive projects, or simply process data at the speed of light, the Phi will support that. And with AMAX's designed-to-order server and rack solutions, as well as a full menu of value-added services, your exact high performance computing needs can be satisfied with as little compromise as possible. Click here to learn more.


Best Deep Learning Performance By An NVIDIA GPU Card: The Winner Is



There's been much industry debate over which NVIDIA GPU card is best suited for deep learning and machine learning applications. At GTC 2015, NVIDIA CEO and co-founder Jen-Hsun Huang announced the release of the GeForce Titan X, touting it as the most powerful processor ever built for training deep neural networks. Within months, NVIDIA proclaimed the Tesla K80 the ideal choice for enterprise-level deep learning applications, citing enterprise-grade reliability through ECC protection and GPUDirect for clustering, an edge over the Titan X, which is technically a consumer-grade card. Then in November of 2015, NVIDIA released the Tesla M40. At 5x the price point of the Titan X, the Tesla M40 was marketed as "The World's Fastest Deep Learning Training Accelerator."

With this many "world's fastest" and "most powerful" claims in such a short period of time, people were understandably confused. Therefore, as a leader in high-performance computing technologies and deep learning solutions, AMAX's engineering team set out to benchmark the various cards to determine which NVIDIA card performed best for deep learning.

In a whitepaper titled "Basic Performance Analysis of NVIDIA GPU Accelerator Cards for Deep Learning Applications," AMAX's team analyzed the NVIDIA K40, K80 and M40 enterprise GPU cards along with the GeForce GTX Titan X and GTX 980 Ti (water-cooled) consumer-grade cards, running 256×256-pixel image recognition training in Caffe. The systems used in the benchmark tests were AMAX's DL-E400 (4x GPU workstation), DL-E380 (3U 8x GPU server) and DL-E800 (4U 8x GPU server).
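
For readers who want to run a similar comparison on their own hardware, here is a minimal sketch (not AMAX's actual benchmark harness) of timing per-iteration training throughput with pycaffe. The solver prototxt path and batch size are placeholders for whatever model is under test.

    # Minimal sketch (not AMAX's actual benchmark harness) of timing training
    # throughput on a single GPU with pycaffe. The solver prototxt path and
    # batch size are placeholders for whatever model is under test.
    import time

    import caffe

    caffe.set_mode_gpu()
    caffe.set_device(0)                          # which GPU card to benchmark

    solver = caffe.SGDSolver("solver.prototxt")  # placeholder solver definition

    warmup_iters, timed_iters, batch_size = 10, 100, 64
    solver.step(warmup_iters)                    # warm up clocks and allocators

    start = time.time()
    solver.step(timed_iters)                     # run the timed training iterations
    elapsed = time.time() - start

    print(f"{timed_iters * batch_size / elapsed:.1f} images/sec on GPU 0")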

The study included:

- Card specific performance analysis

- Performance scaling from single GPU system to up to 8x GPU nodes

- Performance impact of the CPU

- Single and dual CPU solutions

- Platform-specific performance differences

The Results


The study found that increasing the number of cards scaled the performance linearly, and cards based on the Maxwell architecture (Titan X, 980 Ti, M40) outperformed the Kepler cards (K40 and K80).

Most interesting was how poorly the K80 performed despite having the highest single-precision TFLOPS spec on paper (a number that aggregates the two Kepler GPUs on the board).

So which card performed the best in our deep learning benchmark testing? Surprisingly, the water-cooled GTX 980 Ti. The Titan X and M40 came in second, displaying nearly neck-and-neck performance. Since the GTX 980 Ti may not be suitable for server integration, our recommendation would be the Titan X and M40 cards for deep learning applications, with the Titan X providing the best performance-to-cost ratio.


It remains to be seen how the Pascal-based GTX 1080 (the replacement for the GTX 980, to be released on May 27th, 2016) will perform in comparison, but early feedback is that the "2x better performance than Titan X" statistic relates to VR applications, not deep learning applications.

In the meantime, to learn more about the results of AMAX's deep learning benchmark testing, you can download the white paper here. AMAX's full line of built-to-order deep learning solutions can be found here.


100 Years: The Movie You'll Never See


This month at the Cannes Film Festival, the new movie starring John Malkovich and directed by Robert Rodriguez (Sin City, Machete) will be showcased, if by "showcased" you mean that the vault in which the physical film is sealed will be displayed in the by-invitation-only Louis XIII Suite at the Hôtel Le Majestic Barrière Cannes.

The movie is literally titled 100 Years: The Movie You Will Never See. It's a never-before-done concept that envisions Earth 100 years from now, and it is set to be released on November 18, 2115, a release date well beyond the range of the average human lifespan.

WTF, you may be saying to yourself, especially once you've been tantalized with the Blade Runner-esque teasers. However, the project, written by Malkovich, is designed to be a cinematic time capsule, a way to see how closely the filmmakers' vision of future reality compares to actual reality in the year 2115.

The film's storyline has been a closely guarded secret. All we know is that the film stars Malkovich, Marko Zaror and Shya Chang and is set in the year 2115. The custom-built safe that holds the physical film reel utilizes a time-release technology and can only be opened once the 100-year countdown is completed on November 18, 2115.

Even those involved with the film will have to wait until its release date.

"It's the first time I've done anything like this," said Rodriguez in an interview with IndieWire. "I was intrigued by the whole concept of working on a film that would be locked away for a hundred years. They even gave me silver tickets for my descendants to be at the premiere in Cognac in 2115. How cool is that? What John and I wanted it to be was a work of timeless art that can be enjoyed in 100 years. I'm very proud of it even if only my great grandkids and hopefully my clone will be around to watch."

Some have dismissed the project as an elaborate publicity stunt for Louis XIII Cognac, with the movie being a tribute to the century of careful craftsmanship required to create each decanter of the luxury liquor. Seeking a work of art that could speak to the brand's commitment to time-aged quality, the company brought on Malkovich and Rodriguez to create the vehicle.

"Louis XIII is a true testament to the mastery of time and we sought to create a proactive piece of art that explores the dynamic relationship of the past, present and future," said global executive director of Louis XIII, Ludovic du Plessis.

"There were several options when the project was first presented of what [the future] would be," said Malkovich. "An incredibly high-tech, beyond-computerized version of the world; a post-Chernobyl, back-to-nature, semi-collapsed civilization; and then there was a retro future, which was how the future was imagined in the science fiction of the 1940s or '50s."

All three have been represented in a series of teasers.

While most of us will not be around to see how closely the film nails our future reality, there's no doubt that the world is poised for incredible change in the near future thanks to all the technology advances around us, particularly developments in artificial intelligence, machine learning and automation. Self-driving cars are a reality, and smarter devices and machines mean more integrated city and global infrastructures. Better data analytics mean more predictive technologies in all industries, including health care, security and the social sciences. And hopefully, bots will one day be much more helpful after they learn to stay away from the bad kids. AMAX is highly invested in pushing our world toward a smart-technology tipping point by assisting AI and machine learning developers in creating intelligent technologies through its Deep Learning Solutions. Beyond that, 100 years from now, it's anyone's guess. But as Rodriguez said, with any luck, our clones will be there to decide whether 100 Years was just a glorified publicity stunt or well worth the wait.


NVIDIA DGX-1: The Game Changer That Took Deep Learning To Ludicrous Speed


Last week, NVIDIA made a groundbreaking announcement, launching its mega-powerful NVIDIA DGX-1 Deep Learning System at the GPU Technology Conference in San Jose. As a purpose-built Deep Learning solution featuring 7TB of SSD storage and a whopping 170 TFLOPS of performance, the DGX-1 was rightfully marketed as the world's first and most powerful Deep Learning Supercomputer-in-a-Box, 12x faster than any GPU-accelerated solution that has come before.

The NVIDIA DGX-1 is not a configurable server, or a component that must be integrated into a larger Deep Learning system. It is a turnkey, plug-and-play solution featuring eight Pascal-based Tesla P100 cards installed in a hybrid cube mesh configuration and interconnected with NVIDIA NVLink. The system comes fully integrated with hardware and software designed specifically for deep learning development. It even comes with NVIDIA-backed support, software upgrades and a cloud management portal, so that companies have all the tools they need at their fingertips to quickly train neural networks with the processing power necessary to create viable Deep Learning applications.
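
For anyone curious how such a multi-GPU topology shows up on a running system, the driver's own tooling can report it. The small sketch below simply wraps nvidia-smi topo -m, which on machines with the NVIDIA driver installed prints an interconnect matrix in which NVLink connections appear as NV* entries and PCIe-only paths appear as PIX/PHB/SYS.

    # Inspect how the GPUs in a multi-GPU system are wired together. On a
    # machine with the NVIDIA driver installed, `nvidia-smi topo -m` prints
    # an interconnect matrix: NVLink connections appear as NV* entries, while
    # PCIe-only paths show up as PIX/PHB/SYS.
    import subprocess

    topology = subprocess.run(
        ["nvidia-smi", "topo", "-m"],
        capture_output=True, text=True, check=True,
    )
    print(topology.stdout)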

With the DGX-1, NVIDIA has now given developers a powerful engine with which to radically reduce training and inference time, fast-tracking new products and features based on AI or machine learning to market at the speed of innovation. This is critical for a technology on the brink of changing our world and the way machines interact with and enrich it. As with the gold rush and the space race, companies are racing to take advantage of a wide-open opportunity to create smarter and smarter applications.

That is why AMAX was chosen as an official NVIDIA Technology Partner authorized to take pre-orders for the DGX-1 Deep Learning System. "The DGX-1 was purpose-built to help researchers and data scientists achieve new milestones in creating AI applications," said Jim McHugh, vice president and general manager of GRID and DGX-1 at NVIDIA. "AMAX's extensive expertise in delivering deep learning solutions will be of considerable value to customers incorporating this one-of-a-kind supercomputing platform into their data centers to power their most demanding deep learning workloads."

During his keynote opening GTC 2016, Jen-Hsun Huang described deep learning not as a niche application but as a world-changing computing platform that every application will one day depend on. "The number of companies involved in deep learning has just exploded," Huang said. "Every internet service provider, every major computing company, the type of applications for deep learning to enhance the smartness of our applications, to enhance the greater insight that we can derive from large data is really crazy. Intelligent video analysis, surveillance will never be the same. Intelligent video tagging, image tagging, recognizing images, image search, voice, translation, a universal translator; applications like Twitter, Uber and all these other amazing applications are all now powered by deep learning. The recommendations engines of movies and Amazon are going to go through a whole new phase of renaissance."

Every industry is touched by deep learning development. Exciting projects include self-driving cars, social media, AI and robotics, medical imaging and diagnosis, personalized online retail experiences and many more that seem to break ground every day. The entrance of the DGX-1 could provide the tipping point for building applications which are, if not as smart as humans, smart enough to cater to real human needs and desires.

The DGX-1 software stack includes all major Deep Learning frameworks, the NVIDIA Deep Learning SDK, the DIGITS GPU training system, drivers, and CUDA. This allows Deep Learning developers to construct deep neural networks (DNN) in their preferred machine learning framework, backed by the diagnostics and support offered by NVIDIA. No less important, the DGX-1 has been designed so that Xeon compute, Tesla compute and networking options can be upgraded independently. This transforms the DGX-1 into a total solution that can be deployed in-house within a matter of minutes.

Deep Learning is not just the IT buzzword of 2016; it is quickly becoming the key to an entire paradigm shift in technology and the world in which we live. Because it has real-world applications in almost every industry and facet of life, from enterprise to academia to consumer, development to date has only scratched the surface of its game-changing potential. But regardless of how you choose to utilize Deep Learning/Machine Learning in your application or development, there is no doubt that the DGX-1 can get you there, like a bullet train. With the DGX-1, developers now have access to a deep learning system with 12x higher application performance than any previous GPU-accelerated solution. That is the equivalent of a year's worth of development completed in a single month! If the glory (and the money) goes to the one who reaches the breakthrough first, the DGX-1 may well have just leveled the playing field.
