[SMART]DC Data Center Manager: The Brains Behind the Next Generation of Data Center Infrastructure


Today, AMAX launched its eagerly anticipated [SMART]DC Data Center Manager, a robust DCIM software appliance that can manage hardware from all major server brands, including OCP (Open Compute) technology, through a single pane of glass. With dynamic features such as software-defined policies and advanced analytics, [SMART]DC is the key to a highly efficient, modern data center design, giving enterprises the ability to run their data centers up to 30% more efficiently for major cost savings.

To understand what led up to the development of [SMART]DC and why we think this product is a game changer, we sat down with Dr. Rene Meyer, Director of Product Development at AMAX.

AMAX: So to start off, what is [SMART]DC?

Dr. Meyer: [SMART]DC is a turnkey, out-of-band management solution for the next generation of energy- and cost-efficient, highly scalable, heterogeneous data centers. It's deployed as an on-premise server appliance designed to be a plug-and-play solution with minimal installation and setup time, and a single appliance can manage thousands of servers. Speaking for the entire AMAX team, we are very excited and proud to announce the official launch of [SMART]DC today, and we believe it's going to solve a lot of the data center management pain points we have been hearing about from our customers over the past few years.

AMAX: Can you tell us about some of those pain points?

Dr. Meyer: Certainly. AMAX's business model has historically revolved around leveraging our strong engineering background to design and manufacture data center and computing solutions that meet specific customer needs. We have focused on white box platforms and on integrating leading components, which gives us the most design flexibility while helping our customers bypass the brand tax levied by some of the legacy server providers. In recent years, as compute power and density requirements have sharply increased and as data and data analytics have exploded, we have seen data centers scaling at an unprecedented rate. This has made controlling the cost and operations of these data centers a serious priority for our customers. Our large to hyperscale customers with a global footprint, in particular, are rethinking established practices in terms of how to increase IT efficiency and reduce facility overhead.

AMAX: That's one of the drivers behind why companies are so interested in OCP.

Dr. Meyer: Exactly. A lot of these enterprises are looking at companies like Facebook and Microsoft to see how they have scaled, how they are controlling the cost and efficiency of their infrastructure, and how they are managing it all. Much of the focus in recent years has been on achieving cost savings by decreasing the cost of the hardware, whether that's moving away from brand-name servers to white box, or looking for more efficient hardware, be it OCP or traditional server architectures with more energy-efficient power supplies. [SMART]DC came from asking how, beyond just the hardware, we can help data centers achieve significant cost savings.

AMAX: What about for customers who are standardized on traditional servers, not OCP?

Dr. Meyer: The beauty of [SMART]DC is that we realized the modern data center is not either OCP or traditional servers. It's not one brand, but a heterogeneous mix of brands, platforms, and technologies. Even with all the new technology coming out that offers better flexibility, features, performance, and efficiency, as companies transition to new technologies they still need a way to manage their existing and expanding infrastructure through a single pane of glass.

AMAX: You had mentioned companies moving away from legacy hardware to white box. Is this one way that [SMART]DC eases that transition?

Dr. Meyer: Absolutely. Transitioning from a Dell or HP to a white box solution is very attractive in terms of reducing OPEX, increasing the flexibility of solution design, and escaping vendor lock-in. But white box solutions lack a management layer comparable to OpenManage or OneView, so it's a bit like going cold turkey from a manageability standpoint. Not having that Tier 1 management layer can prevent some companies from making the switch altogether. [SMART]DC brings Tier 1 management features like virtual KVM, component fault detection, and call home to white box platforms, not to mention advanced features such as intelligent power management policies and the ability to identify ghost or over-utilized servers that create unnecessary data center cost overhead. So it makes for an easier transition from legacy hardware. Plus, because it's compatible with major server brands, you can manage all your hardware through a single pane of glass. Much of the management software from individual vendors is designed to manage only that vendor's own hardware, which makes it harder to integrate other platforms into your data center.
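To make the idea of spotting ghost or over-utilized servers concrete, here is a minimal sketch of the kind of analysis such a policy might run. It is purely illustrative, using made-up utilization samples and thresholds, and is not [SMART]DC's actual implementation:

    # Illustrative only: flag "ghost" and over-utilized servers from
    # hypothetical per-server CPU utilization samples (percent).
    from statistics import mean

    utilization = {
        "node-01": [2, 1, 3, 2, 1, 2],        # mostly idle
        "node-02": [65, 72, 80, 75, 68, 70],  # healthy
        "node-03": [95, 97, 99, 98, 96, 97],  # possibly overloaded
    }

    GHOST_THRESHOLD = 5   # percent average utilization
    HOT_THRESHOLD = 90

    for host, samples in utilization.items():
        avg = mean(samples)
        if avg < GHOST_THRESHOLD:
            print(f"{host}: avg {avg:.1f}% -> ghost server, candidate for reclaim or power-down")
        elif avg > HOT_THRESHOLD:
            print(f"{host}: avg {avg:.1f}% -> over-utilized, candidate for workload rebalancing")

In practice a DCIM tool would draw on far richer telemetry (power draw, network and disk activity, inventory data), but the underlying principle of turning utilization data into actionable policy is the same.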

AMAX: So who would be an ideal profile for a company who would get the most out of [SMART]DC?

Dr. Meyer: Any company with a large or growing data center footprint that wants to make its data center more cost-effective and energy-efficient, and in particular any company looking to incorporate white box and/or OCP platforms into its data center.

Here is an example: One of our customers is a large financial company. They were standardized on a legacy hardware provider and wanted to move to a white box solution due to factors such as overall cost, solution flexibility, and support. They had major concerns about how to integrate new platforms into their data center without disrupting their day-to-day operations, and frankly, their administrators had gotten used to a certain ease of living, so to speak, when it came to management and maintenance. With [SMART]DC, those migration pain points were taken care of. [SMART]DC not only provided continuity so they could manage both their legacy hardware and their new white box infrastructure, but as an added bonus, they were able to achieve significant overall cost savings by using the software and its data analytics to maximize resource utilization, identify resource inefficiencies, and decrease operational and maintenance overhead through additional automation.

AMAX: So tell me about some of the features that should make customers excited about [SMART]DC.

Dr. Meyer: Really there are so many, but besides the multi-platform compatibility and the features geared towards power savings, we are very proud of our easy-to-use web GUI with an intuitive and configurable dashboard.

AMAX: A web GUI sounds great, but what about the IT admins who are used to scripting?

Dr. Meyer: Of course. We knew that a lot of today's solutions force an either/or choice between a web GUI and IPMI scripting. We had different users in mind, and we accommodated them by making every [SMART]DC function available through the command line interface (CLI). For companies who have already developed an in-house management solution and do not want to reinvent the wheel, but still want to benefit from the growing number of advanced features in [SMART]DC, we enabled easy integration via SOAP API and the command line.
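For readers who script their infrastructure, here is a rough sketch of what wrapping a DCIM command line in an in-house tool can look like. The command name, flags, and output format below are placeholders for illustration, not [SMART]DC's actual CLI syntax:

    # Hypothetical example: call a DCIM CLI from Python and parse its output.
    # "smartdc-cli" and its flags are placeholders, not documented syntax.
    import json
    import subprocess

    def get_power_readings(rack_id):
        """Return per-node power readings for one rack via a (hypothetical) CLI."""
        result = subprocess.run(
            ["smartdc-cli", "power", "list", "--rack", rack_id, "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return json.loads(result.stdout)

    for node in get_power_readings("rack-07"):
        print(node["name"], node["watts"])

The same pattern applies to the SOAP API: an existing in-house dashboard can pull data from the appliance without being rewritten around a new GUI.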

AMAX: Can you talk a little more about Virtual KVM?

Dr. Meyer: Yes, this is one of our essential features. We've built in the ability to access servers remotely via laptop or mobile device with the same functionality as though you were sitting in front of them. With so many companies operating global data centers, giving admins this remote capability was imperative.

AMAX: How is [SMART]DC deployed?

Dr. Meyer: It is deployed as a turnkey solution via an out-of-band, on-premise server appliance. These appliances are designed to be plug-and-play with minimal setup time. Each appliance can manage thousands of servers, and you can easily add licenses as you scale your data center. And if you are buying AMAX servers or integrated racks and already have the appliance deployed, the licenses come included with the AMAX servers as an extra value-add.

AMAX: When will [SMART]DC ship?

Dr. Meyer: We launched today and are currently taking pre-orders, with units due to begin shipping in September on a first-come, first-served basis. If you want a taste of the interface, we are currently offering a test drive.

AMAX: Alright, thanks so much for your time, Dr. Meyer. We at AMAX are very excited about this product launch, and if you have a data center and you like saving money, we think you should be, too! For more information about [SMART]DC, please visit the [SMART]DC webpage or email us at sales@amax.com.

Dr. Rene Meyer is the Director of Product Development at AMAX. He is a technology pioneer with a PhD in Electrical Engineering, and holds over 10 patents.


Artificial Intelligence's Next Task: Defend Our Networks


It's that time of year when the biggest names in hacking and cyber security gather in Las Vegas for the Black Hat USA convention. If the presentations at last year's show are any indication, this week's sessions on machine learning will be among the hottest at the event. Machine learning and other forms of artificial intelligence continue to wow the general public as human levels of skill are achieved in activities ranging from beating world-class Go players to navigating the chaos of traffic. Thought leaders from Elon Musk to Stephen Hawking have even gone so far as to issue warnings about the existential threat artificial intelligence poses to mankind should the technological genie get out of its bottle.

However, while futurists debate whether or not our algorithms will someday replace us as the dominant beings on earth, it is useful to keep in mind the powerful and practical benefits that machine learning and other forms of AI can provide to us today. One such benefit is the potential for helping skilled security analysts to protect our networks from increasingly sophisticated cyber attacks.

An Answer to a Growing Issue

Despite the media awareness of cyber security issues and the salary premiums offered to security specialists, industry leaders make yearly predictions of growing labor shortfalls in cyber security. The problem is typically attributed to the increasing complexity and prevalence of cyber threats. Part of the problem comes from the rapid growth of connected technologies. More networked devices present new opportunities for attackers while further adding unknowns for defenders. Another part of the problem, well known by those in cyber security, is that cyber attacks have become much more profitable for the perpetrators. Because of the increased payout, the perpetrators are able to afford personnel and tools to reverse engineer and defeat traditional forms of threat detection. The rise of these determined attackers, often referred to as Advanced Persistent Threats (APTs), has been a major driver of innovation in cyber security for the past several years.

Machine Learning Brings New Hope…and Problems

One trend gaining momentum among cyber security vendors is the use of machine learning for threat detection. Traditional methods of cyber security focused on the use of heuristics and rules to efficiently and accurately intercept known threats. However, with the rise of APTs came a significant rise in customized attacks designed specifically to bypass a given organization's threat defense. Traditional methods of defense failed as even trivial customizations to malware code enabled it to bypass the sensors. The industry began looking to machine learning for its ability to generate algorithms that generalize from known data in order to properly classify new and unknown data. The application of machine learning is not limited to malware detection. As evidenced by Splunk's acquisition of Caspida, a behavioral analytics company, the industry is seeing success in the use of algorithms to effectively classify and visualize the behavior of network elements. These developments give Tier 1 security analysts the tools to perform at a higher level of skill.
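The core idea, generalizing from labeled, known samples to classify new, unseen ones, can be sketched in a few lines. The features and data below are synthetic stand-ins, not real threat telemetry:

    # Minimal sketch: train a classifier on labeled samples so it can
    # generalize to unseen ones. All data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Synthetic feature vectors: [bytes_out/bytes_in, duration (s), distinct ports]
    benign = rng.normal([1.0, 30.0, 3.0], [0.3, 10.0, 1.0], size=(500, 3))
    malicious = rng.normal([8.0, 5.0, 40.0], [2.0, 2.0, 10.0], size=(500, 3))

    X = np.vstack([benign, malicious])
    y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    print("accuracy on held-out samples:", clf.score(X_test, y_test))

Real-world deployments are far harder, as adversaries adapt and labeled attack data is scarce, but the mechanism of learning a decision boundary rather than hand-writing rules is what lets detection survive trivial malware customizations.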

With Great Potential Come Great Challenges

Despite its great potential, getting machine learning to produce useful algorithms is no easy task. One major challenge involves feature engineering. Before the math can be let loose on a problem, data scientists need to determine the features of the problem that the model will analyze. This is an inherently arduous process that requires an appropriate level of domain knowledge to be brought to bear. Domain knowledge in cyber security is varied and complex, requiring the data scientist to work closely with a network security expert. In this case, feature design depends on the openness of communication between two types of individuals, both in short supply and each engaged in deep levels of thought within disparate technical disciplines.
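To see why feature engineering demands domain knowledge, consider this hypothetical sketch of turning a raw network-flow record into a numeric feature vector. Every field name and every chosen feature is an assumption made for illustration; deciding which properties actually matter is exactly where the security expert comes in:

    # Hypothetical feature extraction for a network-flow record.
    # Field names and feature choices are illustrative assumptions.
    import math

    def flow_features(flow):
        """Convert one raw flow record (a dict) into a numeric feature vector."""
        duration = max(flow["end_ts"] - flow["start_ts"], 1e-3)
        return [
            flow["bytes_out"] / max(flow["bytes_in"], 1),  # exfiltration-like asymmetry
            flow["bytes_out"] / duration,                  # outbound throughput
            math.log1p(flow["dst_port"]),                  # compressed port value
            int(flow["dst_port"] not in (80, 443, 53)),    # non-standard service flag
        ]

    sample = {"start_ts": 0.0, "end_ts": 12.5, "bytes_in": 900,
              "bytes_out": 48_000, "dst_port": 6667}
    print(flow_features(sample))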

Machine Learning Goes Deeper


Cyber security product vendors are starting to look toward a particular branch of machine learning that has been making impressive advances in recent years. Deep learning is credited with giving computers the ability to correctly identify objects in photographic images, and the ability to parse meaning from natural human speech.

These deep learning models are based on neural network architectures, so called because they draw inspiration from models of the human brain. One of the capabilities this brings is the automatic discovery of the features that are significant for classifying data. In other words, deep learning methods remove the need for feature engineering. This does not quite mean a free lunch, since these methods have their own challenges and limitations. However, they can provide novel approaches to security problems that are better suited to a development team's resources. Already, companies are working with deep learning to identify unknown protocols or discover malware on enterprise networks. Cyber security firm Deep Instinct prominently advertises its use of deep learning for endpoint protection. At last year's Black Hat conference, another endpoint protection provider, Cylance, demonstrated its research in converting code to bitmaps so that it could be analyzed by deep learning models.
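The "code as bitmap" idea can be illustrated with a toy model: reshape a file's raw bytes into a two-dimensional grid and let a small convolutional network learn its own features. This is a sketch of the general technique only, not any vendor's actual model or training setup:

    # Toy sketch: treat raw bytes as an image and feed them to a small CNN.
    import torch
    import torch.nn as nn

    def bytes_to_bitmap(data: bytes, side: int = 64) -> torch.Tensor:
        """Pad/truncate raw bytes and reshape to a 1 x 1 x side x side tensor."""
        buf = data[: side * side].ljust(side * side, b"\x00")
        return torch.tensor(list(buf), dtype=torch.float32).view(1, 1, side, side) / 255.0

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(16 * 16 * 16, 2),   # two classes: benign / malicious
    )

    logits = model(bytes_to_bitmap(b"\x4d\x5a\x90\x00" * 1024))  # fake PE-like bytes
    print(logits.shape)  # torch.Size([1, 2]); untrained, shown for shape only

Training such a model still requires large labeled corpora and careful evaluation, but the feature-design step described above largely disappears.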

The momentum in the field of deep learning should be a reassurance to innovators still looking to get involved. Growth of the field can be measured by one of its most prominent benchmark competitions, the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The contest draws some of the biggest names in technology, including IBM's Watson Research Center, Google, Microsoft Research, and MIT, as well as many up-and-coming startups, all competing to demonstrate deep learning's ability to surpass human capability in image and object recognition across several categories. Many of the category winners in last year's competition were aided by deep-learning-optimized platforms featuring NVIDIA GPUs, which greatly reduce the time needed to develop and train deep learning neural networks. Three of the winning teams at ILSVRC 2015, including Microsoft Research and SenseTime, a leader in facial recognition for video surveillance applications, placed 1st in their respective categories supported by deep learning platforms developed by AMAX. By collaborating with AMAX, developers gained world-class platforms both for use in their own groundbreaking research and as security appliances to be deployed at customer sites in order to thwart the next generation of cyber threats. For more information on deep learning platforms or OEM services geared towards cyber security companies looking to bring an integrated security appliance to market, please contact AMAX to fast-track your development.

Rob Lundy is an independent technology and product marketing consultant with a decade of experience in hardware and software solutions for national defense and cyber security. He can be reached at rwlundy@gmail.com.


WARNING: IoT Is Not a Buzz Word

facebooktwittergoogle_pluslinkedinyoutubeflickrby feather


Unless you have been living at the bottom of the ocean enjoying everything Atlantis has to offer, you are probably at least mildly aware of something called the Internet of Things (IoT). While it may seem easy to dismiss IoT as just another tech buzzword, we warn you: DON'T. IoT, once fully evolved, will be a paradigm shift not just in technology but a complete change to the reality in which we live. It represents one of the most exciting breakthroughs of our lifetime. And those forward-thinking companies and entrepreneurs who capitalize on this opportunity will be on the ground floor of one of the biggest shifts in modern technology, bigger than when telephones connected us, airplanes lifted us, and the internet put information at our fingertips. In a way, IoT is the complete and awesome evolution of all the technological advances of the last century towards a grand singularity, an interconnected, insight-driven, and highly automated world of which science fiction has dared to dream.

But before we get too far, let's start with the basics.

What is IoT?

The term Internet of Things describes consumer and industrial objects, often embedded with sensors, that use an internet connection to transmit data to a cloud-based infrastructure for real-time insights and actions.
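At its simplest, the pattern looks like this: a sensor-equipped device periodically posts a reading to a cloud endpoint, where it is stored and analyzed. A minimal sketch follows; the URL, token, and payload schema are hypothetical:

    # Minimal device-to-cloud telemetry sketch. Endpoint, token, and schema
    # are placeholders, not a real service.
    import json
    import time
    import urllib.request

    ENDPOINT = "https://iot.example.com/api/v1/telemetry"

    def post_reading(device_id, temperature_c):
        payload = json.dumps({
            "device_id": device_id,
            "ts": time.time(),
            "temperature_c": temperature_c,
        }).encode("utf-8")
        req = urllib.request.Request(
            ENDPOINT, data=payload,
            headers={"Content-Type": "application/json",
                     "Authorization": "Bearer <token>"},
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status  # the cloud side stores the point and triggers analytics

    # post_reading("thermostat-42", 21.7)  # called once per interval in a real loop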

When people think of IoT devices they typically think of:

  • Smart devices

  • Wearables

  • Smart hubs (like Amazon Echo or Google Home)

  • Industrial components (like jet engines, delivery trucks or farming equipment embedded with sensors)

But the full potential of IoT is an exercise of the imagination. Think of smart cities with a smart traffic grid where the lights can respond and adjust to optimize traffic flow in real time. Add in autonomous cars that can interact with the sensors of the traffic grid to get you to your destination in the quickest time possible, while avoiding accidents and roadblocks as they happen. Think about athletes training using immersive VR simulations that put them in game situations with extreme analytics of their mechanics and performance. Think about predictive health monitoring where your clothes and gadgets can track your daily activities to build a real-time medical record and preemptively alert you of injury or disease. Think about walking by a store in a mall and rather than mannequins, you see a digital representation of what you would look like in clothes that are your exact style and fit.

Because here's the crazy thing: IoT = Everything Is Possible. And if you are reading this article, you're lucky; there's still time to get in the game.

IoT is a whole new landscape of opportunity, creating new industries, products, services, and jobs that don't even exist today. It will bring an entirely new dimension to the way we live, limited only by vision and imagination. The true potential of IoT lies in how we can connect data from any and every source, integrate it multi-dimensionally, and gain actionable, real-time insight in a way we've never had before. The more data sources and the more dynamic the data integration, the more dimensional and actionable the insights.

The IoT market is currently estimated to reach $14.4 TRILLION by 2022, a bullish estimate driven by its pervasive market implications. While most of the general publicity has centered around sensors and devices, the real heart and soul of IoT is how the cloud and data analytics functions work together seamlessly. In essence, IoT in its maturity will only be as strong and as smart as its cloud backbone.

Therefore, the key question is: with the number of devices and analytics applications set to explode, will the infrastructure be ready?

Currently, the majority of the data and metadata generated by IoT devices comes in the form of numbers, words, characters, and the like, which are not large files (think a few kilobytes), but this is poised to change quickly. To give you an idea of future scope and scale, take the aviation industry as a case study. At last year's Paris Air Show, Bombardier showcased its C Series jetliner featuring Pratt & Whitney's Geared Turbofan (GTF) engine. The state-of-the-art GTF engine is fitted with 5,000 sensors that can produce up to 844TB of data per 12-hour flight. In comparison, according to Aviation Week Network, at the end of 2014 Facebook generated approximately 600TB of data per day. With all the GTF engines scheduled to be deployed into the field, multiplied by thousands of flights per day, the sensor data generated by the aviation industry alone could easily surpass the data accumulated by social media. And we're not even counting IoT applications such as automotive, manufacturing, precision agriculture, smart cities, smart buildings, and smart appliances, and all the data these innovations will generate that requires real-time cloud-based analytics and storage.
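A quick back-of-envelope calculation shows just how demanding those numbers are. Taking the figures above at face value (844TB per 12-hour flight across 5,000 sensors, using decimal terabytes):

    # Back-of-envelope check of the GTF engine figures quoted above.
    TB = 10**12                      # bytes (decimal terabyte)
    data_per_flight = 844 * TB
    flight_seconds = 12 * 3600
    sensors = 5000

    engine_rate = data_per_flight / flight_seconds   # bytes/s for the whole engine
    sensor_rate = engine_rate / sensors              # bytes/s per sensor

    print(f"engine: {engine_rate / 10**9:.1f} GB/s")   # ~19.5 GB/s sustained
    print(f"sensor: {sensor_rate / 10**6:.1f} MB/s")   # ~3.9 MB/s per sensor

That is roughly 19.5 GB per second of sustained sensor output from a single engine, which makes clear why transport, real-time analytics, and storage, not the sensors themselves, become the hard part.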

There are 3 major growth factors to take into account when looking at the future of IoT and the infrastructure needed to support it:

  1. The growing pool of connected devices will generate an aggressive uptick in data/metadata that must be transmitted, analyzed and stored.

  2. Software companies are building more powerful software tools that are executing a higher number of analytics processes with sophisticated complexity, thus demanding more compute resources.

  3. Larger datasets and media files, such as photos, videos and soon, VR, will represent challenges to real-time analytics and efficient storage in a cloud environment.

These factors place the bottleneck in the realm of hardware infrastructure, which must be high-performance, highly available, and highly efficient, and must scale sensibly from both a footprint and a cost perspective, with minimal complexity. The truth is, companies who find that they can't efficiently support, process, and store the data they are responsible for may simply find themselves out of business once IoT becomes the interconnected reality in which we live. So for the best chance of survival, and better yet success, in the coming insight-driven future, the planning for modernizing data center and IT infrastructure must start TODAY.

With the scale of data centers rapidly increasing, certain resources such as geographical footprint and power/cooling (i.e., energy and water) become constrained. Led by hyperscale companies like Facebook and Google, IT organizations are looking for ways to make their data center footprint more efficient, as with the introduction of the Open Compute initiative to build modular and hyper-efficient data centers.

To decrease overhead costs, forward-looking companies planning their IoT strategy should consider moving away from legacy, application-specific hardware to a more streamlined, efficient, and virtualized infrastructure built on modular platforms. These platforms scale sensibly and can be easily configured and/or repurposed for efficiency or high-performance requirements, allowing for unprecedented flexibility and utilization of resources. Companies currently using the public cloud should calculate what IoT growth will mean in terms of data and the cost of computing and storage, and consider shifting to an on-premise data center platform, or at least a hybrid approach: keeping metadata in the cloud while leaving the heavy computation and secure storage to on-premise hardware may well be a better cost-optimization model.
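As a rough illustration of that calculation, here is a simple sketch comparing an all-cloud model with a hybrid split. Every price and volume below is an assumption chosen for illustration, not a quote from any provider:

    # Illustrative cost comparison; all figures are assumptions, not quotes.
    monthly_data_tb = 500                 # assumed monthly IoT data volume
    cloud_storage_per_tb_month = 23.0     # assumed $/TB-month, public cloud object storage
    cloud_egress_per_tb = 90.0            # assumed $/TB pulled out for analytics
    onprem_storage_per_tb_month = 8.0     # assumed amortized $/TB-month on-premise

    hot_fraction_in_cloud = 0.10          # hybrid: only metadata/hot data stays in cloud

    all_cloud = monthly_data_tb * (cloud_storage_per_tb_month + cloud_egress_per_tb)
    hybrid = (monthly_data_tb * hot_fraction_in_cloud * cloud_storage_per_tb_month
              + monthly_data_tb * (1 - hot_fraction_in_cloud) * onprem_storage_per_tb_month)

    print(f"all-cloud estimate: ${all_cloud:,.0f}/month")
    print(f"hybrid estimate:    ${hybrid:,.0f}/month")

The point is not the specific numbers but the exercise: once data volumes and egress patterns are plugged in, the right split between public cloud and on-premise infrastructure often looks very different from the default.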

Hardware providers also have to get better. Companies like Intel, ARM, NVIDIA, IBM's OpenPOWER Foundation, and server manufacturers have created powerful chips, GPU cards, and server platforms that achieve greater processing speed and performance at higher efficiency and lower cost. Open-architecture solution providers like AMAX, which draw on all available leading technologies to build rack and cluster solutions optimized for performance and efficiency, can help companies plan an IoT strategy that supports powerful analytics while scaling with optimal flexibility and efficiency, so their infrastructure is never the bottleneck.

From there, everything is possible.


The Knight Has Landed


Enterprises looking to integrate GPU-like parallel computing into their HPC hardware without having to make significant changes to their IT infrastructure should look no further: the Knight has landed. Intel just released the highly anticipated next-generation Xeon Phi processor, previously code-named Knights Landing. The most notable feature of the new Xeon Phi compared to PCIe GPU cards is that it can self-boot and run an operating system as the native host processor. Additionally, each core can handle up to four threads and delivers a 3x per-thread performance improvement compared to earlier Phi products, while the chip features 16GB of on-package MCDRAM. It can also be configured in multiple memory modes, using both the on-package memory and the available DDR4 memory channels.

But what makes the Phi such a big deal? While on the surface it seems like a business-as-usual approach, with the standard hardware upgrades attributed to Moore's Law, the Phi actually delivers the beginnings of supercomputing performance in an integrated, smaller package.

This allows your business to immediately scale up without having to completely redesign your hardware architecture. Meaning, if you want to continuously run theoretical models, create large-scale predictive projects, or simply process data at the speed of light, the Phi will support that. And with AMAX's designed-to-order server and rack solutions, as well as a full menu of value-added services, your exact high-performance computing needs can be satisfied with as little compromise as possible. Click here to learn more.


Best Deep Learning Performance By An NVIDIA GPU Card: The Winner Is



There's been much industry debate over which NVIDIA GPU card is best suited for deep learning and machine learning applications. At GTC 2015, NVIDIA CEO and co-founder Jen-Hsun Huang announced the release of the GeForce Titan X, touting it as the most powerful processor ever built for training deep neural networks. Within months, NVIDIA proclaimed the Tesla K80 the ideal choice for enterprise-level deep learning applications, citing enterprise-grade reliability through ECC protection and GPUDirect for clustering, advantages over the Titan X, which is technically a consumer-grade card. Then in November of 2015, NVIDIA released the Tesla M40. At 5x the price point of the Titan X, the Tesla M40 was marketed as "The World's Fastest Deep Learning Training Accelerator."

With this many "world's fastests" and "most powerfuls" in such a short period of time, people were understandably confused. Therefore, as a leader in high-performance computing technologies and deep learning solutions, AMAX's engineering team endeavored to benchmark the various cards to determine which NVIDIA card truly performed best for deep learning.

In a whitepaper titled "Basic Performance Analysis of NVIDIA GPU Accelerator Cards for Deep Learning Applications," AMAX's team analyzed the NVIDIA K40, K80, and M40 enterprise GPU cards along with the GeForce GTX Titan X and GTX 980 Ti (water-cooled) consumer-grade cards, running 256×256-pixel image recognition training using the Caffe framework. The systems used in the benchmark tests were AMAX's DL-E400 (4-GPU workstation), DL-E380 (3U 8-GPU server), and DL-E800 (4U 8-GPU server).

The study included:

- Card-specific performance analysis

- Performance scaling from a single-GPU system up to 8-GPU nodes

- Performance impact of the CPU

- Single- and dual-CPU solutions

- Platform-specific performance differences
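For readers who want a feel for how such throughput numbers are gathered, here is a rough sketch of a training-throughput measurement using pycaffe. The solver definition and batch size are placeholders; the whitepaper documents AMAX's actual methodology:

    # Rough throughput measurement sketch with pycaffe (placeholder solver/batch size).
    import time
    import caffe

    caffe.set_device(0)      # first GPU
    caffe.set_mode_gpu()

    solver = caffe.get_solver("solver.prototxt")   # placeholder solver definition
    batch_size = 64                                # must match the training prototxt
    iterations = 100

    start = time.time()
    solver.step(iterations)                        # forward/backward passes
    elapsed = time.time() - start

    print(f"{iterations * batch_size / elapsed:.1f} images/sec on GPU 0")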

The Results


The study found that increasing the number of cards scaled the performance linearly, and cards based on the Maxwell architecture (Titan X, 980 Ti, M40) outperformed the Kepler cards (K40 and K80).

Most interesting was how poorly the K80 performed despite having the highest single-precision TFLOPS spec.

So which card performed best in our deep learning benchmark testing? Surprisingly, the water-cooled GTX 980 Ti. The Titan X and M40 came in second, displaying nearly neck-and-neck performance. Since the GTX 980 Ti may not be suitable for server integration, our recommendation would be the Titan X and M40 cards for deep learning applications, with the Titan X providing the best performance-to-cost ratio.


It remains to be seen how the Pascal-based GTX 1080 (the replacement for the GTX 980, to be released on May 27th, 2016) will perform in comparison, but early feedback suggests that the "2x better performance than Titan X" claim relates to VR applications, not deep learning applications.

In the meantime, to learn more about the results of AMAX's deep learning benchmark testing, you can download the white paper here. AMAX's full line of built-to-order deep learning solutions can be found here.
