Investigator finds no evidence of spy chips on Super Micro motherboards

An investigation by an outside firm specializing in corporate investigations has found no evidence that motherboards sold by Super Micro Computer but manufactured in China contained secret chips implanted for spying or backdoor access.

Like every other OEM, Super Micro, based in San Jose, California, sources many of its components from China. There have been issues raised in the past about Chinese-owned hardware companies. IBM faced some initial resistance when it sold its x86 server business to Lenovo, especially since many government agencies — including the Defense Department — used IBM hardware.

But Super Micro was rocked last October when Bloomberg Businessweek ran a lengthy feature article alleging that tiny chips had been secretly planted on Super Micro motherboards to give hackers illegal backdoor access to the servers.

Source: Network World

BrandPost: Top Ten Reasons to Think Outside the Router #5: Manual CLI-based Configuration and Management

We’re now more than halfway through our homage to the iconic David Letterman Top Ten List segment from his former Late Show, as Silver Peak counts down the Top Ten Reasons to Think Outside the Router. Click for reasons #6, #7, #8, #9 and #10 to retire traditional branch routers.

Source: Network World

Computers could soon run cold, no heat generated

Simple energy loss is the main cause of heat build-up in electronics. That seemingly innocuous warming, though, causes a two-fold problem:

First, the energy lost as heat reduces the machine’s computational power: much of the carefully generated, high-power energy dissipates into thin air instead of crunching numbers. Second, as data center managers know, it then costs additional money to remove all that waste heat from the facility.

For both of those reasons (along with others, such as ecological impact and equipment longevity, since hardware breaks down faster at higher temperatures), there is a growing effort to build computers that eliminate heat entirely. Transistors, superconductors, and chip design are three areas where major conceptual breakthroughs were announced in 2018. These are significant developments, and it may not be long before we see the ultimate in efficiency: the cold-running computer.

Source: Network World

IBM and Nvidia announce turnkey AI system

IBM and Nvidia further enhanced their hardware relationship with the announcement of a new turnkey AI solution that combines IBM Spectrum Scale scale-out file storage with Nvidia’s GPU-based AI server.

The name is a mouthful: IBM SpectrumAI with Nvidia DGX. It combines Spectrum Scale, a high-performance flash-based storage system, with Nvidia’s DGX-1 server, which is designed specifically for AI. In addition to its regular GPU cores, each V100 processor includes special AI units called Tensor Cores, optimized to run machine learning workloads. The system ships as a rack of nine Nvidia DGX-1 servers, with a total of 72 Nvidia V100 Tensor Core GPUs.

Source: Network World

Qualcomm makes it official: no more data center chip

A layoff of 269 people at a company of 33,000 usually wouldn’t be noteworthy, but given where the cuts hit, this one is. Qualcomm has signaled the end of the road for Centriq, its ARM-based server processor, which never got out of the starting gate.

U.S. companies must notify their state employment agencies of layoffs 60 days before they happen, making these events less of a surprise once reporters get wind of them. A letter from Qualcomm to its home city of San Diego said 125 people would be let go on February 6, while a note to officials in Raleigh, North Carolina, said another 144 would be cut loose.

The news echoes what happened last June, right down to the number of people let go and the cities affected. The cuts target several divisions, including the company’s data center group, which was barely staffed to begin with. The Information, which first reported the layoffs, says the data center group will be down to just 50 people from a peak of more than 1,000. Among those leaving is the head of the group, Anand Chandrasekher, a former Intel executive.

Source: Network World

Juniper CTO talks cloud, high-speed networking

Cloud computing is changing everything – just ask Juniper CTO Bikash Koley.

Along with that notion, Koley says there are a number of certainties about the future of building out large cloud infrastructures: multicloud is a real inflection point for enterprises and service providers; there will be private clouds; and the way all infrastructure is built going forward will differ from the way things are done today.

Bikash Koley

Source: Network World

How we selected 10 hot data-center virtualization startups to watch

The selection process for our roundup of 10 data-center virtualization startups to watch began with 33 recommendations and nominations that were sent via HARO, LinkedIn, Twitter, and subscribers to the Startup50 email newsletter.

Several of those startups had to be eliminated right off the bat, not because they wouldn’t be a good fit for this roundup (they would be) but because they had already been covered in previous roundups, including those focused on storage, hybrid cloud, and business continuity.

Source: Network World

How Java has stood the test of time

Java has survived for more than two decades and continues to be one of the top programming languages in use today. What accounts for the language’s success and how has it changed to accommodate more modern technology?

Java’s rise to power

Java initially appeared in 1995, evolving from a 1991 project called “Oak.” It arrived at the right time for engineers looking to build distributed systems. Some of the more popular languages of the day, such as C, C++, and even Cobol for some efforts, involved steep learning curves. Java’s multi-threading, which allows the concurrent execution of two or more parts of a program, ended the struggle to get multi-tasking working. Java quickly became the de facto language for mission-critical systems. Since then, new languages have come and gone, but Java has remained entrenched and hard to replace. In fact, Java has stood as one of the top two computing languages practically since its initial appearance, as this Top Programming Languages article suggests.
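The built-in multi-threading credited above can be sketched in a few lines. This is a minimal, hypothetical example (the `ThreadDemo` class and `sumTo` method are illustrative, not from the article), assuming nothing beyond the standard `java.lang.Thread` API:

```java
// A minimal sketch of Java's built-in multi-threading: two parts of a
// program execute concurrently with no external libraries required.
public class ThreadDemo {

    // CPU-bound work each thread performs: sum the integers 1..n.
    static long sumTo(int n) {
        long sum = 0;
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        return sum;
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() ->
                System.out.println("worker-1 sum=" + sumTo(1_000_000)), "worker-1");
        Thread t2 = new Thread(() ->
                System.out.println("worker-2 sum=" + sumTo(1_000_000)), "worker-2");

        t1.start();   // start() returns immediately;
        t2.start();   // both threads now run concurrently
        t1.join();    // wait for both to finish before exiting
        t2.join();
    }
}
```

Compared with the pthread-style APIs of C and C++ at the time, having threads as first-class language objects was a large part of what made Java’s concurrency approachable.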

Source: Network World

BrandPost: 802.11ax means more IoT. Now, how do I secure it?

Like the teenager with no driving experience who takes the family SUV on the open highway, even the simplest devices that are connecting to corporate networks have the power to participate in an attack and cause serious damage.

Courtesy of Moore’s Law, anything with an IP address must now be considered a potential threat. Ironically, the 802.11ax era brings terrific new security features such as WPA3 and OWE. But it also makes the WLAN even more IoT-friendly, given its support for dense concentrations of clients in environments such as smart buildings, where devices like lighting controls are as likely to be connected wirelessly as wired.

Source: Network World