Intel is acknowledging the changing face and internal functions of the
data center with a new initiative designed to re-architect the
underlying infrastructure, allowing companies and end users to adapt
their data centers to a more services- and mobility-oriented
environment.
The strategy, as laid out at its Datacenter Day event held in late July
and hosted by several Intel executives, is for automation and speed to
replace manual, time-consuming and often fixed functions, each with its
own independent configuration, said Diane Bryant, senior vice president
and general manager of the Datacenter and Connected Systems Group at
Intel.
"Today, the network is still manually configured. The process to
reconfigure a new network to support the service is a manual process
that can take weeks," said Bryant. With the virtualized,
software-defined network, the time to provision software and hardware on
a new service can be reduced to just minutes.
She also noted that data storage continues to grow at a 40% compound
annual growth rate, with 90% of it unstructured. She cited an IBM study
that found businesses store on average 18 copies of the same piece of
data. "That actually sounds quite logical having run IT for four years,"
joked Bryant, who was previously CIO of Intel.
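For a sense of how that kind of duplication gets found in practice, here is a minimal content-hash deduplication sketch; the scan root and function names are hypothetical, and this is an illustration rather than anything Intel or IBM described.

```python
# Minimal sketch: finding duplicate copies of data by content hash.
# The scan root and names here are hypothetical, for illustration only.
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict[str, list[Path]]:
    """Group files under `root` by the SHA-256 of their contents."""
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            groups[digest].append(path)
    # Keep only hashes seen more than once, i.e. true duplicate copies.
    return {h: paths for h, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    for digest, copies in find_duplicates("/var/data").items():
        print(f"{len(copies)} copies with hash {digest[:12]}: {copies}")
```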
Finally, she noted that even with virtualization, server utilization
barely reaches 50%. "That means 50% of server capacity is unused, which
is a true crime," said Bryant.
To solve this, Intel is looking to re-architect not just servers but the
network for cloud services, and it plans to do so via software-defined
networks. "Software-defined networks allow us to extract the control
function out of the switch, run it globally, run it on standard high
volume Intel hardware as just another app running on your Intel
architecture. That drives up the utilization and drives down the capital
expense through the movement off proprietary servers," said Bryant.
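Bryant's description amounts to pulling the control plane out of each switch and running it as ordinary software on standard servers. As a rough sketch of that separation only, with hypothetical class names and no relation to Intel's actual implementation:

```python
# Conceptual sketch of SDN's control/data-plane split: a central
# controller computes forwarding rules; switches only apply the rules
# pushed to them. Class and method names are hypothetical.

class Switch:
    """A 'dumb' data-plane element: it forwards per its rule table."""
    def __init__(self, name: str):
        self.name = name
        self.rules: dict[str, str] = {}  # destination address -> output port

    def install_rule(self, dst: str, port: str) -> None:
        self.rules[dst] = port

    def forward(self, dst: str) -> str:
        return self.rules.get(dst, "drop")

class Controller:
    """The control plane, running as just another app on a server."""
    def __init__(self):
        self.switches: list[Switch] = []

    def attach(self, switch: Switch) -> None:
        self.switches.append(switch)

    def provision_service(self, dst: str, port: str) -> None:
        # Reconfiguring the network becomes a software operation: push
        # the new rule everywhere in one pass, minutes instead of weeks.
        for sw in self.switches:
            sw.install_rule(dst, port)

ctrl = Controller()
ctrl.attach(Switch("tor-1"))
ctrl.attach(Switch("tor-2"))
ctrl.provision_service("10.0.0.42", "eth3")
print(ctrl.switches[0].forward("10.0.0.42"))  # -> eth3
```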
Intel wants to help companies move beyond the standard generation of
data centers by offering what it calls the Rack Scale Architecture
(RSA), which will virtualize the whole network and every component in
it. An application will assemble the CPU, memory, storage and networking
it needs from the pool of assembled hardware and build its own virtual
server, storage and network.
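To make the pooling idea concrete, here is a toy model of that composition step; the class names and capacities are hypothetical, and this is not Intel's actual RSA interface:

```python
# Toy model of rack-scale composition: an application requests resources
# from shared pools and gets back a "virtual server" assembled on demand.
# All names and capacities are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class VirtualServer:
    cores: int
    memory_gb: int
    storage_gb: int
    bandwidth_gbps: int

class RackPool:
    """Disaggregated CPU, memory, storage and network capacity in a rack."""
    def __init__(self, cores, memory_gb, storage_gb, bandwidth_gbps):
        self.free = VirtualServer(cores, memory_gb, storage_gb, bandwidth_gbps)

    def compose(self, cores, memory_gb, storage_gb, bandwidth_gbps):
        """Carve a virtual server out of the pooled hardware, if it fits."""
        f = self.free
        if (cores > f.cores or memory_gb > f.memory_gb
                or storage_gb > f.storage_gb or bandwidth_gbps > f.bandwidth_gbps):
            raise RuntimeError("rack pool exhausted")
        f.cores -= cores
        f.memory_gb -= memory_gb
        f.storage_gb -= storage_gb
        f.bandwidth_gbps -= bandwidth_gbps
        return VirtualServer(cores, memory_gb, storage_gb, bandwidth_gbps)

rack = RackPool(cores=512, memory_gb=8192, storage_gb=200_000, bandwidth_gbps=400)
web_tier = rack.compose(cores=8, memory_gb=32, storage_gb=100, bandwidth_gbps=10)
print(web_tier)
```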
Why is Intel doing this?
Intel is a chip company, yet it's taking charge of a component of
downstream technology: server architecture. For Intel to get into
server, network and storage architecture would be like Qualcomm, the
dominant player in mobile phone chips, deciding that the cellular
networks were not being well managed by Verizon, AT&T and the rest, and
that it was going to do something about it.
The problem is the big server vendors aren't taking charge. All are
distracted to one degree or another. CEO Meg Whitman is slowly righting
the ship at HP, but that company has taken a severe body blow in recent
years. Dell is in even worse shape amid its lingering privatization
plans, and IBM appears uninterested in the x86 business: it tried to
sell the System x line to Lenovo, but the two couldn't reach a price
agreement.
Nature and business abhor a vacuum, and Intel is stepping in. And as
Nathan Brookwood, research fellow with Insight64, noted, it's happened
before with great success.
"Intel is acting as a leader and there is nothing wrong with Intel
picking up the leadership mantle and moving forward. They did it in the
past. Who drove WiFi into the mass market? It was Intel, with Centrino.
Before that, WiFi was a curiosity. It took Intel putting Centrino with
WiFi in every laptop to make it popular. They also did it with USB and
PCI Express," Brookwood said.
Christian Perry, senior analyst for data centers at Technology Business
Research, agreed. "They see an opportunity, it's theirs for the taking
to define that leadership role. They've probably had that opportunity in
the past few years, but why now? We see traction around
software-defined data centers and Intel has plenty of software
capabilities in terms of programming into the chips. They can help to
define software definitions and that underlying layer in the data
center," Perry said.
Not only is Intel taking a role that should fall to hardware vendors, it
is also taking on the job of defining a software-defined data center,
something you'd expect from VMware, Citrix or Microsoft, noted Perry.
"They have a very well-orchestrated definition behind software-defined
anything or everything. I've never heard a more sensible approach in terms
of explaining how everything can work in a software-defined data center.
We hear things from EMC and VMware, but Intel really sees the big
picture and they should see the big picture because their chips are
running the big picture," Perry said.
Quietly building the infrastructure
So it remains to be seen if Intel can do for software-defined networks
what it did for WiFi, but it will certainly try. Without a lot of
hoopla, the company has made some major moves into networking
infrastructure, introducing Open Network Platform reference designs to
help OEMs build and deploy a new generation of networks that it says
will maximize bandwidth, reduce costs and offer the flexibility to
support new services.
In April 2013, Intel introduced three platforms for software-defined
networking and network functions virtualization: the Open Network
Platform Switch Reference Design (ONPS); the Data Plane Development Kit
(DPDK) Accelerated Open vSwitch; and the Intel Open Network Platform
Server Reference Design.
ONPS allows for automated network management and coordination between
the server switching elements and network switches. The DPDK will
improve small packet throughput and workload performance, while the Open
Network Platform Server Reference Design, previously codenamed "Sunrise
Trail," is based on Intel chips and Wind River software. Intel acquired
Wind River in 2009.
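A large share of DPDK's small-packet gains comes from polling NICs in user space and handling packets in bursts, which amortizes fixed per-call overhead. The following toy model illustrates why batching helps; the overhead figures are assumptions, and this models the idea rather than the DPDK C API itself:

```python
# Toy model of why burst (batch) processing lifts small-packet rates:
# a fixed per-call overhead is amortized across each burst of packets.
# The overhead numbers are assumptions; this is not the DPDK C API.

PER_CALL_OVERHEAD_S = 5e-6   # assumed fixed cost of one receive call
PER_PACKET_WORK_S = 1e-7     # assumed cost to process a single packet

def seconds_to_process(total_packets: int, burst_size: int) -> float:
    """Simulated time to receive and process all packets (remainder ignored)."""
    calls = total_packets // burst_size
    return calls * PER_CALL_OVERHEAD_S + total_packets * PER_PACKET_WORK_S

for burst in (1, 32, 256):
    secs = seconds_to_process(1_000_000, burst)
    print(f"burst={burst:4d}: {1_000_000 / secs / 1e6:5.2f} Mpps")
```

With these assumed costs, the simulated rate climbs from roughly 0.2 Mpps at one packet per call to over 8 Mpps at bursts of 256, which is the shape of the improvement Intel is claiming for small packets.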
These are not new efforts. Intel has been working on them for years and
claims customers for this technology include HP, NEC, NTT
Data, Quanta, Super Micro, VMware and Vyatta (a Brocade company).
New edge servers (a recap)
In a departure from traditional Intel operating procedure, the company
plans to offer custom chips to big customers. Facebook and eBay are
already getting custom low-end Xeon E3 processors in a system-on-a-chip
(SoC) design, and more such deals are coming, said Jason Waxman, general
manager of the Cloud Computing Platforms Group at Intel.
At the event, Waxman introduced the Atom C2000, an eight-core processor
known by its codename "Avoton." It will be joined by a version for
network devices with encryption acceleration, codenamed "Rangeley," and
it carries many Xeon features, such as error correction code (ECC)
memory, Intel Virtualization Technology and support for up to 64GB of
memory.
In short, it looks a lot like a Xeon, and that's no accident. "People
want consistency. They want 64 bits and software compatibility and error
correction code even on the low end," said Waxman.
He also announced a future chip based on the Xeon E3 design that will
use a system-on-a-chip (SoC) design instead of the usual discrete
approach for Xeon servers, which puts several chips on the server
board. A Xeon SoC means much lower power consumption and smaller
motherboards, since fewer chips are used. The Xeon SoC will be
introduced next year with the Broadwell generation of processors;
Broadwell is the current Haswell architecture, built on a 22nm process,
shrunk to 14nm.
That the first customers for this chip are Facebook and eBay is no
coincidence. Consumer networks like those use thousands of edge servers
to handle their millions of visitors at any given moment and to serve up
HTML pages generated on the back end. It has slowly dawned on server
vendors that using a powerful processor such as Xeon E5, or even an E3,
for a server that simply handles Web connections and spits out HTML
pages is overkill. An Atom (or an ARM chip) is more than adequate for
that task.
That was the appeal of using Atom and ARM processors in ultra-dense edge
servers such as HP's new Project Moonshot systems and the 10U server
from startup SeaMicro, which AMD acquired in 2012. Both make the same
point: save your Xeons for the database work and let a chip that uses a
fraction of the power pump out the HTML pages.
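That division of labor is easy to sketch: a lightweight front-end process that only accepts connections and emits HTML, deferring heavy work to back-end machines. A minimal, hypothetical example of such an edge process:

```python
# Minimal sketch of an "edge server" role: accept web connections and
# emit HTML, leaving database work to beefier back-end machines.
# The port and page content are hypothetical, for illustration only.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class EdgeHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A real edge tier would fetch data from a back-end service here
        # rather than do any heavy lifting itself.
        body = b"<html><body><h1>Hello from the edge</h1></body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # Connection handling like this is light work; a low-power core suffices.
    ThreadingHTTPServer(("0.0.0.0", 8080), EdgeHandler).serve_forever()
```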
Perry says edge servers are a new market for Intel and the industry at
large; not a huge one, but a new one nonetheless.
"Microservers won't be huge for revenue but Intel will play in that
market. It will be a while before that market grows out. Customers are
still in a wait-and-see mode on microservers," said Perry.
Customers may be waiting but the vendors are not. AMD made its move with
the acquisition of SeaMicro, which makes ultra-dense servers using
Intel's Atom processors. AMD will eventually put its own chips in those
servers. HP has Project Moonshot servers using an ARM-based processor
designed by Calxeda.
So Intel doesn't want to be left out. "Intel wants to be a part of that
market but it's more of a 'We'll provide anything you need for your
workload environments' strategy," said Perry.