8 Steps to Building a Modern Datacenter
Today’s enterprises are beginning to hit a wall with their old-school datacenters. As noted in a recent article by IO’s Patrick Flynn on why the datacenter is broken, datacenters have become too big and too slow when they really need to be more cost-effective, efficient and responsive. Enterprise IT architects are struggling to keep pace with accelerating business demands for more storage and compute resources, and are unable to take full advantage of new technologies designed to improve infrastructure performance, scale and economics. Building bigger, more expensive silos of proprietary hardware is no longer the answer. What is needed is a complete rethinking of how datacenters are designed and managed.
Here are eight fundamental steps to creating a more efficient, manageable and scalable datacenter that evolves with your organization’s needs:
1. Be Modular
Datacenter infrastructure gets more complex each year as new technologies get added, creating a mishmash of incompatible frameworks and consoles across network, server and storage silos. Switching to a modular design can afford enterprises far more simplicity and flexibility, allowing enterprise IT architects to add or remove building blocks as needed.
Over the years, “modularization” has evolved from 40-foot shipping containers filled with racks of equipment to much smaller, more compact single-rack solutions. For example, Virtual Computing Environment’s (VCE) Vblock is a pre-engineered, fully cabled rack containing servers, network switches and storage devices. But for many companies, those systems are too pricey at $500,000 or more. They also impose fixed, vendor-defined ratios of compute resources to storage capacity, and are built with legacy components from multiple vendors, which makes overall management unnecessarily complex.
True modularization means building blocks can be quickly added to or removed from an infrastructure, giving you resources on demand while avoiding over-provisioning. An increasingly popular approach is to use a single appliance that consolidates the compute and storage tiers. The modules are not only scalable on demand but also interoperable, and they streamline overall datacenter management behind a single console, greatly reducing the headaches for overworked datacenter admins.
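To make the idea concrete, here is a minimal Python sketch of a modular pool that grows and shrinks one building block at a time. The Node and Cluster types are invented for illustration, not any vendor’s actual API:

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One building block: a self-contained appliance with compute and storage."""
    name: str
    cpu_cores: int
    storage_tb: float

class Cluster:
    """A modular pool of identical nodes managed as a single resource."""
    def __init__(self):
        self.nodes = []

    def add_node(self, node: Node) -> None:
        # Scaling out is just adding another identical building block.
        self.nodes.append(node)

    def remove_node(self, name: str) -> None:
        # Scaling in removes a block without disturbing the rest of the pool.
        self.nodes = [n for n in self.nodes if n.name != name]

    @property
    def capacity(self) -> dict:
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = Cluster()
cluster.add_node(Node("node-1", cpu_cores=32, storage_tb=20.0))
cluster.add_node(Node("node-2", cpu_cores=32, storage_tb=20.0))
print(cluster.capacity)  # {'cpu_cores': 64, 'storage_tb': 40.0}
```

Because every block contributes both compute and storage, capacity scales in lockstep with demand instead of in vendor-defined increments.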
2. Converge When Possible
Enterprise IT managers have been moving to converged datacenter infrastructure because it uses fewer dedicated resources and is, therefore, more economical and more efficient. Storage convergence started more than a decade ago with hard disk drives migrating from servers to centralized shared storage arrays connected via high-speed networks. More recently, flash memory has been added to enterprise storage devices to create hybrid storage solutions that can be up to 100 times faster than legacy architectures.
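As a rough illustration of where that speedup comes from, here is a toy Python sketch of a hybrid read path that serves hot blocks from flash and promotes cold blocks on a miss. The HybridStore class and its naive eviction policy are invented for illustration; real arrays use far more sophisticated caching:

```python
class HybridStore:
    """Toy hybrid flash/HDD tier: serve hot blocks from flash,
    fall back to disk and promote whatever gets read."""
    def __init__(self, flash_capacity: int):
        self.flash = {}                     # block_id -> data (fast, small tier)
        self.disk = {}                      # block_id -> data (slow, big tier)
        self.flash_capacity = flash_capacity

    def read(self, block_id: str) -> bytes:
        if block_id in self.flash:          # flash hit: the low-latency path
            return self.flash[block_id]
        data = self.disk[block_id]          # flash miss: the slow path
        self._promote(block_id, data)       # cache it for future reads
        return data

    def _promote(self, block_id: str, data: bytes) -> None:
        if len(self.flash) >= self.flash_capacity:
            self.flash.pop(next(iter(self.flash)))  # naive FIFO-style eviction
        self.flash[block_id] = data

store = HybridStore(flash_capacity=2)
store.disk["blk-1"] = b"cold data"
store.read("blk-1")                 # first read comes off disk, then is promoted
print("blk-1" in store.flash)       # True: later reads hit the flash tier
```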
Rather than having specialized devices for computing and storage, the functions can be combined into one appliance. The datacenter is then built with a single resource tier containing all of the server and storage resources needed to power any application or workload. This improves scalability without the need to spend more on additional hardware or high-speed, dedicated networking equipment.
3. Let Software Drive
The days of expensive, specialized hardware in datacenters are ending. Such systems aren’t flexible or portable, and many are powered by field-programmable gate arrays (FPGAs) or application-specific integrated circuits (ASICs) that don’t support the new software capabilities today’s datacenters and cloud infrastructures demand. Separating policy intelligence and runtime logic from the underlying hardware and abstracting them into a distributed software layer allows them to be automated and centrally controlled. This lets datacenter admins provision new services without adding hardware, which saves money and offers more agility. And distributed applications can improve uptime, global scalability and service continuity during site failures.
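In practice, “letting software drive” often means declaring desired state as data and letting a control loop converge the infrastructure toward it. The following minimal Python sketch assumes hypothetical provision_vm and teardown_vm calls; they are placeholders, not a real platform’s API:

```python
# Desired state is plain data, decoupled from any particular hardware.
desired_state = {
    "web-tier": {"vms": 4, "cpu": 2, "ram_gb": 8},
    "db-tier":  {"vms": 2, "cpu": 8, "ram_gb": 64},
}

def provision_vm(service: str, cpu: int, ram_gb: int) -> None:
    # Placeholder: a real controller would call the virtualization layer here.
    print(f"provisioning {service} VM: {cpu} vCPU, {ram_gb} GB RAM")

def teardown_vm(service: str) -> None:
    # Placeholder for the corresponding teardown call.
    print(f"tearing down one {service} VM")

def reconcile(current_vms: dict, desired: dict) -> None:
    """Converge running VM counts toward the declared desired state."""
    for service, spec in desired.items():
        running = current_vms.get(service, 0)
        for _ in range(max(0, spec["vms"] - running)):   # scale up
            provision_vm(service, cpu=spec["cpu"], ram_gb=spec["ram_gb"])
        for _ in range(max(0, running - spec["vms"])):   # scale down
            teardown_vm(service)

reconcile({"web-tier": 2}, desired_state)  # provisions 2 web VMs and 2 db VMs
```

The point of the design is that changing a service means editing the declared state, not re-cabling or reprogramming hardware.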
4. Embrace Commodity Hardware
Google grew its Web search and other cloud services on the back of low-cost commodity hardware running distributed software. This innovative approach allowed it to scale fast with minimal investment. Traditional enterprises have been caught in an expensive cycle of upgrading datacenter hardware every three to five years, replacing it with newer, more expensive equipment. Today, they can reap the same benefits from commodity hardware that cloud providers do. A distributed software layer abstracts all resources across clusters of commodity nodes, delivering aggregate capacity that surpasses even the most powerful monolithic systems. The value is in the software that powers low-cost hardware.
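Part of what that software layer does is tolerate the failures cheap hardware inevitably has. Here is a toy Python placement function, invented for illustration, that replicates each block across multiple commodity nodes by hashing its ID, so no single node failure loses data:

```python
import hashlib

NODES = ["node-a", "node-b", "node-c", "node-d"]  # interchangeable commodity servers
REPLICATION_FACTOR = 2   # redundancy comes from software, not premium hardware

def replica_nodes(block_id: str) -> list:
    """Choose REPLICATION_FACTOR distinct nodes for a block by hashing its ID.
    Deterministic placement lets any node find data without a central catalog."""
    digest = int(hashlib.md5(block_id.encode()).hexdigest(), 16)
    start = digest % len(NODES)
    return [NODES[(start + i) % len(NODES)] for i in range(REPLICATION_FACTOR)]

# Each block lands on two different nodes, so one node can fail without
# data loss; adding entries to NODES grows aggregate capacity.
print(replica_nodes("block-42"))
```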
5. Empower End Users
Datacenters today need to be more resilient and reliable than ever. They must continue to handle traditional enterprise data needs while also meeting growing demands from applications ranging from virtual desktop infrastructure (VDI) to employees toting handheld devices everywhere. To deal with the “consumerization” of IT, admins are moving to end-user computing models in which desktops, applications and data are centralized within the datacenter and accessed by employees from any device, anywhere. Modernizing the datacenter lets datacenter managers better address the wide range of workload demands consumerization brings, from compute-intensive VDI systems to storage-intensive enterprise data services (like Dropbox) to existing virtualized enterprise applications.
6. Break Down Silos
The increasing complexity and functionality of datacenters have led to the formation of technology silos, each managed by its own team of specialists. For example, one team might handle data management and archiving in the storage silo, while other teams oversee the networking, server and virtualization silos. Using converged appliances means you don’t need separate teams of specialists for each technology. Integrating the technologies into a single scalable unit, or datacenter building block, reduces the need for highly specialized staff.
7. Go Hybrid
Many enterprises want to use the public cloud for some workloads but still keep business-critical applications involving confidential data safe within the confines of the private datacenter. To meet these dual needs, corporations are turning to hybrid cloud environments. Public clouds offered by Amazon Web Services and others provide on-demand provisioning and resource-sharing across multiple tenants. Private clouds can do that too, but they remain under the management of the datacenter team and allow more control over security, performance and service-level agreements. Hybrid environments offer the best of both worlds.
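A simple way to picture a hybrid strategy is as a placement policy. The Python sketch below, with an invented place_workload function, keeps confidential workloads private and bursts elastic ones to the public cloud; real policies would also weigh cost, compliance and latency:

```python
def place_workload(name: str, confidential: bool, bursty: bool) -> str:
    """Toy hybrid-cloud placement: confidential data stays private,
    elastic workloads burst to the public cloud."""
    if confidential:
        return "private-cloud"  # datacenter team keeps control of security and SLAs
    if bursty:
        return "public-cloud"   # on-demand, multi-tenant capacity, paid per use
    return "private-cloud"      # default to capacity the enterprise already owns

print(place_workload("payroll", confidential=True, bursty=False))       # private-cloud
print(place_workload("batch-render", confidential=False, bursty=True))  # public-cloud
```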
8. Focus on Service Continuity
Enterprise disaster recovery strategies tend to be reactive. Consumerization, however, has radically altered user expectations. If there are interruptions or latency problems, users will go around enterprise IT and use unauthorized cloud-based services. To provide near-100 percent availability, admins have to be more proactive and focus on service continuity rather than disaster recovery. That means re-architecting datacenters for high availability, with ample bandwidth and low round-trip times. Enterprises should also re-architect their applications to be distributed. By spreading application architectures across multiple sites, regions or datacenters, they can scale globally, perform well and increase uptime. Facebook, Amazon and Google have seen great success with this model.
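At its simplest, service continuity across sites comes down to routing around failures. Here is a minimal Python sketch, with an invented route_request function and an in-memory health table; a real deployment would rely on health checks and GeoDNS or anycast rather than a dict:

```python
import random

SITE_HEALTHY = {"us-east": True, "us-west": True, "eu-west": True}

def route_request(preferred: str) -> str:
    """Send the request to the preferred site, failing over to any healthy one."""
    if SITE_HEALTHY.get(preferred):
        return preferred
    healthy = [site for site, ok in SITE_HEALTHY.items() if ok]
    if not healthy:
        raise RuntimeError("no healthy site available")
    return random.choice(healthy)   # a real router would pick by latency/geography

SITE_HEALTHY["us-east"] = False     # simulate losing a site
print(route_request("us-east"))     # the request continues at another region
```

Because the application runs in every region, a site failure degrades the preferred path rather than taking the service down.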
Enterprises are learning that to remain competitive they have to adapt to a rapidly changing business environment. They need to be able to increase compute and storage capacity quickly and add new capabilities, all without spending a lot of extra money. With datacenters, it’s no longer about building out; it’s about building smart.
Greg Smith is Senior Director of Product Marketing at Nutanix.
Source: WIRED Magazine