Traditionally, enterprise data centers have focused on data storage and disaster recovery, and they have not always been able to meet the demand for real-time, multi-user data access. Data centers must adapt to the demands of ever-growing data traffic and ever-changing technologies such as blockchain, while also managing their environmental impact.
The shift to more data-centric business models and the rise of blockchain technology have moved data centers from asynchronous storage and retrieval toward real-time analytics and on-demand data processing. Colocation data centers offer a strong combination of scalability, availability, and environmental controls for blockchain operations, and many data centers are now adopting architectures capable of meeting the data processing requirements of blockchain operations and other data-centric enterprises.
By adopting new architectures such as leaf-spine, moving to virtualized servers, and replacing old, inefficient hardware, facility owners can ensure their data centers meet business needs: they can scale operations more easily, reduce their carbon footprint, and maximize efficiency.
Traditional data center infrastructure and design challenges with blockchain
Private networks hosted in a data center have traditionally used an infrastructure and architecture optimized for north-south communication between clients and servers through core, aggregation (distribution), and access switches. Virtualization and other complex server applications increase server-to-server, or east-west, communication, which this design handles poorly. The result is bandwidth bottlenecks and unpredictable latency on oversubscribed switches.
Decentralized networks, such as blockchain networks and their internet connections, rely on uninterrupted connectivity wherever their nodes run, much like the data stores used by big data applications. These resources must always remain highly available, and the infrastructure they rely on must be efficient enough to absorb routine traffic spikes.
Data centers handling blockchain and other data-heavy processes require infrastructure built for concurrent, bursty, high-bandwidth traffic. The core, aggregation, and access layers of the traditional data center cannot handle these workloads effectively. They rely on spanning tree protocols to prevent traffic loops: the protocol detects a loop and blocks the connections or links that form it. East-west traffic is then forced up and down the traditional north-south path, and data bottlenecks at the remaining active links, causing downtime or latency.
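The pruning behavior described above can be sketched in a few lines. This is a toy illustration, not a real spanning tree protocol implementation: the topology and switch names are hypothetical, and a breadth-first search stands in for STP's root-bridge election to show how redundant links end up blocked.

```python
from collections import deque

# Hypothetical three-tier topology: two access switches, two
# aggregation switches, one core switch, with redundant uplinks.
links = {
    ("access1", "agg1"), ("access1", "agg2"),
    ("access2", "agg1"), ("access2", "agg2"),
    ("agg1", "core"), ("agg2", "core"),
}

def spanning_tree(links, root):
    """Keep one loop-free path to each switch (BFS from the root),
    mimicking how STP leaves a single active tree."""
    adj = {}
    for a, b in links:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    active, seen, queue = set(), {root}, deque([root])
    while queue:
        node = queue.popleft()
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                active.add(frozenset((node, nbr)))
                queue.append(nbr)
    return active

active = spanning_tree(links, "core")
blocked = len(links) - len(active)
print(f"{len(links)} physical links, {len(active)} active, {blocked} blocked")
# -> 6 physical links, 4 active, 2 blocked
```

A third of the purchased links sit idle as backups, which is exactly the capacity leaf-spine designs reclaim.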
Changing traffic patterns in the data center
The leaf-spine architecture transforms a traditional data center into one capable of handling omnidirectional traffic. It overcomes the limitations of the traditional traffic model by taking a horizontal approach to network design: a tier of leaf switches connects to the servers, and every leaf is meshed to a tier of spine switches that carry traffic between them.
In the leaf-spine architecture, each leaf switch aggregates traffic from its attached servers and can forward it across any spine switch. This configuration moves traffic more efficiently between servers because devices no longer need to wait for a single uplink or open server connection to become available.
In this horizontal, east-west layout, hosts are equidistant: any host can reach any other through at most one spine switch, and hosts on the same leaf switch communicate directly through it. Traffic moves predictably through the leaf-spine fabric, a key requirement for high-performance computing clusters, multi-tier web applications, and any other service or application dependent on real-time activity and low latency. Network monitoring tools can help optimize traffic patterns across the fabric and further reduce latency.
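The equidistance property can be stated as a tiny function. This is a minimal sketch under the assumption of a standard two-tier fabric (leaf identifiers are illustrative): because every leaf connects to every spine, any cross-leaf path is always exactly leaf, spine, leaf.

```python
def hops(src_leaf: int, dst_leaf: int) -> int:
    """Switch hops between two hosts in a two-tier leaf-spine fabric."""
    if src_leaf == dst_leaf:
        return 1            # same leaf: one switch hop
    return 3                # leaf -> any spine -> leaf

# Every cross-leaf pair sees the same path length, which is what
# makes latency predictable for real-time workloads.
assert all(hops(a, b) == 3 for a in range(4) for b in range(4) if a != b)
print(hops(0, 0), hops(0, 3))  # -> 1 3
```

Contrast this with a three-tier design, where hop count depends on whether traffic must climb through aggregation or core layers.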
Additional technologies, such as Shortest Path Bridging (SPB) and Transparent Interconnection of Lots of Links (TRILL), allow all links between leaf and spine to pass traffic rather than sitting blocked as backups. The network can therefore scale as traffic increases, and because every link carries traffic, facility owners can collect accurate utilization statistics for cost estimation.
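The load spreading that keeps all those links busy typically works by hashing flow headers, as in equal-cost multipath routing. The sketch below is illustrative, not a specific SPB, TRILL, or vendor algorithm: the spine names, flow fields, and CRC32 hash are all assumptions chosen to show the idea that one flow sticks to one spine while many flows spread across all of them.

```python
import zlib

SPINES = ["spine1", "spine2", "spine3", "spine4"]  # hypothetical fabric

def pick_spine(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Hash flow identifiers so packets of a single flow always take
    the same spine (preserving packet order), while distinct flows
    spread across every available link."""
    key = f"{src_ip}:{dst_ip}:{src_port}:{dst_port}".encode()
    return SPINES[zlib.crc32(key) % len(SPINES)]

# 100 client flows to one server, differing only by source port:
flows = [("10.0.0.1", "10.0.1.9", p, 443) for p in range(49152, 49252)]
used = {pick_spine(*f) for f in flows}
print(f"100 flows spread across {len(used)} of {len(SPINES)} spines")
```

The key design property is determinism: rehashing the same flow always yields the same spine, so no per-flow state is needed on the switches.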
Reduce technology spend
Traditional data center networks use many connection options between switching layers, which can increase costs. Newer architectures reduce costs by increasing the number of usable connections each switch handles: because new switches can carry and accept traffic from any direction, organizations no longer waste connections and bandwidth on blocked links, and the data center requires less hardware.
Facility owners can realize even more savings with the leaf-spine architecture through reduced deployment and maintenance costs. Leaf-spine's scalability and efficiency mean fewer devices to deploy and maintain, so this architecture requires fewer resources.
Reduced carbon footprint and energy requirements
Modern businesses have hardware that works harder, longer, and hotter than before. As a result, traditional data centers require more cooling and consume more energy overall.
Compared with traditional architectures, the new architectures run cooler because they move data faster, optimize traffic patterns, and use hardware that generates less heat. Even virtualized servers, which individually run hotter than non-virtualized servers, generate less heat in aggregate because fewer of them can run the same workload.
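The consolidation arithmetic behind that claim is simple. All wattages and ratios below are assumed figures for illustration only: a virtualized host draws more power than a single legacy server, but cutting the server count still lowers the total draw.

```python
# Assumed legacy fleet: 40 servers at 300 W each.
legacy_watts, legacy_count = 300, 40
# Assumed virtualized replacement: hotter 450 W hosts, each
# consolidating 5 legacy workloads.
virtual_watts, consolidation = 450, 5

virtual_count = legacy_count // consolidation      # 8 hosts
legacy_total = legacy_watts * legacy_count         # 12000 W
virtual_total = virtual_watts * virtual_count      # 3600 W
print(f"{legacy_total} W -> {virtual_total} W "
      f"({100 * (1 - virtual_total / legacy_total):.0f}% less)")
```

Less electrical draw also means less waste heat for the cooling plant to remove, so the savings compound.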
Leaf-spine designs are also more environmentally friendly because they reduce the number of hops in the network: they require fewer aggregation switches and fewer redundant paths between access and interconnect switches. Latency and power requirements decrease with leaf-spine, and it requires less cooling overall.