Juniper Networks QFX10000-30C - Expansion module - 100 Gigabit QSFP28 / 40 Gigabit QSFP+ x 30
- Industry-leading line-rate 100 GbE port density with up to 480 100 GbE ports in a single chassis
- Up to 96 Tbps Layer 2 and Layer 3 performance, scalable to over 200 Tbps in the future
- Unparalleled investment protection with high density 10 GbE, 40 GbE, and 100 GbE
- System longevity with midplane-less orthogonal interconnect architecture
- High logical Layer 2 / Layer 3 scale
- Deep buffers with up to 50 ms of delay-bandwidth buffering
- No head-of-line blocking with Virtual Output Queue (VOQ)-based architecture
- Flexible network architectures including Layer 3 fabric, Junos Fusion, and Juniper MC-LAG for Layer 2 and Layer 3 networks
- Scalable, plug-and-play Ethernet fabric with Junos Fusion
- Juniper Virtualized Open Network Operating System framework for programmability through APIs
- High availability with Topology-Independent In-Service Software Upgrade (TISSU)
- Next-generation analytics with Cloud Analytics Engine
- Advanced Junos OS features such as BGP add-path, VXLAN routing, MPLS, and FCoE
- Rich automation capabilities with operations and event scripts, Python, Chef, and Puppet
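The 50 ms buffer figure in the list above is a delay-bandwidth product: buffer size equals port rate multiplied by delay. A back-of-the-envelope check (illustrative arithmetic only, not from the datasheet):

```python
def delay_bandwidth_buffer(rate_gbps: float, delay_ms: float) -> float:
    """Buffer (in megabytes) needed to hold `delay_ms` of traffic at `rate_gbps`."""
    bits = rate_gbps * 1e9 * (delay_ms / 1e3)  # rate x delay, in bits
    return bits / 8 / 1e6                      # bits -> megabytes

# A single 100 GbE port buffered for 50 ms:
print(delay_bandwidth_buffer(100, 50))  # 625.0 (MB)
```

So one fully loaded 100 GbE port alone can consume 625 MB of packet buffer at that depth, which is why deep-buffer spine switches matter for incast-heavy workloads.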
The QFX10000 line of modular data center spine and core Ethernet switches delivers industry-leading scale, flexibility, and openness, with a design that enables the seamless transition from 10 GbE and 40 GbE interface speeds to 100 GbE and beyond. These high-performance, forward-looking switches are designed to help cloud and data center operators extract maximum value and intelligence from their network infrastructure well into the future.
High availability
QFX10000 modular spine and core switches deliver a number of high availability features that ensure uninterrupted, carrier-class performance. Each QFX10000 chassis includes an extra slot for a redundant Routing Engine (RE) module that runs as a hot-standby backup. If the master RE fails, the integrated Layer 2 and Layer 3 graceful Routing Engine switchover (GRES) feature of Junos OS, working in conjunction with nonstop active routing (NSR) and nonstop bridging (NSB), ensures a seamless transfer of control to the backup, maintaining uninterrupted access to applications, services, and IP communications. The QFX10000 modular switches also support Topology-Independent In-Service Software Upgrade (TISSU), which lets them move to a newer software version while keeping data plane traffic flowing.
Virtual Output Queue (VOQ)
The QFX10000 switches support a virtual output queue (VOQ)-based architecture designed for very large deployments. A VOQ is a virtual queue, maintained on the ingress Packet Forwarding Engine (PFE), that represents a specific egress port's output queue. With a VOQ architecture, packets are queued and, during congestion, dropped at ingress, so a congested egress port never causes head-of-line blocking for traffic bound elsewhere.
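The benefit can be illustrated with a toy model (hypothetical code, not Juniper's implementation): because the ingress keeps a separate queue per egress port, a stalled egress port cannot block packets destined for an idle one.

```python
from collections import defaultdict, deque

class IngressPFE:
    """Toy virtual-output-queue model: one queue per egress port at ingress."""
    def __init__(self):
        self.voq = defaultdict(deque)  # egress port name -> queue of packets

    def enqueue(self, egress_port: str, packet: str) -> None:
        self.voq[egress_port].append(packet)

    def dequeue(self, egress_port: str):
        """Called when `egress_port` grants credit; returns next packet or None."""
        q = self.voq[egress_port]
        return q.popleft() if q else None

pfe = IngressPFE()
pfe.enqueue("et-0/0/1", "pkt-A")  # destined for a congested port
pfe.enqueue("et-0/0/2", "pkt-B")  # destined for an idle port

# et-0/0/1 is congested and grants no credit, yet pkt-B still flows:
print(pfe.dequeue("et-0/0/2"))  # pkt-B
```

With a single shared FIFO at ingress, pkt-B would have been stuck behind pkt-A; that is exactly the head-of-line blocking the VOQ design eliminates.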
Automation
The QFX10000 switches support a number of features for network automation and plug-and-play operations. Features include operations and event scripts, automatic rollback, and Python scripting. The switches also support integration with VMware NSX, OpenContrail, Puppet, OpenStack, and CloudStack.
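Off-box automation against Junos typically works on the structured XML that the switch's XML API returns rather than on screen-scraped CLI text. A minimal sketch using only the Python standard library (the sample reply below is abridged and the interface states are hypothetical):

```python
import xml.etree.ElementTree as ET

# Abridged, illustrative <get-interface-information> style reply; in practice
# this XML would be retrieved from the switch over NETCONF or the XML API.
RPC_REPLY = """
<interface-information>
  <physical-interface>
    <name>et-0/0/0</name>
    <oper-status>up</oper-status>
  </physical-interface>
  <physical-interface>
    <name>et-0/0/1</name>
    <oper-status>down</oper-status>
  </physical-interface>
</interface-information>
"""

def down_interfaces(xml_text: str) -> list[str]:
    """Return names of physical interfaces whose oper-status is not 'up'."""
    root = ET.fromstring(xml_text)
    return [pi.findtext("name")
            for pi in root.iter("physical-interface")
            if pi.findtext("oper-status") != "up"]

print(down_interfaces(RPC_REPLY))  # ['et-0/0/1']
```

The same parsing pattern applies whether the script runs off-box or as an on-box op script reacting to an event policy.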
MPLS
QFX10000 switches support a broad set of MPLS features, including Layer 3 VPN (L3VPN), IPv6 provider edge router (6PE), RSVP traffic engineering (RSVP-TE), and LDP, enabling standards-based network segmentation and virtualization.
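For context on what the forwarding plane actually pushes, an MPLS label stack entry is a fixed 32-bit word (RFC 3032): a 20-bit label, 3 traffic-class bits, a bottom-of-stack (S) bit, and an 8-bit TTL. A minimal encoder sketch of that layout, independent of any Juniper implementation:

```python
import struct

def mpls_label_entry(label: int, tc: int = 0, bos: bool = True, ttl: int = 64) -> bytes:
    """Pack one 32-bit MPLS label stack entry (RFC 3032 layout)."""
    assert 0 <= label < 2**20 and 0 <= tc < 8 and 0 <= ttl < 256
    word = (label << 12) | (tc << 9) | (int(bos) << 8) | ttl
    return struct.pack("!I", word)  # network byte order

# Label 100, bottom of stack, TTL 64:
print(mpls_label_entry(100).hex())  # '00064140'
```

Stacking multiple entries (S bit set only on the last) is how constructs like L3VPN carry an inner VPN label beneath the transport label.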
VXLAN
The QFX10000 supports Layer 2 and Layer 3 gateway services that enable VXLAN-to-VLAN connectivity at any tier of the data center network, from server access to the edge. The QFX10000 integrates with NSX through data plane (VXLAN) and control and management plane (OVSDB) protocols to centrally automate and orchestrate the data center network.
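The encapsulation the gateway performs is defined by RFC 7348: an 8-byte VXLAN header carrying an "I" flag and a 24-bit VXLAN Network Identifier (VNI), prepended to the original Ethernet frame inside a UDP/IP outer packet. A sketch of just the header encoding (illustrating the wire format, not the switch's internals):

```python
import struct

def vxlan_header(vni: int) -> bytes:
    """Pack the 8-byte VXLAN header (RFC 7348): I flag set, 24-bit VNI."""
    assert 0 <= vni < 2**24
    # Word 1: flags byte 0x08 (I bit) + 24 reserved bits.
    # Word 2: 24-bit VNI + 8 reserved bits.
    return struct.pack("!II", 0x08000000, vni << 8)

# VNI 5000 -> flags byte 0x08, VNI in bytes 4-6:
print(vxlan_header(5000).hex())  # '0800000000138800'
```

The VNI plays the role a VLAN ID plays on the access side, which is why the Layer 2 gateway function amounts to a VXLAN-to-VLAN mapping.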
FCoE
As Fibre Channel over Ethernet (FCoE) transit switches, the QFX10000 line provides an IEEE data center bridging (DCB) converged network between FCoE-enabled servers and an FCoE-enabled Fibre Channel storage area network (SAN). The QFX10000 offers a full-featured DCB implementation with strong monitoring capabilities, helping SAN and LAN administration teams maintain clear management separation. FCoE link aggregation group (LAG) active/active support enables resilient (dual-rail) FCoE connectivity.