The goal of network modernization is to drive greater service innovation and to enable new services in a timely manner. The following are key objectives that CSPs are considering as they transform their networks and design for new business and operational models.
- Fixed Mobile Convergence

Through the 2G and 3G generations, voice and data were carried on separate network architectures: circuit-switched and packet-switched, respectively. As networks evolved towards all-IP, fixed and mobile networking converged. Voice services now largely share the same core networking components and differ mainly in their access networks. The scale, performance, and management of such converged networks are therefore more critical than before.
- Data Intensive Workload Acceleration

The demand for throughput has grown exponentially with smart devices and immersive media services, and networking and compute expenditures continue to grow to keep pace with that traffic. Acceleration technologies such as DPDK, VPP, and hardware offload are at the forefront of reducing OpEx for data-intensive applications.
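
To make the infrastructure prerequisite concrete, the following is a minimal sketch, assuming a Linux host, that reports whether hugepages (a common requirement for DPDK- and VPP-based user-space data planes) have been reserved. The counters read from /proc/meminfo are standard; the reporting logic itself is illustrative rather than part of any DPDK or VPP tooling.

```python
# Minimal sketch: report hugepage provisioning on a Linux host, a common
# prerequisite for DPDK/VPP-style user-space packet processing.
# The fields read here (HugePages_Total, HugePages_Free, Hugepagesize) are
# standard /proc/meminfo counters; the reporting logic is illustrative.

def read_meminfo(path: str = "/proc/meminfo") -> dict[str, int]:
    """Parse /proc/meminfo into a name -> value (kB or count) mapping."""
    values = {}
    with open(path) as f:
        for line in f:
            name, rest = line.split(":", 1)
            values[name.strip()] = int(rest.split()[0])
    return values

def hugepage_report() -> str:
    info = read_meminfo()
    total = info.get("HugePages_Total", 0)
    free = info.get("HugePages_Free", 0)
    size_kb = info.get("Hugepagesize", 0)
    if total == 0:
        return "No hugepages reserved; user-space data planes such as DPDK/VPP cannot start."
    return (f"{total} hugepages of {size_kb} kB reserved, "
            f"{free} free ({total - free} in use).")

if __name__ == "__main__":
    print(hugepage_report())
```
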
- Cloud-Native Environments

Cloud-native approaches are driving a new NFV paradigm and microservices-based VNF architectures, with containers as the lightweight execution environment for building and delivering such microservices. While this fine-grained decomposition suits control plane functions in the next-generation architecture, user plane functions are still expected to run as native VM functions. The cloud infrastructure must therefore be heterogeneous, supporting a hybrid execution environment for both native VM and containerized applications.
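
The following is a minimal sketch of that hybrid requirement, assuming a hypothetical workload descriptor that declares its execution environment; the field names and backend labels are illustrative and not taken from any specific VIM or container platform API.

```python
# Illustrative only: a workload descriptor that declares whether a network
# function runs as a native VM or as a containerized microservice, and a
# dispatcher that routes it to the matching (hypothetical) backend.
from dataclasses import dataclass
from enum import Enum

class ExecutionEnv(Enum):
    VM = "vm"                # e.g. user plane functions kept as native VMs
    CONTAINER = "container"  # e.g. fine-grained control plane microservices

@dataclass
class Workload:
    name: str
    env: ExecutionEnv

def dispatch(workload: Workload) -> str:
    """Return the (hypothetical) platform layer that should host the workload."""
    if workload.env is ExecutionEnv.VM:
        return f"{workload.name} -> hypervisor/VIM layer"
    return f"{workload.name} -> container orchestration layer"

if __name__ == "__main__":
    for wl in (Workload("upf-dataplane", ExecutionEnv.VM),
               Workload("smf-control", ExecutionEnv.CONTAINER)):
        print(dispatch(wl))
```
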
- Distributed Clouds

To meet increasing bandwidth and low-latency requirements, network designs are extending centralized compute models to distributed edge computing models. Some distribution already exists across regional and core data centers; however, further distribution towards the edge will be necessary to limit traffic backhauling and to improve latency. In parallel, VNFs are being disaggregated so that data plane functions run at the edges of the network while control functions remain centralized. Service distribution and elasticity will be a vital part of network design.
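
As a simple illustration of such placement decisions, the sketch below selects the lowest-latency site that meets a workload's latency budget and falls back to a central data center otherwise; the site names and latency figures are invented.

```python
# Illustrative placement helper: prefer the closest edge site that satisfies a
# workload's latency budget, otherwise fall back to a regional/core site.
# Site names and measured latencies below are invented for illustration.

SITES = {
    "edge-a": 4.0,       # round-trip latency to the subscriber, in ms
    "edge-b": 7.5,
    "regional-1": 18.0,
    "core-dc": 42.0,
}

def place(latency_budget_ms: float) -> str:
    """Return the lowest-latency site that meets the budget, else the core DC."""
    candidates = [(lat, site) for site, lat in SITES.items() if lat <= latency_budget_ms]
    if candidates:
        return min(candidates)[1]
    return "core-dc"

if __name__ == "__main__":
    print(place(10.0))   # latency-sensitive data plane function -> an edge site
    print(place(100.0))  # relaxed budget -> still picks the nearest qualifying site
```
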
- Network Slicing

Network slicing allows the cloud infrastructure to isolate compute and networking resources so that the performance and security of workloads can be controlled even though they run on the same shared pool of physical infrastructure. With distributed topologies, network slicing further stretches across multiple cloud infrastructures, spanning the virtual and physical infrastructure of access, edge, and core. Multi-tenancy leverages this resource isolation to deploy and optimize VNFs that meet customer SLAs.
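
The sketch below illustrates the resource-isolation aspect of slicing as per-slice quotas carved out of a shared pool; the attributes and capacities are hypothetical and greatly simplified compared to real slicing models that span access, transport, and core domains.

```python
# Simplified illustration of slice-level resource isolation on a shared pool.
# Attribute names and capacities are hypothetical; real network slicing spans
# access, transport, and core domains and far richer SLA parameters.
from dataclasses import dataclass

@dataclass
class Slice:
    name: str
    tenant: str
    vcpus: int
    memory_gb: int

POOL_VCPUS = 256
POOL_MEMORY_GB = 1024

def admit(slices: list[Slice], new: Slice) -> bool:
    """Admit the new slice only if the shared pool can still honour every quota."""
    used_vcpus = sum(s.vcpus for s in slices) + new.vcpus
    used_mem = sum(s.memory_gb for s in slices) + new.memory_gb
    return used_vcpus <= POOL_VCPUS and used_mem <= POOL_MEMORY_GB

if __name__ == "__main__":
    existing = [Slice("embb-retail", "tenant-a", 96, 384),
                Slice("iot-metering", "tenant-b", 64, 256)]
    print(admit(existing, Slice("low-latency-gaming", "tenant-c", 64, 256)))  # True
    print(admit(existing, Slice("oversized", "tenant-d", 128, 512)))          # False
```
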
- Dynamic Operational Intelligence

Cloud infrastructures will have to become adaptive to the needs of workloads. Right-sizing the environment and dynamic workload optimization, including initial placement, will be part of continuous orchestration automation. Cloud infrastructure environments will require integrated operational intelligence to continuously monitor, report, and act in a timely manner using prescriptive and predictive analytics.
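
As a minimal illustration of predictive analytics in this context, the sketch below fits a linear trend to recent utilization samples and flags an expected capacity breach before it occurs; the samples, threshold, and remediation action are invented.

```python
# Illustrative predictive check: fit a straight line to recent utilization
# samples and raise a (hypothetical) scale-out action if the trend is expected
# to cross the threshold within the look-ahead window. Data are invented.

def linear_fit(samples: list[float]) -> tuple[float, float]:
    """Least-squares slope and intercept over sample index 0..n-1."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples)) / \
            sum((x - mean_x) ** 2 for x in xs)
    return slope, mean_y - slope * mean_x

def predicted_breach(samples: list[float], threshold: float, look_ahead: int) -> bool:
    """Project the fitted trend look_ahead samples forward and test the threshold."""
    slope, intercept = linear_fit(samples)
    projected = slope * (len(samples) - 1 + look_ahead) + intercept
    return projected >= threshold

if __name__ == "__main__":
    cpu_util = [52, 55, 59, 64, 68, 73]          # invented samples, percent
    if predicted_breach(cpu_util, threshold=90, look_ahead=5):
        print("Projected breach: trigger scale-out workflow")  # hypothetical action
    else:
        print("Within capacity for the look-ahead window")
```
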
- Policy-Based Consistency and Management

Model-driven approaches will play a key role in modern cloud infrastructures. Resource models, runtime operational policies, security profiles, declarative policies that move with workloads, onboarding procedures, and so on, will ensure consistency and ease of management.
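
The following is a minimal sketch of such a declarative policy that travels with a workload and is validated identically at every site; the attribute names and checks are hypothetical rather than drawn from any standard information model.

```python
# Illustrative model-driven policy: a declarative description that accompanies
# a workload and is validated identically at every cloud site. Field names and
# rules are hypothetical, not taken from any standard information model.
from dataclasses import dataclass, field

@dataclass
class WorkloadPolicy:
    name: str
    flavour: str                       # desired resource profile
    security_profile: str              # e.g. "baseline" or "hardened"
    allowed_sites: list[str] = field(default_factory=list)

def validate(policy: WorkloadPolicy, site: str, site_profiles: set[str]) -> list[str]:
    """Return the list of policy violations for deploying at the given site."""
    violations = []
    if policy.allowed_sites and site not in policy.allowed_sites:
        violations.append(f"site {site} not permitted by policy")
    if policy.security_profile not in site_profiles:
        violations.append(f"site lacks security profile {policy.security_profile}")
    return violations

if __name__ == "__main__":
    policy = WorkloadPolicy("upf-edge", "high-throughput", "hardened", ["edge-a", "edge-b"])
    print(validate(policy, "edge-a", {"baseline", "hardened"}))   # [] -> consistent deployment
    print(validate(policy, "core-dc", {"baseline"}))              # two violations reported
```
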
- Carrier Grade Platform

The cloud infrastructure environment will have to meet strict requirements for availability, fault tolerance, scale, and performance. Security will be necessary across the transport, data, and workload dimensions. The mobility of workloads across distributed clouds introduces new challenges for verifying their authenticity and integrity.
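
One concrete facet of workload integrity across distributed clouds is verifying an image digest before a moved workload is admitted at the target site. The sketch below uses a plain SHA-256 comparison; a production platform would layer signed images and attestation on top. The file name and digest are placeholders.

```python
# Minimal integrity check for a workload image moved between cloud sites:
# recompute its SHA-256 digest and compare against the expected value recorded
# at the source. Real platforms add signatures and attestation on top of this.
import hashlib

def sha256_of(path: str) -> str:
    """Stream the file in 1 MiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_image(path: str, expected_digest: str) -> bool:
    """Admit the image at the target site only if its digest matches."""
    return sha256_of(path) == expected_digest

if __name__ == "__main__":
    # Placeholder path and digest; substitute a real image and its recorded digest.
    image = "upf-image.qcow2"
    expected = "0" * 64
    try:
        print("integrity ok" if verify_image(image, expected) else "integrity check failed")
    except FileNotFoundError:
        print("image not found; placeholder path used for illustration")
```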