Before beginning the test, the specifications of all components of the software-defined power system infrastructure must be individually evaluated to ensure that they can collectively meet the intended purposes.

These specifications are discussed in the Deployment Specification section. All external requirements of these systems must be planned appropriately.

Table 1. External System Components

| Component | Attribute |
| --- | --- |
| Hardware/Server | Meets specific standards or certifications (for example, IEC 61850-3, IEEE 1613). |
| | Includes at least the required number of on-board interfaces (NICs, SFPs, serial ports). |
| | Provides at least the minimum required levels of compute (CPU, memory, storage, GPU). |
| | Incorporates a suitable power supply. |
| | Produces noise at or below acceptable levels for the surrounding environment. |
| | Offers industry-standard or common rack mounting options (2-post versus 4-post, number of rack units). |
| External Power Supply | Supplies the voltage levels or ranges accepted by all hardware. |
| | Accommodates the peak power draw (summed across all equipment). |
| Signal Generation | Available from actual or simulated devices (for example, merging units or other digital converters, software, or test sets) using standard protocols; conventional secondary signal injection (such as currents and potentials) might also be required. |
| | Serial signals might be necessary (standard protocol types, for example, C37.94), but these represent point-to-point data links. |
| External Network | The required number of physical switches or routers (station bus versus process bus) is available, including interface types that accommodate the media (copper or fiber), modes (single-mode versus multi-mode), and data transfer rates of connected devices. |
| | Physical switches or server NICs accommodate PRP as required (dually or singly attached nodes). |
| | Handles the high bandwidth required by specific protocols (for example, sampled values, vSAN/vMotion between servers). |
| | Devices provide means for logical segmentation, prioritization, and traffic shaping. |
| | Each network component supports the required time synchronization protocols (such as the PTP power profile). |
| GNSS Timing | Satellite clocks produce the required network time synchronization protocols and participate in PRP (if required). |
| Keyboard-Video-Mouse | Provides a local means to interact with the system for initialization (if required). |

Initial verification of the architecture can begin only after the required devices and all interconnecting wiring and cables are installed, and the workloads under test are deployed and at least minimally configured for local system interaction.

Network components must also be configured to allow the required communications paths. Review block and network diagrams (VVS Reference Architecture) as required.

  1. Verify that each component is powered on and has reached steady state (after booting).

  2. Review each component interface for initial state health checks and address any alarms.

    1. External system components are listed in External System Components. Internal system components include applications and VMware software products, and for the purposes of vPAC Ready Infrastructure, those products are vSphere (ESXi and vCenter Server Appliance) and vSAN. For details, see System Monitoring.

  3. Ensure that each networked component can communicate, as appropriate within the system.

    1. A ping test is a good place to start, though some end device types can block the request. Pinging can be done from a computer or VM with access to the local area network (such as a management VM). A scripted sweep combining the ping and trace route checks is sketched after this list.

    2. A trace route test can help verify the path that data takes between two components.

    3. A network analyzer tool can also help verify that the appropriate data is present within the environment. Typical forms include hardware (for example, Omicron Daneo) that can be attached to the physical network, or software (for example, Wireshark) that can be installed as a virtual machine or container and attached to the virtual network.
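As a starting point, the ping and trace route checks can be scripted from a management VM. The following Python sketch is illustrative only: the component names and 192.0.2.x addresses are placeholders for your own inventory, and it assumes a Linux host where the ping and traceroute utilities are installed.

```python
#!/usr/bin/env python3
"""Hypothetical connectivity sweep for initial system verification.

Assumes: a Linux management VM with `ping` and `traceroute` installed,
and an inventory of component addresses (placeholders below).
"""
import subprocess

# Placeholder inventory -- replace with the addresses from your network diagram.
COMPONENTS = {
    "ptp-clock":    "192.0.2.10",
    "process-sw":   "192.0.2.20",
    "merging-unit": "192.0.2.30",
    "esxi-host":    "192.0.2.40",
}

def ping(host: str, count: int = 3) -> bool:
    """Return True if the host answers ICMP echo; some devices block this."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", "2", host],
        capture_output=True, text=True,
    )
    return result.returncode == 0

def trace(host: str) -> str:
    """Return the traceroute output to confirm the path data takes."""
    result = subprocess.run(
        ["traceroute", "-w", "2", host], capture_output=True, text=True
    )
    return result.stdout

if __name__ == "__main__":
    for name, addr in COMPONENTS.items():
        ok = ping(addr)
        print(f"{name:>12} ({addr}): {'reachable' if ok else 'NO ICMP REPLY'}")
        if not ok:
            # A failed ping may only mean ICMP is blocked; inspect the path.
            print(trace(addr))
```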

The generation of test signals is the simplest method for evaluating much of the infrastructure. Depending on the applications employed, these signals (I/O) include digitized equipment statuses, controls, and telemetry. The IEC 61850 protocols for SV, GOOSE, and MMS can satisfy many of the requirements of the end devices.

However, other common utility protocols (for example, DNP, Modbus) might also be necessary to support the surrounding legacy equipment, from the substation to the control center.
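Before deeper protocol testing, it can help to confirm that IEC 61850 traffic is actually visible on the network, since GOOSE and SV are published as Layer 2 frames with well-known EtherTypes. A minimal sketch, assuming the scapy library, root privileges, and a placeholder interface name:

```python
#!/usr/bin/env python3
"""Sketch: confirm GOOSE and SV frames are visible on a capture port.

Assumptions (not from the source document): scapy is installed
(`pip install scapy`), the script runs with root privileges, and
`eth1` stands in for the process bus interface. GOOSE frames use
EtherType 0x88B8 and Sampled Values use 0x88BA (IEC 61850).
"""
from scapy.all import sniff

IFACE = "eth1"  # placeholder: process bus capture port
LABELS = {0x88B8: "GOOSE", 0x88BA: "SV"}

def classify(pkt):
    # Print a one-line summary per captured frame.
    print(f"{LABELS.get(pkt.type, hex(pkt.type))}: "
          f"src={pkt.src} dst={pkt.dst} len={len(pkt)}")

# BPF filter keeps only GOOSE and SV EtherTypes; stop after 20 frames.
# VLAN-tagged traffic may need "vlan and (ether proto 0x88b8 or ...)".
sniff(iface=IFACE,
      filter="ether proto 0x88b8 or ether proto 0x88ba",
      prn=classify, count=20, store=False)
```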

There are specialty tools that can be leveraged to generate this traffic reliably. A brief listing of commercially available tools follows:

Table 2. Test Signal Generation

| Manufacturer | Products | Capabilities |
| --- | --- | --- |
| Doble | Hardware/Software | Primary and secondary signal injection, network analysis, IEC 61850 protocol signal generation. |
| Omicron Energy | Hardware/Software | Primary and secondary signal injection, network analysis, IEC 61850 protocol signal generation. |
| RTDS Technologies | Hardware/Software | Hardware-based simulation of several protocols (Modbus, IEC 61850 and 60870-5, PMU, DNP3). |
| Triangle Microworks | Hardware/Software | Simulation of several different protocols (Modbus, IEC 61850 and 60870-5, DNP3, ICCP/TASE.2). |
| ASE-Systems | Hardware/Software | Simulation of dozens of utility-related protocols. |
| CDOAN | Software | Simulation of DNP3 and IEC 60870-5 traffic. |
| INFOTECH | Software | Simulation of IEC 61850, 60870-5, and DNP3 traffic. |

The following diagram outlines a high-level vPAC architecture.
Figure 1. High Level Networking


This diagram can be used as an example, dissecting the physical and virtual networks to determine what configurations and checks are required to initialize the system. Begin with the PTP clock in the bottom left of the diagram: this device must first be interrogated to ensure that it has the appropriate satellite signal locks and is publishing PTP as a grandmaster with sufficient accuracy for the associated workloads (for example, within ±1 µs for vPR).

Ideally, each component in line with a workload requiring PTP synchronization is participating as a type of PTP clock (see the Glossary of Terms for definitions).

Therefore, the physical switch, PTP NIC, and vPR application must also be checked to ensure they are receiving an accurate PTP signal. Repeat this check for any similar, parallel paths: the merging units (or simulation software), the ESXi PTP NIC, and any additional workloads it supports.
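Where a component in this chain is a Linux host running linuxptp, the reported servo offsets can be checked against the workload tolerance. The sketch below is an assumption-laden illustration: it parses ptp4l log lines of the form `master offset <ns> ...` (for example, piped from journalctl); proprietary clocks, switches, and NICs expose equivalent statistics through their own management interfaces.

```python
#!/usr/bin/env python3
"""Hypothetical check of PTP servo offsets against a workload tolerance.

Assumes: a Linux host running linuxptp, with ptp4l logging lines such as
`ptp4l[...]: master offset -42 s2 freq ...` (offsets are in nanoseconds).
The 1 microsecond threshold mirrors the vPR tolerance discussed above.
"""
import re
import sys

THRESHOLD_NS = 1_000  # +/- 1 microsecond, per the vPR requirement
OFFSET_RE = re.compile(r"master offset\s+(-?\d+)")

worst = 0
violations = 0
for line in sys.stdin:  # e.g. journalctl -u ptp4l | python3 check_ptp_offset.py
    m = OFFSET_RE.search(line)
    if not m:
        continue
    offset = int(m.group(1))
    worst = max(worst, abs(offset))
    if abs(offset) > THRESHOLD_NS:
        violations += 1
        print(f"OFFSET OUT OF TOLERANCE: {offset} ns -> {line.strip()}")

print(f"worst observed offset: {worst} ns, violations: {violations}")
```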

Continuing with the Merging Units (MUs) in the bottom center of the diagram, the input signals must be verified. These may be secondaries from grid equipment (or otherwise simulated), and can be verified by interrogating the MUs or by directly measuring with a physical multimeter. The output of each merging unit must route through the physical switch to the vPR (GOOSE or SV), or additionally through the vSwitch to another type of workload.

For the vPR, verification of signal receipt can be as simple as a metering check, or the correct interpretation of current or voltage sampled values. The speed, accuracy, redundancy mechanisms, and scaling capabilities of these signals are verified in a subsequent testing section. You must check each application to ensure that a sample of data is correctly received from the signal-translating devices, through the virtual infrastructure, to every endpoint.
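One rough, non-authoritative spot check of SV receipt at any capture point is to compare the observed frame rate against the nominal stream rate. The sketch below assumes scapy, root privileges, a placeholder interface name, and a 9-2LE-style stream (80 samples per cycle at 50 Hz, roughly 4000 frames per second per stream); adjust the expected rate for 60 Hz systems, other profiles, or multiple concurrent streams.

```python
#!/usr/bin/env python3
"""Rough sanity check that SV traffic arrives at the expected rate.

Assumes: scapy installed, root privileges, a placeholder interface, and
an IEC 61850-9-2LE-style stream (80 samples/cycle at 50 Hz). Multiple
streams on the same port multiply the observed rate accordingly.
"""
from scapy.all import sniff

IFACE = "eth1"        # placeholder capture port
EXPECTED_FPS = 4000   # 80 samples/cycle x 50 Hz (9-2LE assumption)
TOLERANCE = 0.01      # accept +/- 1% over the capture window

# Capture for one second and count only SV EtherType frames.
frames = sniff(iface=IFACE, filter="ether proto 0x88ba",
               timeout=1, store=True)
rate = len(frames)
ok = abs(rate - EXPECTED_FPS) <= EXPECTED_FPS * TOLERANCE
print(f"SV frames in 1 s: {rate} (expected ~{EXPECTED_FPS}): "
      f"{'OK' if ok else 'CHECK STREAM'}")
```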

For each physical and virtual component management interface, a secured connection from an external device must be verified. For High Level Networking, these components include the clock, physical switches, MUs, server LOM, ESXi hypervisor (likely through the vCSA), NICs, vPRs, and all remaining applications.
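These secured management connections can be spot-checked with a simple TLS handshake probe. In the sketch below, the endpoint names and addresses are placeholders, and certificate verification is deliberately relaxed because many embedded devices ship with self-signed certificates; a production check should verify or pin certificates instead.

```python
#!/usr/bin/env python3
"""Sketch: verify a secured (TLS) connection to each management interface.

Assumes: management services listen on HTTPS (443); the names and
addresses below are placeholders for the clock, switches, MUs, server
LOM, vCSA/ESXi, and application UIs in your deployment.
"""
import socket
import ssl

MGMT_ENDPOINTS = {
    "esxi-host": ("192.0.2.40", 443),
    "vcsa":      ("192.0.2.41", 443),
    "clock":     ("192.0.2.10", 443),
}

context = ssl.create_default_context()
# Reachability check only: verification is disabled here because many
# embedded devices use self-signed certificates. Tighten in production.
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE

for name, (host, port) in MGMT_ENDPOINTS.items():
    try:
        with socket.create_connection((host, port), timeout=3) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                print(f"{name}: TLS {tls.version()} established")
    except OSError as exc:
        print(f"{name}: FAILED ({exc})")
```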

If server-to-server connections exist for mobility and storage (such as vMotion, vSAN), then the VMware management software inherently verifies communications during setup (and continuously monitors them through a communications heartbeat).

Ultimately, there are three key criteria to test: network latency, jitter, and packet loss.
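For a first-pass measurement between any two components, a repeated ICMP probe can approximate all three criteria; a dedicated test set or hardware network analyzer provides far better resolution for protection-class traffic. The target address below is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: quantify the three key criteria with a simple ICMP probe.

Assumes: a Linux host with `ping` available. Jitter is approximated as
the mean absolute difference between consecutive round-trip times.
"""
import re
import statistics
import subprocess

TARGET = "192.0.2.30"  # placeholder: a merging unit or other endpoint
COUNT = 100

out = subprocess.run(["ping", "-c", str(COUNT), "-i", "0.2", TARGET],
                     capture_output=True, text=True).stdout
# Collect per-reply round-trip times from the ping output.
rtts = [float(m) for m in re.findall(r"time=([\d.]+)", out)]

loss_pct = 100.0 * (COUNT - len(rtts)) / COUNT
latency = statistics.mean(rtts) if rtts else float("nan")
jitter = (statistics.mean(abs(a - b) for a, b in zip(rtts, rtts[1:]))
          if len(rtts) > 1 else 0.0)

print(f"latency={latency:.3f} ms  jitter={jitter:.3f} ms  loss={loss_pct:.1f}%")
```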