This topic describes the architecture of Application Live View and its components. The system can be deployed on a Kubernetes stack and can monitor containerized applications on hosted cloud platforms or on premises.

Architecture Diagram

Component Overview

Application Live View includes the following components as shown in the diagram above:

  • Application Live View Server

    Application Live View Server is the central server component that maintains the list of registered applications. It is responsible for proxying requests to fetch the actuator information related to an application.

  • Application Live View Connector

    Application Live View Connector is the component responsible for discovering application pods running on the Kubernetes cluster and registering those instances with the Application Live View Server so they can be observed. The Application Live View Connector is also responsible for proxying actuator queries to the application pods running in the Kubernetes cluster.

    Application Live View Connector can be deployed in two modes (a client-go sketch of the two scopes follows this component list):

    • Cluster access: Application Live View Connector can be deployed as a Kubernetes DaemonSet to discover applications across all namespaces on each worker node of a Kubernetes cluster. This is the default mode of Application Live View Connector.

    • Namespace scoped: Application Live View Connector can be deployed as a Kubernetes Deployment to discover applications running within a single namespace across the worker nodes of a Kubernetes cluster.

  • Sidecar

    The Sidecar runs alongside the application and is responsible for registering the application with the Application Live View back end. Each application has an associated sidecar, which proxies the application's actuator endpoint data to the Application Live View Server.

  • Application Live View CRD Controller

    Application Live View Custom Resource Definition (CRD) Controller defines custom resources that return the list of application instances registered with Application Live View, together with the metric metadata (CPU, memory) associated with each instance. The Kubernetes API server stores these custom resources in etcd.

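The difference between the two connector modes is the scope of the pod watch. The following is a minimal client-go sketch of the two scopes, assuming a hypothetical 30-second resync period and the Apps namespace shown in the architecture diagram; it illustrates the pattern and is not the connector's actual implementation.

    package main

    import (
        "time"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Cluster access mode: watch pods in every namespace.
        clusterWide := informers.NewSharedInformerFactory(clientset, 30*time.Second)

        // Namespace-scoped mode: restrict the watch to a single namespace
        // (the namespace name here is illustrative).
        scoped := informers.NewSharedInformerFactoryWithOptions(
            clientset, 30*time.Second,
            informers.WithNamespace("apps"),
        )

        _ = clusterWide.Core().V1().Pods().Informer()
        _ = scoped.Core().V1().Pods().Informer()
    }
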
Design flow

As illustrated in the architecture diagram, the App Live View namespace contains all the Application Live View components, and the Apps namespace contains all the applications to be registered with the Application Live View Server.

The applications run by the user are registered with the Application Live View Server through either the Application Live View Connector or the Sidecar.

Application Live View Connector, the leaner model of the two, uses specific labels to discover applications across the cluster or within a namespace. The connector asks the Kubernetes API server for pod creation and termination events and filters those events, by label, to find the pods of interest. Once identified, the Application Live View Connector registers the filtered application instances with the Application Live View Server. The Application Live View Server proxies calls through the connector to query actuator endpoint information.
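
As a rough illustration of this discovery step, the Go sketch below uses client-go to watch pod lifecycle events across all namespaces and filter them by a marker label. The label key and the register/deregister actions are placeholders for illustration, not the product's actual label or API.

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(cfg)

        // Watch pod events in every namespace, filtered by a marker label
        // (the label key is illustrative).
        w, err := clientset.CoreV1().Pods(metav1.NamespaceAll).Watch(context.TODO(),
            metav1.ListOptions{LabelSelector: "live-view-enabled=true"})
        if err != nil {
            panic(err)
        }
        for event := range w.ResultChan() {
            pod, ok := event.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            switch event.Type {
            case watch.Added:
                // A pod of interest appeared: register it with the server.
                fmt.Printf("register %s/%s\n", pod.Namespace, pod.Name)
            case watch.Deleted:
                // The pod terminated: remove its registration.
                fmt.Printf("deregister %s/%s\n", pod.Namespace, pod.Name)
            }
        }
    }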

In contrast, the Sidecar shares the pod with a single application and registers that application with the Application Live View Server. The Application Live View Server proxies calls through the sidecar to query actuator endpoint information.

The Application Live View CRD Controller fetches the list of application instances registered with the Application Live View Server and registers them as custom resources with the Kubernetes API server. The controller listens to events from the Application Live View Server and creates, updates, or deletes the corresponding custom resources in the Kubernetes API server.
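
A minimal sketch of that registration step using the client-go dynamic client follows; the group, version, resource, and field names are illustrative stand-ins, not the controller's actual CRD schema.

    package main

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
        "k8s.io/apimachinery/pkg/runtime/schema"
        "k8s.io/client-go/dynamic"
        "k8s.io/client-go/rest"
    )

    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        client := dynamic.NewForConfigOrDie(cfg)

        // Group, version, and resource names are illustrative.
        gvr := schema.GroupVersionResource{
            Group:    "appliveview.example.com",
            Version:  "v1",
            Resource: "applications",
        }

        // One custom resource per registered application instance,
        // carrying the metric metadata reported by the server.
        cr := &unstructured.Unstructured{Object: map[string]interface{}{
            "apiVersion": "appliveview.example.com/v1",
            "kind":       "Application",
            "metadata":   map[string]interface{}{"name": "demo-app"},
            "spec": map[string]interface{}{
                "metrics": map[string]interface{}{"cpu": "250m", "memory": "512Mi"},
            },
        }}

        if _, err := client.Resource(gvr).Namespace("apps").Create(
            context.TODO(), cr, metav1.CreateOptions{}); err != nil {
            panic(err)
        }
    }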

The Application Live View Server fetches an application's actuator data by proxying the request to the Application Live View Connector or Sidecar over an RSocket connection. The Application Live View CRD Controller fetches events from the Application Live View Server over an HTTP connection.
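
The RSocket wiring is internal to the product; as a simplified stand-in, the sketch below shows the same proxy pattern over plain HTTP using Go's standard library, with the pod address and server port chosen arbitrarily.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    func main() {
        // Address of an application pod's actuator port (illustrative).
        target, err := url.Parse("http://10.0.0.12:8080")
        if err != nil {
            log.Fatal(err)
        }

        // Requests such as GET /actuator/health are forwarded to the pod
        // unchanged, mirroring how the server proxies actuator queries.
        proxy := httputil.NewSingleHostReverseProxy(target)
        log.Fatal(http.ListenAndServe(":7000", proxy))
    }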
