Pipelines

A pipeline is a set of processing elements (bots) connected in series, where the output of one element is the input of the next one. The elements of a pipeline are often executed in sequence or in a time-sliced fashion based on the user-designed logic.

Pipelines are a common concept in everyday life: car assembly lines, CI/CD environments, and big-data processing systems, to name a few.
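The pipeline idea can be sketched in a few lines of Python (a hypothetical illustration, not RDA code): each processing element is a function, and the output of one element becomes the input of the next.

```python
from functools import reduce

def make_pipeline(*stages):
    """Compose processing elements in series: output of one feeds the next."""
    def run(data):
        return reduce(lambda acc, stage: stage(acc), stages, data)
    return run

# Three toy stages connected in series.
pipeline = make_pipeline(
    lambda rows: rows + [{"d": "Hello World"}],              # add a row
    lambda rows: [dict(r, d=r["d"].upper()) for r in rows],  # transform
    lambda rows: [r for r in rows if "HELLO" in r["d"]],     # filter
)

print(pipeline([]))  # [{'d': 'HELLO WORLD'}]
```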

In the cfxOIA RDA context, the pipeline mechanism is built on RDA bots, which were explained in the previous section. RDA bots are designed, implemented, and logically connected to solve real-world problems. This section covers the following example pipelines that are part of cfxOIA RDA.

  1. Simple example pipelines

  2. Nagios pipeline

  3. cmdb_file_load example pipeline

Note: The example pipelines explained in this section are provided for reference; users are free to modify, extend, and/or add new pipelines to solve other use cases based on these examples.

Simple example pipelines

Example 1

The below code snippet shows a very basic pipeline, in which bots are logically arranged or connected in series to perform the following tasks:

@dm:empty
    --> @dm:addrow d = 'Hello World'
    --> @dm:save name = 'hello-world-pipeline'

    --> @c:new-block
    --> @dm:recall name = 'hello-world-pipeline'
    --> *dm:filter * 

The pipeline adds a simple row ('Hello World'), saves it into a logical dataset (named 'hello-world-pipeline'), starts a new block (bot), recalls that dataset, and prints it to the console using a filter bot. This is equivalent to reading, saving, and printing the 'Hello World' string using pipelines and the bot libraries in RDA.

The above pipeline snippet, expressed using bots, can also be represented in YAML format as shown below:

name: Test-Hello-World
sequence:
- tag: '@dm:empty'
- tag: '@dm:addrow'
  query: d = 'Hello World'
- tag: '@dm:save'
  query: name = 'hello-world-pipeline'
- tag: '@c:new-block'
- tag: '@dm:recall'
  query: name = 'hello-world-pipeline'
- tag: '*dm:filter'
  query: '*'
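The save/recall mechanism used in Example 1 can be mimicked with a plain Python dictionary acting as the dataset store (a hypothetical sketch; the names `dm_save`/`dm_recall` are illustrative, not the actual RDA implementation):

```python
# Hypothetical in-memory dataset store mimicking @dm:save / @dm:recall.
datasets = {}

def dm_save(rows, name):
    datasets[name] = list(rows)   # persist rows under a logical name
    return rows

def dm_recall(name):
    return list(datasets[name])   # fetch a previously saved dataset

rows = []                               # @dm:empty
rows.append({"d": "Hello World"})       # @dm:addrow d = 'Hello World'
dm_save(rows, "hello-world-pipeline")   # @dm:save name = 'hello-world-pipeline'

# New block: recall the dataset and "print" it (*dm:filter *).
recalled = dm_recall("hello-world-pipeline")
print(recalled)  # [{'d': 'Hello World'}]
```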

Example 2

The below pipeline code snippet shows another simple pipeline: it creates three columns (ip_address, hostname, id), adds data to those columns, and uses a mapping bot to create a new column and fill in its data. All of these steps are put together via bots arranged in series to solve a logical problem.

@dm:empty
    --> @dm:addrow ip_address = '10.10.1.1' & hostname = 'host-1-1' & id = 'a1'
    --> @dm:addrow ip_address = '10.10.1.2' & id = 'a2'
    --> @dm:addrow hostname = 'host-1-4' & id = 'a4'
    --> @dm:addrow  id = 'a5'
    --> @dm:map from = 'ip_address,hostname' & to = 'label' & func = 'any_non_null' & default = 'No Label'
    --> *dm:filter * get ip_address,hostname,id

The above pipeline snippet, expressed using bots, can also be represented in YAML format as shown below:

name: add-new-column
sequence:
- tag: '@dm:empty'
- tag: '@dm:addrow'
  query: ip_address = '10.10.1.1' & hostname = 'host-1-1' & id = 'a1'
- tag: '@dm:addrow'
  query: ip_address = '10.10.1.2' & id = 'a2'
- tag: '@dm:addrow'
  query: hostname = 'host-1-4' & id = 'a4'
- tag: '@dm:addrow'
  query: id = 'a5'
- tag: '@dm:map'
  query: from = 'ip_address,hostname' & to = 'label' & func = 'any_non_null' & default
    = 'No Label'
- tag: '*dm:filter'
  query: '* get ip_address,hostname,id'
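The `@dm:map` step with `func = 'any_non_null'` coalesces the first non-null value across the source columns, falling back to the default. A rough Python equivalent (hypothetical sketch; rows modeled as dicts, missing keys treated as null):

```python
def any_non_null(row, from_cols, default):
    """Return the first non-null value among from_cols, else the default."""
    for col in from_cols:
        value = row.get(col)
        if value is not None:
            return value
    return default

# Same rows as in Example 2.
rows = [
    {"ip_address": "10.10.1.1", "hostname": "host-1-1", "id": "a1"},
    {"ip_address": "10.10.1.2", "id": "a2"},
    {"hostname": "host-1-4", "id": "a4"},
    {"id": "a5"},
]

# Equivalent of: @dm:map from='ip_address,hostname' & to='label'
#                & func='any_non_null' & default='No Label'
for row in rows:
    row["label"] = any_non_null(row, ["ip_address", "hostname"], "No Label")

print([r["label"] for r in rows])
# ['10.10.1.1', '10.10.1.2', 'host-1-4', 'No Label']
```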

As the above two examples show, users are free to create pipelines using either the bot convention or the YAML convention.

Note: The standalone RDA AIOps studio can be used to develop, test, and publish new pipelines using either of the above two methods, and those pipelines can then be reused in cfxOIA RDA for applicable use cases. A few such use cases are explained below. Refer to the standalone RDA AIOps studio documentation for details.

Nagios pipeline

The following screenshot provides the 'Nagios' pipeline that is used in the cfxOIA RDA environment for Nagios alert enrichment.

Step 1

Step 2

The following screenshot provides a UI view of Nagios alert enrichment.

In addition to the above screenshot, the following is the Nagios alert enrichment pipeline code snippet.

description: null
name: nagios
sequence:
- limit: 0
  query: '*'
  tag: '*nagios:nagios_host_group_members'
- query: name = 'nagios-host-group-members'
  tag: '@dm:save'
  

- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'nagios-host-group-members' & condition = "(host_name == '$assetName')"
    & enrichcolumns = 'group_id,hostgroup_name'
  tag: '@dm:addrow'
- query: name = 'alert-enrichment-definition-nagios'
  tag: '@dm:save'
- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'alert-enrichment-definition-nagios'
  tag: '@dm:addrow'
- query: name = "alert-enrichment-pipeline-changes"
  tag: '@dn:write-stream'

The above Nagios alert enrichment pipeline contains several logical blocks: reading the 'nagios_host_group_members' dataset, querying it with a logical condition that matches host_name against the 'assetName' from the incoming payload, enriching the alert, saving the result into another dataset ('alert-enrichment-definition-nagios'), and finally sending it upstream for further alert processing.

The above pipeline code snippet demonstrates a powerful capability of RDA: the incoming raw alert payload is enriched by the pipeline (reading a dataset, applying logic, enriching the alert with additional details) and passed on to the alert module for further processing. This enriched data can then be used for ML processing and other RCA analysis.
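The enrichment logic (match the incoming alert's host_name against the host-group dataset and copy the enrich columns onto the alert) can be sketched in Python as follows. The data and helper name are hypothetical; the actual matching is performed by the `@dm:addrow` condition in the pipeline.

```python
def enrich_alert(alert, dataset, condition_field, enrich_columns):
    """Copy enrich_columns from the first dataset row whose
    condition_field matches the alert's asset name."""
    for row in dataset:
        if row.get(condition_field) == alert.get("assetName"):
            for col in enrich_columns:
                alert[col] = row.get(col)
            break
    return alert

# Hypothetical host-group dataset (stands in for 'nagios-host-group-members').
host_groups = [
    {"host_name": "web-01", "group_id": "g1", "hostgroup_name": "web-servers"},
    {"host_name": "db-01",  "group_id": "g2", "hostgroup_name": "db-servers"},
]

alert = {"assetName": "db-01", "severity": "CRITICAL"}
enriched = enrich_alert(alert, host_groups, "host_name",
                        ["group_id", "hostgroup_name"])
print(enriched["hostgroup_name"])  # db-servers
```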

cmdb_file_load example pipeline

This section provides an additional CMDB example pipeline, 'cmdb_file_load'.

In addition to the above screenshot, the following is the example pipeline code snippet for user reference.

description: null
name: cmdb_file_load
sequence:
- limit: 0
  query: name='cmdbfiles'
  tag: '@dm:recall'

- tag: '@c:new-block'
- query: name='cmdbfiles_demo'
  tag: '@dm:recall'

- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'cmdbfiles' & condition = "(DEVICE_NAME == '$assetId' or DEVICE_NAME
    == '$assetName' or DEVICE_NAME == '$assetIpAddress')" & enrichcolumns = ''
  tag: '@dm:addrow'
- query: name = 'alert-enrichment-definition-cmdbfiles'
  tag: '@dm:save'

- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'alert-enrichment-definition-cmdbfiles'
  tag: '@dm:addrow'
- query: name = "alert-enrichment-pipeline-changes"
  tag: '@dn:write-stream'
  

- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'cmdbfiles_demo' & condition = "(DEVICE_NAME == '$assetId'
    or DEVICE_NAME == '$assetName' or IP_ADDRESS == '$assetIpAddress' or FQDN == '$assetIpAddress')"
    & enrichcolumns = ''
  tag: '@dm:addrow'
- query: name = 'alert-enrichment-definition-cmdbfiles_demo'
  tag: '@dm:save'

- tag: '@c:new-block'
- tag: '@dm:empty'
- query: datasetname = 'alert-enrichment-definition-cmdbfiles_demo'
  tag: '@dm:addrow'
- query: name = "alert-enrichment-pipeline-changes"
  tag: '@dn:write-stream'
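The CMDB condition above matches an asset on any of several identifiers ($assetId, $assetName, $assetIpAddress). That OR-ed multi-key lookup can be sketched in Python as follows (hypothetical CMDB rows and helper name, for illustration only):

```python
def match_cmdb_row(row, asset):
    """True when the CMDB row matches the asset on any known identifier,
    mirroring the OR-ed DEVICE_NAME / IP_ADDRESS / FQDN condition."""
    return (
        row.get("DEVICE_NAME") in (asset.get("assetId"), asset.get("assetName"))
        or row.get("IP_ADDRESS") == asset.get("assetIpAddress")
        or row.get("FQDN") == asset.get("assetIpAddress")
    )

# Hypothetical CMDB dataset (stands in for 'cmdbfiles_demo').
cmdb = [
    {"DEVICE_NAME": "core-sw-1", "IP_ADDRESS": "10.0.0.1", "FQDN": "core-sw-1.example.com"},
    {"DEVICE_NAME": "edge-fw-1", "IP_ADDRESS": "10.0.0.2", "FQDN": "edge-fw-1.example.com"},
]

asset = {"assetId": "A-100", "assetName": "edge-fw-1", "assetIpAddress": "10.0.0.2"}
matches = [r for r in cmdb if match_cmdb_row(r, asset)]
print(matches[0]["DEVICE_NAME"])  # edge-fw-1
```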

In addition to adding/viewing pipelines, the pipelines UI provides the following actions for end-users to work with pipelines: Details, Execute, and Deactivate.

1. Details

The 'Details' action button navigates users to a new UI that presents the various pipeline runs (also called jobs, or job runs of a pipeline) and their status, as shown in the below screenshot.

Once users click or select the 'Details' action button, the UI navigates to the job status of the selected pipeline, as shown below.

The above screen capture shows an important aspect of pipeline runtime execution: the job status of a pipeline run. The Pipeline Jobs table provides important job status columns (Schedule Name, Pipeline Name, Status, Scheduled Time, etc.). A successfully completed pipeline job shows a 'Completed' status, whereas a failed job shows 'Error/Failed' and provides more log details.

In addition to the above, further actions are available on each job run, as shown in the below screenshot.

Check Status

The 'Check Status' action button provides the end-to-end status of the pipeline job (as shown in the below screenshot). In case of any errors, it shows which step of the pipeline errored out and the reason why.

Summary

The 'Summary' action button shows the overall summary details of pipeline job execution (results summary) as shown below.

View Results

The 'View Results' action button shows the end results as part of the current pipeline job execution as shown below.

2. Execute

The 'Execute' action button lets users execute the pipeline via a job. Once users click the 'Execute' button, the UI pops up the pipeline code along with a confirmation button (to let users confirm the job execution) as shown below.

Once users click 'Execute', the job is triggered and executes the pipeline in context. Results can be viewed via the 'Summary' or 'View Results' buttons as explained earlier.

3. Deactivate

The 'Deactivate' action button lets users disable the pipeline, as shown below. In addition to deactivating, it allows users to edit or change the pipeline as per their requirements.

Once the pipeline is in 'Inactive' mode, users are able to perform additional actions on it (Edit, Delete, Activate), as shown in the following screenshot.

Users can edit the pipeline, delete it, or activate it again after editing.


Last updated 3 years ago

Navigating to Nagios alert-enrichment pipeline
Select 'View YAML' action button to view the pipeline code snippet that was used in Nagios alert enrichment.
CMDB load file pipeline in YAML format.
Details view UI that provides detailed status of a pipeline job
Three action buttons are provided for status information of a pipeline job
Check Status UI depicting pipeline job execution flow.
Summary UI depicting status of pipeline job execution summary
View Results UI depicting end-results of a pipeline job execution.
Execute UI for a pipeline job prompting user to confirm the execution
Deactivate UI to disable pipeline job for editing / modifications.
UI depicting inactive state of pipeline
Once pipeline is in inactive state, Edit, Delete and Activate buttons are enabled