SevOne Data Publisher

SevOne Data Publisher (SDP) is a SevOne component that listens for new data points for devices and publishes them to an Apache Kafka or Apache Pulsar broker cluster. SevOne Data Publisher enables you to perform analyses by combining SevOne data with other types of data, for example IT and business data. It allows you to stream real-time data from SevOne NMS to an external message broker. Any application capable of integrating with Apache Kafka or Pulsar can subscribe to the published data.

SevOne Data Publisher can be configured using the Graphical User Interface (GUI). For details, please refer to Cluster Manager > section SevOne Data Publisher Configuration.

This topic describes how SevOne Data Publisher can be configured using the Command Line Interface (CLI).

SDP exports data from the local peer; it may be necessary to configure SDP on each peer.

SDP Quick Start

  1. Prepare a Kafka broker or a Pulsar broker to which SDP can publish data points. If you are new to Kafka or Pulsar, refer to the Apache Kafka or Apache Pulsar quick start documentation for a quick setup.

  2. SSH into your SevOne NMS appliance or cluster as root. When logged in as root, it is not necessary to use sudo with the commands. However, if you do not SSH in as root, you must precede your commands with sudo.

    $ ssh root@<NMS appliance>
  3. Change directory to /etc/sevone/sdp.

    $ cd /etc/sevone/sdp
  4. Copy example-config.yml file to config.yml.

    $ cp example-config.yml config.yml

    For reference, the complete /etc/sevone/sdp/example-config.yml file released with SevOne NMS can be found in section Appendix: Configuration File below.

  5. Using the text editor of your choice, edit config.yml.

    Example
    $ vi config.yml
  6. In the config.yml file, make the following changes. A combined example of these edits is shown after this procedure.

    1. If you are using the Kafka broker,

      • search for bootstrap.servers. Populate the value with <your Kafka IP address>:<port number>.

    2. If you are using the Pulsar broker,

      • search for serviceUrl. Populate the value with pulsar://<your Pulsar IP address>:<port number>.

      • change output.default.type from kafka to pulsar.

  7. Save config.yml file.

  8. Start SDP.

    $ SevOne-act sdp enable
  9. Wait for up to one minute. You should see data points arriving at your Kafka / Pulsar broker.

  10. Stop SDP.

    $ SevOne-act sdp disable
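
For reference, the edits in step 6 amount to something like the following sketch. The broker addresses below are placeholders, not defaults; replace them with your own Kafka or Pulsar broker address, and set output.default.type to match the broker type you use.

Example: Quick start edits in config.yml (placeholder addresses)
output:
  default:
    # Set to pulsar if you are publishing to a Pulsar broker
    type: kafka
  publishers:
  - name: default-producer
    type: kafka
    topic: sdp
    producer:
      # <your Kafka IP address>:<port number>
      bootstrap.servers: 192.0.2.10:9092
  - name: default-pulsar-producer
    type: pulsar
    topic: sdp-pulsar
    client:
      # pulsar://<your Pulsar IP address>:<port number>
      serviceUrl: pulsar://192.0.2.20:6650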

SDP CLI Operations

Enable / Disable SDP

When /etc/sevone/sdp/config.yml is configured, you can enable SevOne Data Publisher. Execute the following step.

Enable SDP
$ SevOne-act sdp enable

To disable SevOne Data Publisher, execute the following step.

Disable SDP
$ SevOne-act sdp disable

Configure SDP to start on Reboots

  1. After you have configured /etc/sevone/sdp/config.yml, execute the following steps to ensure that SevOne Data Publisher starts on reboot.

  2. Using the text editor of your choice, edit /etc/supervisord.d/SevOne-data-publisher.ini file.

  3. Set variable autostart to true.

    autostart=true
  4. Save the file.

  5. To apply the update to /etc/supervisord.d/SevOne-data-publisher.ini file, execute the following command.

    $ /usr/bin/supervisorctl update

Restart SDP

After the initial configuration, whenever a change is made to SevOne Data Publisher configuration, you must restart SevOne Data Publisher.

$ /usr/bin/supervisorctl restart SevOne-data-publisher

Configuration

The following sections show how to configure the various settings present in the config.yml file.

version

This is the version of the SevOne Data Publisher configuration file, i.e., the version of your config.yml file.

Example: Parameter 'version' in config.yml file
version: 1

log

Allows you to select the log level.

  • level - enter the log level from the list of accepted values - error, warn, info, or debug.

Example: Parameter 'log' in config.yml file
log:
  # Accepted values: error, warn, info, debug
  level: info

cache

  • refreshPeriodSeconds - enter the refresh period in seconds. The default is set to 1800 seconds.

  • mysqldata - by default, it is set to /SevOne/appliance/settings/mysqldata.cnf

Example: Parameter 'cache' in config.yml file
cache:
  refreshPeriodSeconds: 1800
  mysqldata: /SevOne/appliance/settings/mysqldata.cnf

nms

  • kafka

    • url - enter the NMS internal Kafka address from where SDP receives the input. <NMS Kafka IP address>:<port number>. Port is always 9092. For example, 127.0.0.1:9092

    • group - this is the Kafka consumer group name. By default, it is set to sdp_group.

Example: Parameter 'nms' in config.yml file
# Configure the kafka connection information
nms:
  kafka:
    url: 127.0.0.1:9092
    group: sdp_group

sdp

Allows you to configure SDP output format.

  • outputFormat - data output format. It can be set to avro or json. For avro schema setup, please refer to section Datapoint Enrichment Configuration below.

  • clusterName - the cluster name; it appears with exactly the same name in the output message.

  • includeDeviceOID - flag that determines whether the Object Identifier (OID) must be displayed in the output message. Field is applicable to json format only.

  • schemaFile - path for the avro schema file. The default path is /etc/sevone/sdp/schema.json.

  • workers - number of workers for live data publishing. If unspecified, it defaults to 10 workers.

Example: Parameter 'sdp' in config.yml file
# Configure the SDP output format
sdp:
  outputFormat: avro
  clusterName: NMS
  includeDeviceOID: false
  schemaFile: /etc/sevone/sdp/schema.json
  workers: 10

include-filters / exclude-filters

Allows you to configure filters for the SDP output. This is an array of filters, and for each filter, there are rules.

SDP supports both allowlist and blocklist filtering.

  • include-filters for allowlist filter rules.

  • exclude-filters for blocklist filter rules.

Filters only apply to data associated with the local peer.

The following variables are supported by include-filters / exclude-filters.

  • name - name of the filter.

  • rules - the rules configured for each filter. Each rule (row) can specify five types of IDs, and a filter can contain any number of rows. A value of -1 matches any ID. A data point matches a row only if it matches all of the IDs in that row, and it matches a filter when it matches any of the filter's rows. If a data point matches overlapping rules in both the allowlist and the blocklist, the blocklist takes precedence. You can obtain the IDs for the items listed below using the SevOne NMS API.

    • devGrpID - device group ID.

    • devID - device ID.

    • objGrpID - object group ID.

    • objID - object ID.

    • pluginID - plugin ID.

Example: Parameters 'include-filters' & 'exclude-filters' in config.yml file
# Filters for the SDP output
# Note: SDP only exports data from the local peer, so these filters only apply to local data.
include-filters:
- name: default
  # Specify your filters as different elements in this array
  # by specifying an ID that you would like to be included.
  # A value of -1 is interpreted as any ID.
  # Each column in a given filter is combined with logical AND.
  # Multiple filters are combined with logical OR.
  rules:
  # Example: Include everything
  - { devGrpID: -1, devID: -1, objGrpID: -1, objID: -1, pluginID: -1 }
- name: allowlist1
  rules:
  # Example: Include only devices 5 and 6
  - { devGrpID: -1, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }
  # Example: Include device 5 only if it's in device group 2 and device 6
  # - { devGrpID: 2, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  # - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }
  # Example: Include only objects 2, 3, and 4 from device 5
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 2, pluginID: -1 }
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 3, pluginID: -1 }
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 4, pluginID: -1 }
exclude-filters:
- name: blocklist1
  # Specify your filters as different elements in this array
  # by specifying an ID that you would like to be excluded.
  # A value of -1 is interpreted as any ID.
  # Each column in a given filter is combined with logical AND.
  # Multiple filters are combined with logical OR.
  rules:
  # Example: Exclude everything
  - { devGrpID: -1, devID: -1, objGrpID: -1, objID: -1, pluginID: -1 }
- name: blocklist2
  rules:
  # Example: Exclude only devices 5 and 6
  - { devGrpID: -1, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }

schema-registry

Allows you to configure the settings for the schema registry server, if needed.

Currently SevOne Data Publisher supports schema registry for Kafka only, not for Pulsar.

SevOne Data Publisher validates and registers the schema with the Confluent schema registry when it starts. To enable this feature, you must add the schema registry server URL to the config.yml file. Add the URL under schema-registry in the config.yml file.

Example: Parameter 'schema-registry' in config.yml file
# Configure the settings for Schema Registry server if needed
schema-registry:
  # url: "http://SCHEMA_REGISTRY_SERVER_HOST:SCHEMA_REGISTRY_SERVER_PORT"

  # Subject for the schemas
  subject: sevone-data-publisher
  • Replace SCHEMA_REGISTRY_SERVER_HOST with the server host name or IP address.

  • Replace SCHEMA_REGISTRY_SERVER_PORT with the server port.

To enable this, remove the # that precedes the url at the beginning of the line.

You can also configure the subject name for the schema. Replace the default subject name, sevone-data-publisher, with your new subject name.

Example
schema-registry:
  url: "http://123.123.123.123:9999"
  subject: sevone-data-publisher

status

To monitor SevOne Data Publisher data processing, configure the following http and/or https settings.

  • metricsLogInterval - periodically prints the statistics in the log /var/log/sdp.log if it is set to a positive integer. If it is set to 0, this feature is disabled. The value is set in seconds.

  • http

    • enabled - specifies whether the http status page is enabled (true) or disabled (false).

    • port - port that SevOne Data Publisher status page runs on. The default port is 8082.

  • https

    • enabled - specifies whether the https status page is enabled (true) or disabled (false).

    • secure_port - secure port that SevOne Data Publisher status page runs on. The default port is 8443.

    • server_cert - path to the server certificate.

    • server_key - Path to the server key.

    • private_key_password - Private key password. This is an optional field.

Example: Parameter 'status'
status:
  metricsLogInterval: 300
  http:
    enabled: true
    port: 8082
  https:
    enabled: false
    secure_port: 8443
    server_cert: /etc/sevone/certs/server.crt
    server_key: /etc/sevone/certs/server.key
    private_key_password: SevOne123

output

Allows you to set the output configuration. This section contains two parts.

  • default - holds the common configuration that is applied to all publishers.

  • publishers - contains the list of publishers. Each publisher can be configured with its own settings, and an individual publisher's settings override the defaults.

default

  • key-fields / key-delimiter - SevOne Data Publisher supports Kafka/Pulsar partitions based on the key field. A key is composed of key fields and a key delimiter. Kafka/Pulsar handles the message distribution to different partitions based on the key and will ensure that messages with the same key go to the same partition.

    Under kafka/pulsar, key-fields is necessary for mapping devices to a specific partition.

    Example: Parameter 'output.default'
    # Default settings for publishers, which can be overwritten by each publisher
    default:
      # Customize the message key format if needed. Available fields include:
      # deviceId, deviceName, deviceIp, peerId, objectId, objectName,
      # objectDesc, pluginId, pluginName, indicatorId, indicatorName,
      # format, value, time, clusterName, peerIp
      # Default format is "deviceId:objectId".
      key-fields:
      - deviceId
      - objectId
      key-delimiter: ":"
  • type - can be kafka or pulsar. This flag determines which type of publisher, from the provided list of publishers, is used.

    Only one broker type, kafka or pulsar, is supported at a time. Both types of brokers cannot run at the same time.

    Example: Parameter 'output.default.type'
    # Default to be kafka
    # Allowed values are: kafka, pulsar
    type: kafka


  • kafka-producer - provides the default values for the kafka-producer.

    Example: Parameter 'output.default.kafka-producer'
    kafka-producer:
      acks: -1
      retries: 0
      linger.ms: 10
      batch.size: 1000000
      request.timeout.ms: 60000
      max.in.flight.requests.per.connection: 2


  • pulsar-producer - provides the default values for the pulsar-producer.

    Example: Parameter 'output.default.pulsar-producer'
    pulsar-producer:
      batchingMaxMessages: 1000
      sendTimeoutMs: 30000
      blockIfQueueFull: true

publishers

This field is a list with different publishers.

  • When a publisher corresponds to a kafka broker, valid fields are:

    • name - name of the publisher.

    • type - kafka

    • topic - name of the topic of the publisher.

    • isLive - by default, it is set to true. When set to true, only live data is allowed. When set to false, only historical data is allowed. Live data and historical data cannot be published to the same topic.

    • version (optional) - the version of the Kafka broker. It is highly recommended not to set it, so that SDP can automatically deduce an appropriate value. It should only be used to ensure compatibility in rare edge cases, such as when you need to enable a feature that requires a specific Kafka version.

    • producer - for kafka's producer-specific settings. For details, please refer to section Kafka Producer.

    • filters - provide the list of filter names.

  • When a publisher corresponds to a pulsar broker, valid fields are:

    • name - name of the publisher.

    • type - pulsar

    • tenant - name of the tenant. By default, it is set to public.

    • namespace - name of the namespace. By default, it is set to default.

    • topic - name of the topic of the publisher.

    • isLive - by default, it is set to true. When set to true, only live data is allowed. When set to false, only historical data is allowed. Live data and historical data cannot be published to the same topic.

    • client - for pulsar's client-specific settings. For details, please refer to section Pulsar Client below.

    • producer - for producer-specific settings. For details, please refer to section Pulsar Producer below.

    • filters - provide the list of filter names.

You may define both kafka publishers and pulsar publishers in config.yml. However, when SDP runs, it uses all publishers whose type equals output.default.type. The remaining publishers are ignored.


Example: Parameter 'output.publishers'
publishers:
# Kafka producer configuration options.
- name: default-producer
  type: kafka
  topic: sdp
  isLive: true
  producer:
    bootstrap.servers: 123.123.123.123:9092
    # security.protocol: SSL
    # ssl.ca.cert.location: server.crt
    # ssl.client.cert.location: client.crt
    # ssl.client.key.location: client.key
    # ssl.client.key.password: key_password
    # SASL configuration
    # sasl.mechanism: GSSAPI
    # sasl.kerberos.service.name: kafka
    # sasl.username: sevone
    # sasl.password: SevOne123
    # sasl.gssapi.useKeyTab: true
    # sasl.gssapi.storeKey: true
    # sasl.gssapi.keyTab: /tmp/kafka.keytab
    # sasl.gssapi.principal: kafka
    # sasl.gssapi.realm: example.com
    # sasl.gssapi.kerberosconfpath: /etc/krb5.conf
  filters:
  - allowlist1
  - blocklist1
# Pulsar producer configuration options.
- name: default-pulsar-producer
  type: pulsar
  isLive: true
  topic: sdp-pulsar
  tenant: public
  namespace: default
  topic-type: persistent
  client:
    serviceUrl: pulsar://10.168.132.148:6650
    connectionTimeoutMs: 10000
    # useTls: true
    # tlsTrustCertsFilePath: server.crt
    # tlsAllowInsecureConnection: false
    # authPluginClassName: org.apache.pulsar.client.impl.auth.AuthenticationTls
    # authParams: tlsCertFile:client.crt,tlsKeyFile:client.key
  producer:
    compressionType: ZLIB
  filters:
  - allowlist1
  - blocklist1

In the example above, you have publishers default-producer and default-pulsar-producer.

Producer Configuration

SDP uses Kafka/Pulsar producers to publish data points to Kafka/Pulsar brokers. The producers are configurable as follows.

Kafka

Producer

For a Kafka producer, configure the following. Please refer to https://kafka.apache.org/36/documentation.html#producerconfigs for available settings.

Any configuration parameter not listed in the table below is not supported by SDP.


Kafka Producer

Description

Values

bootstrap.servers

The Kafka server, also known as a broker, and the port number to access the server.

The format is <Kafka server IP address / hostname>:<port number>. For example, 10.129.14.10:9092


security.protocol

The protocol used to communicate with the Kafka broker.

SSL, SASL_SSL

acks

The number of acknowledgments the producer requires the leader to have received before considering a request complete.

0, 1, and -1 [default]

buffer.memory

The total bytes of memory the producer can use to buffer records waiting to be sent to the server.


compression.type

The compression type for all data generated by the producer.

none [default], gzip, snappy, lz4, or zstd

retries

Setting a value greater than zero will cause the client to resend any record whose send fails with a potentially transient error.

0 [default]

batch.size

The producer will attempt to batch records together into fewer requests whenever multiple records are being sent to the same partition.

1000000 [default]

client.id

An id string to pass to the server when making requests. The purpose of this is to be able to track the source of requests beyond just ip/port by allowing a logical application name to be included in server-side request logging.

sdp [default]

linger.ms

This setting gives the upper bound on the delay for batching: once we get batch.size worth of records for a partition it will be sent immediately regardless of this setting, however if we have fewer than this many bytes accumulated for this partition we will 'linger' for the specified time waiting for more records to show up.

10 [default]

max.request.size

The maximum size of a request in bytes. This setting will limit the number of record batches the producer will send in a single request to avoid sending huge requests.

1000000 [default]

request.timeout.ms

The configuration controls the maximum amount of time the client will wait for the response of a request.

60000 [default]

enable.idempotence

When set to 'true', the producer will ensure that exactly one copy of each message is written in the stream. If 'false', producer retries due to broker failures, etc., may write duplicates of the retried message in the stream.

true or false

If not explicitly passed, this is set dynamically based on other parameters.

max.in.flight.requests.per.connection

The maximum number of unacknowledged requests the client will send on a single connection before blocking.

2 [default]

metadata.max.age.ms

The period of time in milliseconds after which we force a refresh of metadata even if we haven't seen any partition leadership changes to proactively discover any new brokers or partitions.

600000 [default]

reconnect.backoff.ms

The maximum amount of time in milliseconds to wait when reconnecting to a broker that has repeatedly failed to connect.

100 [default]

retry.backoff.ms

The base amount of time to wait before attempting to reconnect to a given host. This avoids repeatedly connecting to a host in a tight loop. This backoff applies to all connection attempts by the client to a broker.

10 [default]

Encryption & Authentication

SSL / TLS only Encryption

Kafka Producer

Description

Values

security.protocol

Protocol used to communicate with brokers.

SSL

ssl.ca.cert.location

Location of the CA certificate used to verify server-side certificates.

[string]

ssl.client.cert.location

Location of the client certificate used by the server to validate the client in mutual authentication.

[string]

ssl.client.key.location

Location of the client key used by the server to validate the client in mutual authentication.

[string]

ssl.client.key.password

Password for the key given in ssl.client.key.location, so that it can be decrypted internally for data encryption.

[string]
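
As an illustration only, the SSL settings above could be combined in a publisher's producer section as in the following sketch; the broker address, certificate paths, and password are placeholders, not defaults.

Example: SSL-only encryption (sketch)
producer:
  bootstrap.servers: 123.123.123.123:9093
  security.protocol: SSL
  ssl.ca.cert.location: /etc/sevone/certs/server.crt
  ssl.client.cert.location: /etc/sevone/certs/client.crt
  ssl.client.key.location: /etc/sevone/certs/client.key
  ssl.client.key.password: key_password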

SSL + SASL Authentication

SASL (Simple Authentication and Security Layer) is an authentication protocol which has associated mechanisms. It allows the user to authorize SevOne Data Publisher to a Kafka service using credentials. The sections below describe the mechanisms that SDP supports.

SASL works with or without encryption.

Kafka Producer

Values

security.protocol

SASL_SSL

ssl.ca.cert.location

[string]

ssl.client.cert.location

[string]

ssl.client.key.location

[string]

ssl.client.key.password

[string]

For additional details, please refer to https://kafka.apache.org/36/documentation.html#security_sasl.

GSSAPI Authentication

For GSSAPI (Generic Security Services Application Program Interface) as authentication mechanism, SDP uses the Kerberos service. For details on how to set Kerberos, please refer to https://kafka.apache.org/36/documentation.html#security_sasl_kerberos . The following parameters are supported for SDP configuration.

Kafka Producer

Description

Values

sasl.mechanism

Authentication mechanism

[string](GSSAPI)

sasl.kerberos.service.name

Service name of your kerberos service

[string]

sasl.gssapi.useKeyTab

Boolean value to enable the use of a keytab

[Boolean] (true)

sasl.gssapi.keyTab

Path to the keytab file

[string]

sasl.gssapi.principal

Principal value of kerberos setup

[string]

sasl.gssapi.kerberosconfpath

Path to the Kerberos configuration file, krb5.conf

[string]

sasl.gssapi.realm

Name of the Realm as set in Kerberos server

[string]

sasl.gssapi.storekey

Boolean value for storekey

[Boolean] (true)

PLAIN Text Authentication

For details on PLAIN Text setup on Kafka broker, please refer to https://kafka.apache.org/36/documentation.html#security_sasl_plain. The following parameters are supported for SDP configuration.

Kafka Producer

Description

Values

sasl.mechanism

Authentication mechanism

[string](PLAIN)

sasl.username

Username

[string]

sasl.password

Password

[string]

SCRAM Authentication

For details on SCRAM (Salted Challenge Response Authentication Mechanism) setup on the Kafka broker, please refer to https://kafka.apache.org/36/documentation.html#security_sasl_scram. The following parameters are supported for SDP configuration.

Kafka Producer

Description

Values

sasl.mechanism

Authentication mechanism

[string](SCRAM-SHA-256,SCRAM-SHA-512)

sasl.username

Username

[string]

sasl.password

Password

[string]
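
As a sketch, a SASL_SSL producer section using SCRAM could look like the following; the broker address, CA certificate path, and credentials are placeholders. For PLAIN authentication, only the sasl.mechanism value differs.

Example: SASL_SSL with SCRAM (sketch)
producer:
  bootstrap.servers: 123.123.123.123:9093
  security.protocol: SASL_SSL
  ssl.ca.cert.location: /etc/sevone/certs/server.crt
  sasl.mechanism: SCRAM-SHA-256
  sasl.username: sevone
  sasl.password: SevOne123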

OAUTHBEARER Authentication

OAUTHBEARER authentication is not supported by SDP.

Pulsar

Any configuration parameter not listed in the tables below is not supported by SDP.

Producer

For a Pulsar producer, configure the following.

Pulsar Producer

Description

Values

autoUpdatePartitions

If enabled, partitioned producer will automatically discover new partitions at runtime.

[Boolean]
Default: true

batchingEnabled

Enable Batching.

[Boolean]
Default: true

batchingMaxMessages

Set the maximum number of messages permitted in a batch.

[int]
Default: 1000

batchingMaxPublishDelayMicros

Specifies the time period within which the messages sent will be batched, if batching is enabled.

[int]
Default: 10000 µs (microseconds)

blockIfQueueFull

Set whether the send operations should block when the outgoing message queue is full.

[Boolean]
Default: false

chunkingEnabled

Controls whether automatic chunking of messages is enabled for the producer.

[Boolean]
Default: false

compressionType

Set the compression type for the producer.

  • NONE (default)

  • LZ4

  • ZLIB

  • ZSTD

hashingScheme

Change the HashingScheme used to choose the partition where a particular message is published.

  • "JavaStringHash"

  • "Murmur3_32Hash"

maxPendingMessages

Set the maximum size of the queue holding the messages pending to receive an acknowledgment from the broker.

[int]
Default: 500000

messageRoutingMode

Set the MessageRoutingMode for a partitioned producer. Refer to the Apache Pulsar documentation for details.

  • SinglePartition

  • RoundRobinPartition (default)

sendTimeoutMs

The number in milliseconds for which Pulsar will wait to report an error if a message is not acknowledged by the server.

[int]
Default: 30000 ms (milliseconds)

Client

For a Pulsar client, configure the following.

Pulsar Client

Description

Values

connectionTimeoutMs

Duration to wait for a connection to a broker to be established.

[int]

Default: 10000 ms (milliseconds)

serviceUrl

The service URL of Pulsar.

For example, pulsar://10.168.132.148:6650

useTls

Enable TLS.

[Boolean]
Default: false

tlsTrustCertsFilePath

Set the path to the trusted TLS certificate file

[String]

tlsAllowInsecureConnection

Configure whether the Pulsar client accepts untrusted TLS certificates from the broker

[Boolean]
Default: false

tlsHostnameVerificationEnable

Configure whether the Pulsar client verifies the validity of the host name from the broker

[Boolean]
Default: false

authPluginClassName

Authentication plugin class name

If set, the supported value is org.apache.pulsar.client.impl.auth.AuthenticationTls, which enables TLS client authentication.

authParams

Authentication parameters

When authPluginClassName is set to org.apache.pulsar.client.impl.auth.AuthenticationTls, authParams should be set to

tlsCertFile:client.crt,tlsKeyFile:client.key which provides the client certificate and the client private key.

operationTimeoutMs

Producer create, subscribe, and unsubscribe operations are retried until this interval elapses, after which the operation is marked as failed

[int]
Default: 30000 ms (milliseconds)

numIoThreads

Maximum number of connections to a single broker that will be kept in the pool.

[int]
Default: 1 connection

keepAliveIntervalSeconds

Configure the ping send and check interval

[int]
Default: 30 s (seconds)
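
Putting the TLS-related client settings together, a TLS-enabled Pulsar client section might look like the sketch below; the service URL (commonly a pulsar+ssl:// address on the broker's TLS port) and the certificate paths are placeholders, not defaults.

Example: TLS-enabled Pulsar client (sketch)
client:
  serviceUrl: pulsar+ssl://10.168.132.148:6651
  connectionTimeoutMs: 10000
  useTls: true
  tlsTrustCertsFilePath: /etc/sevone/certs/server.crt
  tlsHostnameVerificationEnable: true
  authPluginClassName: org.apache.pulsar.client.impl.auth.AuthenticationTls
  authParams: tlsCertFile:/etc/sevone/certs/client.crt,tlsKeyFile:/etc/sevone/certs/client.key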

Historical Backfill

This feature enables you to republish historical data for a specified time period.

The script /usr/local/scripts/utilities/republish-historical-data.sh is provided to start a historical backfill. You must provide the start time and end time as parameters to the script in UNIX timestamp format. Once the historical data request is made, SevOne Data Publisher collects all data points for each indicator present on the peer and starts to republish them to historical brokers.

A historical broker is a broker whose isLive flag is set to false.
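
For example, assuming the GNU date utility is available on the appliance, a time range covering roughly the last 24 hours could be built as follows; the range itself is only illustrative.

Example: Building a backfill time range (sketch)
$ START=$(date -d '24 hours ago' +%s)
$ END=$(date +%s)
$ /usr/local/scripts/utilities/republish-historical-data.sh "$START" "$END"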

  1. Execute the following command to start historical backfill.

    $ /usr/local/scripts/utilities/republish-historical-data.sh <start time> <end time>
    Example
    $ /usr/local/scripts/utilities/republish-historical-data.sh 1697026113.058791 1697026113.058791
     
    =============================
    | Web Server Status |
    =============================
    | Http Enabled | True
    | Http Port | 8082
    | Https Enabled | False
    | Https Port | 8443
    ============================
    Http Server Enabled, the republish endpoint is http://localhost:8082/status/republish
    Send republishing request through http://localhost:8082/status/republish
    Started to republished data from 1697026113.058791 to 1697026113.058791

    Only one republishing request can be executed at a time. If multiple requests are made, you will get a message asking you to wait for the previous request to complete.

    Example
    $ /usr/local/scripts/utilities/republish-historical-data.sh 1697026113.058791 1697036113.058791
     
    =============================
    | Web Server Status |
    =============================
    | Http Enabled | True
    | Http Port | 8082
    | Https Enabled | False
    | Https Port | 8443
    ============================
    Http Server Enabled, the republish endpoint is http://localhost:8082/status/republish
    Send republishing request through http://localhost:8082/status/republish
    Can't Process: Republisher is already running, please wait for the previous request to complete
  2. Historical backfill stops automatically after the request has completed. However, to forcefully stop the historical backfill, execute the command below.

    $ /usr/local/scripts/utilities/republish-historical-data.sh STOP
    Example
    $ /usr/local/scripts/utilities/republish-historical-data.sh STOP
     
    =============================
    | Web Server Status |
    =============================
    | Http Enabled | True
    | Http Port | 8082
    | Https Enabled | False
    | Https Port | 8443
    ============================
    Http Server Enabled, the republish endpoint is http://localhost:8082/status/republish
    Stop republishing through http://localhost:8082/status/republish/stop
    republishing has been stopped

APIs

SevOne Data Publisher supports the following GET endpoints.

Endpoint path

Description

Output message format

/api

Used to check the status of the daemon. The response is always api.

PLAINTEXT

/api/configs

Returns the configuration setups.

JSON

/api/filters

Returns the statistics of all filters.

JSON

/api/filters/<path>

Response contains the statistics of the filter named by path.

JSON

/api/system

Response contains the statistics of the live_source, uptime, workers, and cache.

JSON

/api/system/cache

Response contains all the cache statistics.

JSON

/api/system/cache/<path>

Response contains the cache statistics for the path.

JSON

/api/system/live_source

Response contains the internal Kafka statistics.

JSON

/api/system/live_workers

Response contains all the live_workers statistics.

JSON

/api/publishers

Response contains statistics of all the publishers.

JSON

/api/publishers/<path>

Response contains the statistics of the publisher named by path.

JSON

/api/system/uptime

Response contains the uptime statistics.

JSON

/stats

Response contains the string SevOne Data Publisher.

PLAINTEXT

/status

Response contains the complete internal statistics of SDP.

PLAINTEXT

Examples: API commands

Here are a few examples of how to obtain the output for the API commands listed in the table above.

Example# 1
$ curl -s http://10.128.11.132:8082/api/publishers | jq
{
  "default-hist-producer": {
    "dataPointsSentFailureCount": 14,
    "dataPointsSentSuccessCount": 7
  },
  "default-producer": {
    "dataPointsSentFailureCount": 1,
    "dataPointsSentSuccessCount": 5
  }
}
Example# 2
$ curl -s http://10.129.13.198:8082/api/filters | jq
{
  "excludeFilters": null,
  "includeFilters": [
    {
      "name": "Everything",
      "matchCount": 305
    }
  ]
}
Example# 3
$ curl -s http://10.129.13.198:8082/api/system/uptime | jq
"0h:16m:10s:175ms"

Datapoint Enrichment Configuration

SevOne Data Publisher outputs data in either avro or json format.

When using json, the output format is fixed.

When using avro, you can configure the json schema to customize the fields that SevOne Data Publisher exports.

The avro output is controlled by a schema file. The schema is in json format and can be found in /etc/sevone/sdp/schema.json.

Using a text editor of your choice, update the schema for the avro output in the /etc/sevone/sdp/schema.json file to include / exclude the following supported fields.

Example: /etc/sevone/sdp/schema.json file
{
  "fields": [
    { "name": "deviceId", "type": "int"},
    { "name": "deviceName", "type": "string"},
    { "name": "deviceIp", "type": "string"},
    { "name": "peerId", "type": "int"},
    { "name": "objectId", "type": "int"},
    { "name": "objectName", "type": "string"},
    { "name": "objectDesc", "type": "string"},
    { "name": "pluginId", "type": "int"},
    { "name": "pluginName", "type": "string"},
    { "name": "indicatorId", "type": "int"},
    { "name": "indicatorName", "type": "string"},
    { "name": "format", "type": "int"},
    { "name": "value", "type": "string"},
    { "name": "time", "type": "double"},
    { "name": "clusterName", "type": "string"},
    { "name": "peerIp", "type": "string"},
    { "name": "objectType", "type": "string"},
    { "name": "units", "type": "string"}
  ],
  "name": "sevone_msg_schema",
  "type": "record"
}

The units field reports the indicator's data units, not its display units.
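
For illustration, a trimmed schema that exports only a subset of fields might look like the sketch below; any of the fields shown in the full schema above can be kept or removed, but the field names must match the supported fields exactly.

Example: Trimmed /etc/sevone/sdp/schema.json (sketch)
{
  "fields": [
    { "name": "deviceName", "type": "string"},
    { "name": "objectName", "type": "string"},
    { "name": "indicatorName", "type": "string"},
    { "name": "value", "type": "string"},
    { "name": "time", "type": "double"}
  ],
  "name": "sevone_msg_schema",
  "type": "record"
}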

Troubleshooting

SevOne Data Publisher aborts or behaves unexpectedly.

If SDP aborts or behaves unexpectedly, the following may help investigate the cause and resolve most of the commonly found problems.

  • check the log file, /var/log/sdp.log, to see if there are any warnings or errors

  • check the configuration file, /etc/sevone/sdp/config.yml, to see if any settings are misconfigured

  • call the status endpoint, GET /status, to get statistics of SDP internal operations

config.yml file is configured correctly but SDP is unable to parse it.

Use a YAML linter to validate your config.yml file. YAML is an indentation-sensitive format, and incorrect indentation is hard to catch by eye.
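
For example, assuming python3 with the PyYAML module is available on the appliance, the following one-liner reports the first parsing error it encounters, or prints OK if the file parses.

$ python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1])); print("OK")' /etc/sevone/sdp/config.yml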

I have run SevOne-act sdp enable and SDP is running, but data points are not arriving at my broker.

Please wait for a minute or so. SDP consumes objects from SevOne-datad. It takes SevOne-datad at most 1 minute to detect that SDP is running and then send objects to SDP.

The Pulsar broker is TLS-enabled. SDP is unable to connect to Pulsar and its memory usage is high.

Check your config.yml to see if you have mistakenly connected to pulsar using PLAINTEXT. If so, the memory usage is expected to be much higher (5GB-6GB) than normal (300MB) due to the official Pulsar Go client that SevOne uses.

The HTTPS server is enabled. Although SDP runs, the server does not start and I am unable to access the status endpoint.

Check the log file, /var/log/sdp.log, to see if the server certificate is loaded successfully. If not, SDP runs without launching the HTTPS server.

FAQs

Does SevOne Data Publisher publish data points from the entire NMS cluster?

No, SDP works at peer-level. That is, SDP publishes data points associated with the peer it is running on. If you want all data points from the entire NMS cluster, you may need to run SDP on all peers.

How does SevOne Data Publisher support HSA?

SevOne Data Publisher does not currently run on the HSA. When a failover occurs, it needs to be started manually.

Does SevOne Data Publisher streaming affect SevOne NMS's ability to poll SNMP data?

The configuration used by SevOne Data Publisher does not impact SevOne NMS's ability to poll data.

Can flow data (metrics and/or flows) be published via Kafka? If so, how can it be enabled?

Flow data (metrics and/or flows) cannot be published via Kafka. Flows are ingested by DNC whereas metrics are ingested via the PAS. SevOne Data Publisher does not see flows at all.

Due to the nature of flows and DNC scale consideration, it is best to redirect the flows to the receiving system because anything on DNC will likely impact the published scale numbers. DNCs are built for scale ingestion and not for publishing.

What to expect when migrating from SevOne Data Bus (SDB) to SevOne Data Publisher (SDP)?

SDP follows a fail-fast strategy, so you may run into a scenario where an invalid configuration is rejected immediately. One such example is an invalid publisher configuration (i.e., a specified certificate does not exist). SDB allowed it to run for a long time, silently skipped the error, did not publish any data points, and did not inform the user of the error or exit with a message. SDP aborts early, informs the user of the error, and exits right away.

Appendix: Configuration File

On SevOne NMS, the example-config.yml file is available in /etc/sevone/sdp.

Example: /etc/sevone/sdp/example-config.yml file
version: 1

log:
  # Accepted values: error, warn, info, debug
  level: info

cache:
  refreshPeriodSeconds: 1800
  mysqldata: /SevOne/appliance/settings/mysqldata.cnf

# Configure the kafka connection information
nms:
  kafka:
    url: 127.0.0.1:9092
    group: sdp_group

# Configure the SDP output format
sdp:
  outputFormat: avro
  clusterName: NMS
  includeDeviceOID: false
  schemaFile: /etc/sevone/sdp/schema.json
  # number of workers for live data publishing. Defaults to 10 workers if unspecified
  workers: 10

# Filters for the SDP output
# Note: SDP only exports data from the local peer, so these filters only apply to local data.
include-filters:
- name: default
  # Specify your filters as different elements in this array
  # by specifying an ID that you would like to be included.
  # A value of -1 is interpreted as any ID.

  # Each column in a given filter is combined with logical AND.
  # Multiple filters are combined with logical OR.
  rules:
  # Example: Include everything
  - { devGrpID: -1, devID: -1, objGrpID: -1, objID: -1, pluginID: -1 }
- name: allowlist1
  rules:
  # Example: Include only devices 5 and 6
  - { devGrpID: -1, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }

  # Example: Include device 5 only if it's in device group 2 and device 6
  # - { devGrpID: 2, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  # - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }

  # Example: Include only objects 2, 3, and 4 from device 5
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 2, pluginID: -1 }
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 3, pluginID: -1 }
  # - { devGrpID: -1, devID: 5, objGrpID: -1, objID: 4, pluginID: -1 }

exclude-filters:
- name: blocklist1
  # Specify your filters as different elements in this array
  # by specifying an ID that you would like to be excluded.
  # A value of -1 is interpreted as any ID.

  # Each column in a given filter is combined with logical AND.
  # Multiple filters are combined with logical OR.
  rules:
  # Example: Exclude everything
  - { devGrpID: -1, devID: -1, objGrpID: -1, objID: -1, pluginID: -1 }
- name: blocklist2
  rules:
  # Example: Exclude only devices 5 and 6
  - { devGrpID: -1, devID: 5, objGrpID: -1, objID: -1, pluginID: -1 }
  - { devGrpID: -1, devID: 6, objGrpID: -1, objID: -1, pluginID: -1 }

# Configure the settings for Schema Registry server if needed
schema-registry:
  # url: "http://SCHEMA_REGISTRY_SERVER_HOST:SCHEMA_REGISTRY_SERVER_PORT"

  # Subject for the schemas
  subject: sevone-data-publisher

status:
  # periodically (in seconds) print stats in the log if it's set to a positive integer. It's disabled by default.
  metricsLogInterval: 300
  http:
    # Configure the status HTTP page created by SDP
    # hosted at http://hostname:port/status
    enabled: true
    port: 8082
  https:
    # Configure the status HTTPS page created by SDP
    # hosted at https://hostname:secure_port/status
    enabled: false
    secure_port: 8443
    server_cert: /etc/sevone/sdp/server.crt
    server_key: /etc/sevone/sdp/server.key
    # private_key_password is an optional field
    private_key_password: password

# Output configuration
output:
  # Default settings for publishers, which can be overwritten by each publisher
  default:
    # Customize the message key format if needed. Available fields include:
    # deviceId, deviceName, deviceIp, peerId, objectId, objectName,
    # objectDesc, pluginId, pluginName, indicatorId, indicatorName,
    # format, value, time, clusterName, peerIp
    # Default format is "deviceId:objectId".
    key-fields:
    - deviceId
    - objectId
    key-delimiter: ":"
    # Default to be kafka
    # Allowed values are: kafka, pulsar
    type: kafka
    kafka-producer:
      acks: -1
      retries: 0
      linger.ms: 10
      batch.size: 1000000
      request.timeout.ms: 60000
      max.in.flight.requests.per.connection: 2
    pulsar-producer:
      batchingMaxMessages: 1000
      blockIfQueueFull: true
      sendTimeoutMs: 30000
  publishers:
  - name: default-producer
    type: kafka
    topic: sdp
    isLive: true
    # version: 0.10.0.0
    # Kafka producer configuration options.
    # See https://kafka.apache.org/documentation, section 3.3 Producer Configs
    producer:
      # If bootstrap.servers is not defined, SDP will look for the bootstrap.servers
      # defined in output.default.kafka-producer.
      # Example: <your-kafka-ip>:<port>
      bootstrap.servers: null
      ## SSL setup
      # security.protocol: SSL
      ## SSL Server Authentication
      # ssl.ca.cert.location: server.crt
      ## SSL Client Authentication
      # ssl.client.cert.location: client.crt
      # ssl.client.key.location: client.key
      # ssl.client.key.password: password
      ## SASL configuration
      # sasl.mechanism: GSSAPI
      # sasl.kerberos.service.name: kafka
      # sasl.username: username
      # sasl.password: password
      # sasl.gssapi.useKeyTab: true
      # sasl.gssapi.storeKey: true
      # sasl.gssapi.keyTab: /path/to/sdp.keytab
      # sasl.gssapi.principal: sdp
      # sasl.gssapi.realm: example.com
      # sasl.gssapi.kerberosconfpath: /etc/krb5.conf
    filters:
    - default
  # Pulsar producer configuration options.
  - name: default-pulsar-producer
    type: pulsar
    topic: sdp-pulsar
    tenant: public
    namespace: default
    topic-type: persistent
    isLive: true
    client:
      # Example: pulsar://<your-pulsar-ip>:<port>
      serviceUrl: null
      connectionTimeoutMs: 10000 # Milliseconds
      # useTls: true
      # tlsTrustCertsFilePath: /path/to/server.crt
      # tlsAllowInsecureConnection: false
      # authPluginClassName: org.apache.pulsar.client.impl.auth.AuthenticationTls
      # authParams: tlsCertFile:client.crt,tlsKeyFile:client.key
      # operationTimeoutMs: 30000 # Milliseconds
      # numIoThreads: 1
      # tlsHostnameVerificationEnable: false
      # keepAliveIntervalSeconds: 30 # Seconds
    producer:
      compressionType: ZLIB
      # batchingMaxPublishDelayMicros: 1000 # Microseconds
      # chunkingEnabled: false
    filters:
    - default