Configuring values.yaml

Configure the required parameters in the values.yaml file according to your Intelligent Data Store setup.

global:
  docker:
    #registry: xp-docker-releases.repo.cci.nokia.net
    registry: xp-docker-candidates.repo.cci.nokia.net
    xp_registry: "xp-docker-candidates.repo.cci.nokia.net"
    #registry: xp-docker-inprogress.repo.cci.nokia.net
    #registry: sandbox-docker-inprogress.repo.cci.nokia.net
    pullPolicy: Always

  #Cassandra Properties
  cassandra:
    #Comma-separated list: <host1>:<port>,<host2>:<port>
    nodes: "cassandra-ccas-dse:9042"
    localDatacenter: "datacenter1"
    keyspace: "impact_ids"
    adminUsername: "casadmin"
    adminPassword: "mfrug"
    userName: "casadmin"
    userPassword: "mfrug"
    passwordEncrypted: 0
    dataEncryptionEnabled: false

    # For a single node, set the strategy to "SimpleStrategy"; for a multi-node cluster, set it to "NetworkTopologyStrategy"
    strategy: "SimpleStrategy"

    # For single node DB
    #     Set the replication factor to the number of nodes in the cluster when using SimpleStrategy.
    #     Ex: replication: "'replication_factor' : '3'"
    # For Multi Node cluster
    #     Get the topology details from the DBA or if it is a CSF deployment,
    #     check the strategy used for demo or system_auth keyspace.
    #     Ex: - Single - DC
    #         replication: "'dc_east': '3'"
    #         - Multi - DC
    #         replication: "'dc_east': '3', 'dc_west': '3'"
    replication: "'replication_factor': '1'"

  # The time zone to be used by the application.
  # Make sure that the time zone used is the same as that of Cassandra.
  timeZone: "Asia/Kolkata"

  # Any deployment-specific java_opts to be configured
  javaopts: ""

  #IDS uses CDP for authenticating users. The idmUrl should point to the CDP service.
  authentication:
    idmUrl: "http://cdp:8080"
  license:
    impactServerUrl: "http://impactapi"
    impactServerPort: "9090"
    impactConnectReleaseName: "impact-connect"

  broker:
    # Comma-separated list of RabbitMQ host:port or service:port entries.
    hosts: "rabbitmq:5672"
    # username is the RabbitMQ username
    username: "impact"
    # password is the RabbitMQ password. Mandatory value; must be set externally.
    password: "impact123"
    # virtualHost is the RabbitMQ vhost
    virtualHost: "/"
  #Kafka commons
  kafka:
    # Example bootstrapServers: "PLAINTEXT://kf-impact-connect-headless.default.svc.cluster.local:9092"
    # or "SSL://kf-impact-connect-headless.default.svc.cluster.local:9092"
    bootstrapServers: "PLAINTEXT://kf-impact-connect-headless.default.svc.cluster.local:9092"
  schema-registry:
    # Schema registry service and port
    # e.g. http://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081
    # or https://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081
    url: "http://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081"

provisioning:
  image:
    imageRepo: "xp-provisioning"
    imageTag: ${helm.app.version}
    imagePullPolicy: "IfNotPresent"
  jobs:
    backoffLimit: 6
  cdp:
    config:
      CDP_URL: "http://cdp"
      CDP_PORT: "8080"
  CONNECT_SERVICE_NAME: "ckaf-kc-impact-connect"
  CONNECT_SERVICE_PORT: "8083"
  connector:
    # The Kafka Connect and schema registry URL values below are populated automatically
    # when the internal Kafka Connect and schema registry are used; leave them empty in that case.
    CONNECT_SVC_HOST: ""
    CONNECT_PORT: ""
    SCHEMA_REGISTRY_URL: ""
    # The following are the RabbitMQ exchange and queue configuration properties through which data from IMPACT DC is read
    rabbitmqExchange: "impact.externalnotifier"
    # Defines the level in the dot-separated group hierarchy after which a group is considered an enterprise.
    CONNECTOR_XP_TENANT_GROUP_BY_LEVEL: "0"
    rabbitmqQueue:
      # Note: the queue name is taken from CONNECTOR_RABBITMQ_QUEUE in the rabbitmq connector config below
      # x_message_ttl is the time-to-live of a message in the queue, in milliseconds. It must be a non-negative integer.
      x_message_ttl: "86400000"
      # x_max_length_bytes is the maximum number of bytes the queue will hold. It must be a non-negative integer.
      x_max_length_bytes: "536870912"
      # x_overflow controls how the queue behaves once the number of bytes in the queue reaches the x_max_length_bytes limit.
      # Two options are available: 1. drop-head 2. reject-publish
      # drop-head deletes the oldest messages so that new messages can still be added once the limit is reached
      # reject-publish rejects new messages once the queue limit has been reached
      x_overflow: "reject-publish"
    # Below are the connector properties that are used to start the connector.
    rabbitmq:
      CONNECTOR_NAME: "rabbitmq-source-connector"
      CONNECTOR_TASKS_MAX: "6" #should be same as no. of instances of ckaf-connect
      CONNECTOR_CONNECTOR_CLASS: "com.nokia.impactxp.kafka.connect.rabbitmq.RabbitMQSourceConnector"
      CONNECTOR_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECTOR_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECTOR_RABBITMQ_QUEUE: "xpqueue"
      CONNECTOR_RABBITMQ_HOST: "rabbitmq.default.svc.cluster.local"
      CONNECTOR_RABBITMQ_PORT: "5672"
      CONNECTOR_RABBITMQ_MANAGEMENT_PORT: "15672"
      CONNECTOR_KAFKA_TOPIC_XP_LIFECYCLE: "impact.lifecycle"
      CONNECTOR_KAFKA_TOPIC_XP_OBSERVE_NOTIFY: "impact.observeNotify"
      CONNECTOR_KAFKA_TOPIC_XP_RESPONSE: "impact.response"
      CONNECTOR_KAFKA_TOPIC_XP_MONTE: "impact.monte"
      CONNECTOR_KAFKA_TOPIC_XP_SMS: "impact.sms"
      CONNECTOR_RABBITMQ_PREFETCH_COUNT: "10000"
      CONNECTOR_ERRORS_RETRY_TIMEOUT: "10000"
      CONNECTOR_ERRORS_RETRY_DELAY_MAX_MS: "2000"
      CONNECTOR_ERRORS_TOLERANCE: "all"
      CONNECTOR_ERRORS_LOG_ENABLE: "true"
      CONNECTOR_RABBITMQ_USERNAME: "impact"
      CONNECTOR_RABBITMQ_PASSWORD: "impact123"
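    # Illustration (hypothetical mapping): values prefixed with CONNECTOR_ typically
    # end up as properties in a standard Kafka Connect REST request, with the prefix
    # dropped and underscores mapped to dots. The exact translation is performed by
    # the provisioning job; a sketch of the resulting POST /connectors payload:
    #   {
    #     "name": "rabbitmq-source-connector",
    #     "config": {
    #       "connector.class": "com.nokia.impactxp.kafka.connect.rabbitmq.RabbitMQSourceConnector",
    #       "tasks.max": "6",
    #       "rabbitmq.queue": "xpqueue",
    #       "errors.tolerance": "all"
    #     }
    #   }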
    s3:
      CONNECTOR_NAME: "s3-sink-connector"
      CONNECTOR_TASKS_MAX: "6" #should be same as no. of instances of ckaf-connect
      CONNECTOR_CONNECTOR_CLASS: "com.nokia.connect.s3.NokiaS3SinkConnector"
      CONNECTOR_S3_CREDENTIALS_PROVIDER_CLASS: "com.nokia.connect.s3.credentials.NokiaAwsCredentialsProviderChain"
      CONNECTOR_VALUE_CONVERTER_SCHEMAS_ENABLE: "true"
      CONNECTOR_KEY_CONVERTER_SCHEMAS_ENABLE: "false"
      CONNECTOR_TOPICS_REGEX: "impact.*"
      CONNECTOR_S3_BUCKET_NAME: "impact-connect-bucket"
      CONNECTOR_S3_REGION: "ap-southeast-1"
      CONNECTOR_S3_PART_RETRIES: "22"
      CONNECTOR_S3_RETRY_BACKOFF_MS: "200"
      CONNECTOR_S3_ACL_CANNED: "private"
      CONNECTOR_S3_PROXY_URL: ""
      CONNECTOR_STORAGE_CLASS: "io.confluent.connect.s3.storage.S3Storage"
      CONNECTOR_TOPICS_DIR: ""
      CONNECTOR_PARTITIONER_CLASS: "com.nokia.connect.s3.partitioner.S3TimeBasedPartitioner"
      CONNECTOR_FLUSH_SIZE: "100000"
      CONNECTOR_PATH_FORMAT: "'year'=YYYY/'month'=MM/'day'=dd/'hour'=HH"
      CONNECTOR_LOCALE: "US"
      CONNECTOR_STORE_URL: "http://s3-ap-southeast-1.amazonaws.com"
      CONNECTOR_TIMEZONE: "Asia/Kolkata"
      CONNECTOR_PARTITION_DURATION_MS: "900000"
      CONNECTOR_TIMESTAMP_EXTRACTOR: "RecordField"
      CONNECTOR_TIMESTAMP_FIELD: "serverTime"
      CONNECTOR_MULTITENANCY_ENABLED: "true"
      CONNECTOR_FORMAT_CLASS: "com.nokia.connect.s3.format.ParquetFormat"
      CONNECTOR_SCHEMA_COMPATIBILITY: "NONE"
      CONNECTOR_PARQUET_CODEC: "snappy"
      CONNECTOR_ERRORS_RETRY_TIMEOUT: "10000"
      CONNECTOR_ERRORS_RETRY_DELAY_MAX_MS: "2000"
      CONNECTOR_ERRORS_TOLERANCE: "all"
      CONNECTOR_ERRORS_LOG_ENABLE: "true"
      CONNECTOR_ERRORS_DEADLETTERQUEUE_TOPIC_NAME: "errors"
      CONNECTOR_ERRORS_DEADLETTERQUEUE_TOPIC_REPLICATION_FACTOR: "2"
      CONNECTOR_ERRORS_DEADLETTERQUEUE_CONTEXT_HEADERS_ENABLE: "true"
      CONNECTOR_ROTATE_SCHEDULE_INTERVAL_MS: "300000"
    # Below are the cassandra connector properties that are used to start the connector.
    cassandra:
      CONNECTOR_NAME: "cassandra-sink-connector"
      CONNECTOR_CONNECTOR_CLASS: "com.nokia.ids.connect.cas.sink.NokiaCassandraSinkConnector"
      CONNECTOR_TASKS_MAX: "1"
      CONNECTOR_CONFIG_ACTION_RELOAD: "restart"
      CONNECTOR_TOPICS_REGEX: "impact.*"
      CONNECTOR_FLUSH_SIZE: "100"
      CONNECTOR_ROTATE_INTERVAL_MS: "-1"
      CONNECTOR_SCHEMA_CACHE_SIZE: "1000"
      CONNECTOR_ENHANCED_AVRO_SCHEMA_SUPPORT: "false"
      CONNECTOR_CONNECT_META_DATA: "true"
      CONNECTOR_RETRY_BACKOFF_MS: "5000"
      CONNECTOR_FILENAME_OFFSET_ZERO_PAD_WIDTH: "10"
      CONNECTOR_ERRORS_DEADLETTERQUEUE_TOPIC_REPLICATION_FACTOR: "2"
      CONNECTOR_ROTATE_SCHEDULE_INTERVAL_MS: "300000"

    sftp:
      CONNECTOR_NAME: "sftp-source-connector"
      CONNECTOR_CONNECTOR_CLASS: "com.nokia.ids.connect.remote.RemoteFileSourceConnector"
      CONNECTOR_TASKS_MAX: "1"  #should always be set to "1"
      CONNECTOR_REMOTE_HOST: "localhost"
      CONNECTOR_REMOTE_PORT: "22"
      CONNECTOR_REMOTE_USERNAME: "user"
      CONNECTOR_REMOTE_PASSWORD: "pass"
      CONNECTOR_TOPIC: "enterprise.billing"
      CONNECTOR_INPUT_PATH: "./files"
      CONNECTOR_INPUT_FILE_PATTERN: ".*\\.csv"
      CONNECTOR_INPUT_FILE_FORMAT: "CSV"
      CONNECTOR_FINISHED_PATH: "./finished"
      CONNECTOR_ERROR_PATH: "./error"
      # DELETE or MOVE
      CONNECTOR_CLEANUP_STRATEGY: "DELETE"
      CONNECTOR_KEY_SCHEMA: ""
      # The order of fields in the value schema should match the order of columns in the CSV
      CONNECTOR_VALUE_SCHEMA: "{\"type\":\"record\",\"name\":\"ImpactBillingValue\",\"fields\":[{\"name\":\"TenantName\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Type\",\"type\":\"string\"},{\"name\":\"SubjectIdentity\",\"type\":\"string\"},{\"name\":\"ActionIdentity\",\"type\":\"string\"},{\"name\":\"RequestTimestamp\",\"type\":{\"type\":\"long\",\"connect.version\":1,\"connect.name\":\"org.apache.kafka.connect.data.Timestamp\",\"logicalType\":\"timestamp-millis\"}},{\"name\":\"ActionTimestamp\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Role\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Action\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Result\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"RequestId\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"SubscriberId\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"NetworkId\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"CustomerId\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Manufacturer\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Model\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"PacketSize\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Protocol\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"JobType\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"JobId\",\"type\":[\"null\",\"int\"],\"default\":null},{\"name\":\"ResourceName\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"URI\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"RequestMethod\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"ChargingId\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"Reason\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"URL\",\"type\":[\"null\",\"string\"],\"default\":null},{\"name\":\"ResponseCode\",\"type\":[\"null\",\"int\"],\"default\":null},{\"name\":\"NumberOfRetries\",\"type\":[\"null\",\"int\"],\"default\":null}]}"
      CONNECTOR_BATCH_SIZE: "1000"
      # Sleep time between polls that return no records.
      CONNECTOR_POLL_DURATION: "60000"
      # If not provided, the default is the time zone of the connect pod. Uncomment this line only if it must be set to a different value.
      # CONNECTOR_PARSER_TIMESTAMP_ZONE: ""
      CONNECTOR_PARSER_TIMESTAMP_FORMATS: "yyyyMMddHHmm+ss"
      # Skip header line
      CONNECTOR_CSV_SKIP_LINES: "1"
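      # Illustration (hypothetical data): with the value schema above, each CSV row
      # must list its columns in the same order as the schema fields. First five
      # columns shown; the header row is skipped via CONNECTOR_CSV_SKIP_LINES:
      #   TenantName,Type,SubjectIdentity,ActionIdentity,RequestTimestamp,...
      #   acme,API,subscriber-01,action-42,1672531200000,...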

    cassandraSink:
      CONNECTOR_NAME: "cassandra-sink-connector-additional"
      CONNECTOR_CONNECTOR_CLASS: "com.nokia.ids.connect.cassandra.CassandraSinkConnector"
      CONNECTOR_TASKS_MAX: "1"
      CONNECTOR_CONFIG_ACTION_RELOAD: "restart"
      CONNECTOR_TOPICS_REGEX: "enterprise.billing"
      CONNECTOR_LOADERS: "billing"
      CONNECTOR_BILLING_TOPIC_REGEX: "enterprise.billing"
      # Either getEnterpriseByLevel.value.<Kafka field containing the fully qualified group name> OR constant.<exact keyspace name>. With constant, all records are inserted into the given keyspace rather than into a keyspace determined by the tenant
      CONNECTOR_BILLING_KEYSPACE: "getEnterpriseByLevel.value.TenantName"
      CONNECTOR_BILLING_LTS_TABLENAME: "billing_event_lts"
      # <Cassandra column name>:value.<Kafka field name>
      CONNECTOR_BILLING_LTS_MAPPING: "pid:timePartitioner.value.RequestTimestamp,serverTime:value.RequestTimestamp,groupName:value.TenantName,type:value.Type,subjectIdentity:value.SubjectIdentity,actionIdentity:value.ActionIdentity,actionTime:value.ActionTimestamp,role:value.Role,action:value.Action,result:value.Result,requestId:value.RequestId,subscriberId:value.SubscriberId,networkId:value.NetworkId,customerId:value.CustomerId,manufacturer:value.Manufacturer,model:value.Model,packetSize:value.PacketSize,protocol:value.Protocol,jobType:value.JobType,jobId:value.JobId,resourceName:value.ResourceName,uri:value.URI,requestMethod:value.RequestMethod,chargingId:value.ChargingId,reason:value.Reason,url:value.URL,responseCode:value.ResponseCode,numberOfRetries:value.NumberOfRetries"
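      # Reading one entry above: "pid:timePartitioner.value.RequestTimestamp" maps the
      # Cassandra column pid to the Kafka field RequestTimestamp with the timePartitioner
      # function applied (by analogy with getEnterpriseByLevel above), while a plain
      # entry such as "type:value.Type" copies the field Type unchanged.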

secret:
   name: impact-ids-secrets

dataFeed:
  # To enable delivery of messages to the target AWS S3, set to true or false based on the requirement.
  s3FeedEnabled: true
  # To enable receiving of messages from the source SFTP, set to true or false based on the requirement.
  # Used for billing.
  sftpConnectorEnabled: false
  # To enable storing of messages to Cassandra for topics other than those covered by the current cassandra connector, set to true or false based on the requirement.
  # Used for billing.
  cassandraAdditionalConnectorEnabled: false

userFilter:
  # Specify, as a YAML list, the users whose data must be read from IMPACT.
  # E.g. - "All" or - "None", or multiple users as below:
  # - "user1"
  # - "user2"
  # See https://yaml.org/spec/1.1/#id857181 for syntax.
  # Options are "All" or "None" (to read all or no tenants' data) or a YAML list of the tenant admins to read from.
  - "All"

ingress:
  host: impact-edge-01

logger:
  rootLogLevel: INFO
  logLevel: DEBUG

version: ${helm.app.version}
replicas: 3
resources:
  limits:
    memory: "2048Mi"
    cpu: 1500m
  requests:
    memory: "2048Mi"
    cpu: 1500m
rbac_enabled: true
dataCleanUpJobCronExpression: '0 ${random.int[0,59]} 0 * * *'
licenseExpiryCheckCronExpression: 0 0 0/2 * * ?
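# Quartz cron fields, left to right: seconds, minutes, hours, day-of-month, month,
# day-of-week. So "0 0 0/2 * * ?" above fires at second 0, minute 0 of every second
# hour, and "0 ${random.int[0,59]} 0 * * *" fires once between 00:00 and 00:59 at a
# minute chosen when the property is resolved.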
apiBillingEnabled: false

# Properties related to rules engine
rulesengine:
  enabled: true
  # When enabled, the default rules are provisioned for an enterprise on creation.
  provisionDefaultRules: false
  topic:
    value:
      # Should match the value converter used in Kafka Connect. Eg: AVRO should be used if AvroConverter is configured in Connect.
      format: AVRO
    key:
      # Should match the key converter used in Kafka Connect. To support multiple keys in KSQLDB, configure it as JSON. Even if StringConverter is configured in Connect, this property should be JSON.
      format: JSON
    # Partitions and replication factor for the kafka topics.
    partitions: 24
    replication_factor: 2

visualization:
  # When enabled, grafana will be provisioned with organization, user, etc.
  enabled: true
  # When enabled, the default dashboard will be imported into grafana during enterprise creation.
  importDefaultDashboards: false
  # URL of IDS API. This should be accessible from grafana installation. Default is "http://impact-ids-service:8080/"
  idsUrl: "http://impact-ids-service:8080/"
  grafana:
    # URL of grafana service
    url: "http://impact-grafana-cpro-grafana"
    # Admin username of grafana
    adminUser: "admin"
    # Admin password of grafana
    adminPassword: "admin"
  
# Properties related to ids rule engine service
ids-re-service:
  replicas: 3
  resources:
    limits:
      memory: "1024Mi"
      cpu: 500m
    requests:
      memory: "512Mi"
      cpu: 500m
  idsAppServiceURL: http://impact-ids-internal-service:8081
  threadsPerStreamingApp: 3

# CKAF KSQL properties
ckaf-ksql:
  KafkaKsql:
    security:
      kafka:
        sasl:
          enabled: false
          #possible values "GSSAPI" or "PLAIN"
          mechanism: "PLAIN"
        ssl:
          enabled: false
      schema:
        ssl:
          enabled: false
        # If schema registry basic auth is enabled, the client must use SASL_INHERIT as the credential source:
        # set the flag "saslInheritAuthentication" to true
        basicAuth:
          saslInheritAuthentication: false
      rest:
        ssl:
          enabled: false
  resources:
    limits:
      cpu: 1000m
      memory: 4096Mi
      ephemeral-storage: 4G
    requests:
      cpu: 1000m
      memory: 4096Mi
      ephemeral-storage: 4G
  replicaCount: 3
  servicePort: 8088
  loadPluginsFromInitContainer: true
  pluginsInitContainerResources:
    requests:
      cpu: 200m
      memory: 1Gi
      ephemeral-storage: 1G
    limits:
      cpu: 400m
      memory: 1Gi
      ephemeral-storage: 1G
  #Point to the repository containing the ids docker images.
  pluginsInitContainerImageName: "xp-docker-candidates.repo.cci.nokia.net/ids-ksql-functions"
  pluginsInitContainerImageTag: ${helm.app.version}
  pluginsBasePath: "/opt/impact-ids/jars/"

#Grafana for Visualizing IDS Data.
cpro-grafana:
  enabled: true
  helm3: true
  adminUser: admin
  adminPassword: admin
  name: grafana
  appTitle: "Impact IDS Dashboard"
  readOnlyRootFilesystem: false
  # The Infinity Datasource plugin is required for visualizing IDS data.
  # Download the plugins from here:
  # https://grafana.com/api/plugins/yesoreyeram-infinity-datasource/versions/0.8.0/download
  # https://github.com/marcusolsson/grafana-dynamictext-panel/releases/download/v1.9.0/marcusolsson-dynamictext-panel-1.9.0.zip
  # The plugins should be accessible from this environment.
  # NOTE: Plugins must be in .tar.gz format; otherwise they are ignored and not installed.
  pluginUrls:
    - http://10.99.55.89/Images/yesoreyeram-infinity-datasource-0.8.0.tar.gz
    - http://10.99.55.89/Images/marcusolsson-dynamictext-panel-1.9.0.tar.gz
    - http://10.99.55.89/Images/briangann-datatable-panel-1.0.3.tar.gz
    - http://10.99.55.89/Images/briangann-gauge-panel-0.0.9.tar.gz
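  # One way to repackage a plugin that ships as .zip (such as the dynamictext
  # release above) into the required .tar.gz, assuming standard unzip/tar tools:
  #   unzip marcusolsson-dynamictext-panel-1.9.0.zip -d dynamictext
  #   tar -czf marcusolsson-dynamictext-panel-1.9.0.tar.gz -C dynamictext .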
  replicas: 3
  resources:
    limits:
      cpu: 500m
      memory: 512Mi
    requests:
      cpu: 100m
      memory: 128Mi
  ingress:
    enabled: true
    path: /grafana/?(.*)
    hosts:
      - "ids-dashboard.impact.nokia.com"
    # Part of the CSF Grafana https-to-http change below: the backend is served over plain http
    annotations:
      nginx.ingress.kubernetes.io/secure-backends: "false"
  certManager:
    used: true
  # Change CSF Grafana server protocol from https to http - Start
  grafana_ini:
    database:
      password: Pas$1234
    server:
      protocol: http
  scheme: http
  livenessProbe:
    scheme: HTTP
  readinessProbe:
    scheme: HTTP
  # Change CSF Grafana server protocol from https to http - End
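After the values are set, the chart can be installed or upgraded with the standard Helm workflow; the release name, chart reference, and namespace below are placeholders:

helm upgrade --install impact-ids <ids-chart> -f values.yaml -n <namespace>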

Table 1. values.yaml properties

| Parameter | Description | Values |
| --- | --- | --- |
| global:docker:registry | The registry where the docker images are hosted. | default: xp-docker-releases.repo.cci.nokia.net |
| global:timeZone | The time zone to be used by the services. This must be the same time zone used by Cassandra. | default: Asia/Kolkata |
| global:javaopts | Additional deployment-specific JVM tuning parameters. | -XX:MinRAMPercentage=50.0 |
| global:docker:pullPolicy | The policy used to pull the docker image: Always (always pull the image), IfNotPresent (pull the image only if it does not already exist on the node), or Never (never pull the image). | default: Always |
| global:cassandra:nodes | A comma-separated list of Cassandra hosts, in <host>:<port> format. | 10.75.205.139:9042,10.75.205.140:9042 |
| global:cassandra:localDatacenter | Local datacenter name of the Cassandra cluster. | datacenter1 |
| global:cassandra:keyspace | Specifies the keyspace under which the Intelligent Data Store schema is created. | impact_ids |
| global:cassandra:adminUsername | Specifies the Cassandra admin username. | casadmin |
| global:cassandra:adminPassword | Specifies the Cassandra admin password. | motive |
| global:cassandra:userName | The application username with which the Intelligent Data Store application accesses the Cassandra database. | iduser |
| global:cassandra:userPassword | The application password with which the Intelligent Data Store application accesses the Cassandra database. | password |
| global:cassandra:strategy | The replication strategy. Leave as SimpleStrategy for a single-node instance; set to NetworkTopologyStrategy for a multi-node Cassandra cluster. | default: SimpleStrategy |
| global:cassandra:replication | Specifies the number of database replicas required. | 'replication_factor': '1' |
| global:cassandra:dataEncryptionEnabled | Specifies whether the data stored in the tables must be encrypted. If set to true, all data stored in the tables is encrypted using the system key. Before enabling this feature, make sure that Cassandra has a system key generated and TDE is enabled. | default: false |
| global:license:impactServerUrl | URL of the IMPACT server, used to fetch the license. | default: http://impactapi |
| global:license:impactServerPort | Port of the IMPACT server, used to fetch the license. | default: 9090 |
| global:license:impactConnectReleaseName | Helm release name of the Impact Connect deployment. | default: impact-connect |
| resources:limits:memory | Indicates the maximum amount of memory that a pod can use. | 2048Mi |
| resources:limits:cpu | Indicates the maximum number of CPUs that a pod can use. Can be fractional. | 1500m |
| resources:requests:memory | Indicates the minimum amount of memory that must be available on the cluster for the pod to be scheduled. | 1024Mi |
| resources:requests:cpu | Indicates the minimum number of CPUs that must be available on the cluster for the pod to be scheduled. Can be fractional. | 1500m |
| global:broker:hosts | Comma-separated list of RabbitMQ host:port or service:port entries. | rabbitmq:5672 |
| global:broker:username | Username of RabbitMQ. | impact |
| global:broker:password | Password of RabbitMQ. This is a mandatory value and must be set externally. | impact123 |
| global:broker:virtualHost | The vhost of RabbitMQ. | / |
| version | The version of the docker image that is used. | default: release version |
| replicas | The number of replicas of the pod to run. Decide this based on the sizing calculator. | default: 1 |
| rbac_enabled | Runs the pods as a non-root user (recommended). | default: true |
| ingress:host | Hostname to be configured for ingress. Can be empty, in which case the default hostname configured in the ingress controller is used. | impact-edge-01 |
| dataCleanUpJobCronExpression | Quartz scheduling expression that schedules the job which cleans up data older than the retention period, per enterprise. | 0 ${random.int[0,59]} 0 * * * (runs at a random minute within the hour after midnight) |
| licenseExpiryCheckCronExpression | Schedules how often the license expiry is checked. | default: 0 0 0/2 * * ? (runs every 2 hours) |
| apiBillingEnabled | Enables the API Billing feature. | default: false |
| logger:rootLogLevel | Root logger level. | default: INFO |
| logger:logLevel | Logger level for Intelligent Data Store. | default: DEBUG |
| rulesengine:enabled | Enables the rules engine feature. | default: true |
| rulesengine:provisionDefaultRules | Controls whether the default rules are provisioned when an enterprise is created. | default: false |
| rulesengine:topic:value:format | Value format for the Kafka topic. Should match the value converter used in Kafka Connect; for example, AVRO should be used if AvroConverter is configured in Connect. | default: AVRO |
| rulesengine:topic:key:format | Key format for the Kafka topic. To support multiple keys in KSQLDB, configure it as JSON, even if StringConverter is configured in Connect. | default: JSON |
| rulesengine:topic:partitions | Number of partitions for the Kafka topics. | default: 24 |
| rulesengine:topic:replication_factor | Replication factor for the Kafka topics. | default: 2 |
| ids-re-service:replicas | Number of replicas of the service to run. | default: 3 |
| ids-re-service:idsAppServiceURL | The URL of the Intelligent Data Store internal service. | Example: http://impact-ids-internal-service:8081 |
| ids-re-service:threadsPerStreamingApp | The number of threads to be spawned per streaming app. | default: 3 |
| ids-re-service:kafka:bootStrapServers | The bootstrap server URL of Kafka. | Example: PLAINTEXT://kf-impact-connect-headless.default.svc.cluster.local:9092 |
| ids-re-service:schema-registry:url | The URL of the schema registry. | Example: http://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081 |
| ckaf-ksql:replicaCount | The number of replicas of the KSQL pod to run. Decide this based on the sizing calculator. | default: 3 |
| global:kafka:bootstrapServers | Kafka service to bootstrap the KSQL server. | Example: "PLAINTEXT://kf-impact-connect-headless.default.svc.cluster.local:9092" or "SSL://kf-impact-connect-headless.default.svc.cluster.local:9092" |
| global:schema-registry:url | Schema registry service and port. | Example: http://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081 or https://impact-connect-ckaf-schema-registry-headless.default.svc.cluster.local:8081 |
| ckaf-ksql:servicePort | Port of the KSQL server. | default: 8088 |
| visualization:enabled | Enables the visualization feature. | default: true |
| visualization:importDefaultDashboards | Imports the default grafana dashboards when creating an enterprise. | default: false |
| visualization:idsUrl | URL of the Intelligent Data Store API. Must be accessible from the grafana installation. | default: http://impact-ids-service:8080/ |
| visualization:grafana:url | URL of the grafana service. Must be accessible from Intelligent Data Store. | Example: http://impact-grafana-cpro-grafana |
| visualization:grafana:adminUser | Administrator username of grafana. | default: admin |
| visualization:grafana:adminPassword | Administrator password of grafana. | default: admin |
| cpro-grafana:enabled | If enabled, cpro-grafana is installed as part of the Intelligent Data Store chart. Note: set this parameter to true when an Intelligent Data Store specific grafana must be deployed for the visualization feature; to use an externally configured grafana under the visualization settings instead, set it to false. | default: true |
The following connector properties are used by the Kafka Connect connectors configured above. The Modifiable column indicates whether the value may be changed.

| Parameter | Description | Modifiable | Value |
| --- | --- | --- | --- |
| CONNECTOR_CONNECTOR_CLASS | The fully qualified name of the class where the connector customization is implemented. | No | com.nokia.ids.connect.cas.sink.NokiaCassandraSinkConnector |
| CONNECTOR_TASKS_MAX | The maximum number of tasks created for the connector instance. The connector can create fewer tasks if it cannot achieve this level of parallelism. | Yes | 1 |
| CONNECTOR_CONFIG_ACTION_RELOAD | The action that Connect must take on the connector when changes in external configuration providers result in a change in the connector's configuration properties. A value of none indicates that Connect does nothing. A value of restart indicates that Connect must restart/reload the connector with the updated configuration properties. The restart may be scheduled in the future if the external configuration provider indicates that a configuration value will expire later. | Yes | restart |
| CONNECTOR_TOPICS | List of topics to consume, separated by commas. | No | impact.lifecycle, impact.observeNotify, impact.response, impact.djr |
| CONNECTOR_FLUSH_SIZE | The number of records written to the store before invoking file commits. | Yes | 100 |
| CONNECTOR_ROTATE_INTERVAL_MS | The time interval, in milliseconds, at which to invoke file commits. This ensures that file commits are invoked at every configured interval, which is useful when the data ingestion rate is low and the connector has not written enough messages to commit files. The default value -1 means that this feature is disabled. | Yes | -1 |
| CONNECTOR_SCHEMA_CACHE_SIZE | The size of the schema cache used in the Avro converter. | Yes | 1000 |
| CONNECTOR_ENHANCED_AVRO_SCHEMA_SUPPORT | Enables enhanced Avro schema support in AvroConverter: enum symbol preservation and package name awareness. | No | false |
| CONNECTOR_CONNECT_META_DATA | Allows the Connect converter to add metadata to the output schema. | Yes | true |
| CONNECTOR_RETRY_BACKOFF_MS | The retry backoff in milliseconds. Used to notify Kafka Connect to retry delivering a message batch or performing recovery in case of transient exceptions. | Yes | 5000 |
| CONNECTOR_FILENAME_OFFSET_ZERO_PAD_WIDTH | The width to zero-pad offsets in the store's filenames when offsets are too short, in order to provide fixed-width filenames that can be ordered by simple lexicographic sorting. | Yes | 10 |
| CONNECTOR_CASSANDRA_CONTACT_POINTS | A comma-separated list of host:port values at which Cassandra is reachable. | Yes | localhost:9042,localhost:9142 |
| CONNECTOR_CASSANDRA_KEYSPACE | The keyspace to write to. |  | ids_keyspace |
| CONNECTOR_CASSANDRA_ADMIN | The admin username Cassandra is configured with. | Yes | admin_user |
| CONNECTOR_CASSANDRA_ADMIN_PASSWORD | The admin password Cassandra is configured with. | Yes | admin_pass |
| CONNECTOR_CASSANDRA_USERNAME | The username to connect to Cassandra with. | Yes | ids_user |
| CONNECTOR_CASSANDRA_PASSWORD | The password to connect to Cassandra with. | Yes | ids_pass |
| CONNECTOR_ERRORS_DEADLETTERQUEUE_TOPIC_REPLICATION_FACTOR | The replication factor used to create the dead letter queue topic when it does not already exist. | Yes | 2 |
| CONNECTOR_ROTATE_SCHEDULE_INTERVAL_MS | The time interval, in milliseconds, at which to periodically invoke file commits. Commits are invoked at every configured interval, adjusted to 00:00 of the selected time zone, and performed at the scheduled time regardless of the previous commit time or the number of messages. This is useful when data must be committed based on current server time, for example at the beginning of every hour. The default value -1 means that this feature is disabled. | Yes | 300000 |
| CONNECTOR_XP_TENANT_GROUP_BY_LEVEL | Decides the tenant group depending on the value configured. | Yes | 0 |
| CONNECTOR_NAME | The name of the connector defined in the application. | No | cassandra-sink-connector |