Scale limits for functions

Concurrent session limits

NSP supports a combined concurrent session limit of 125. The following table defines the concurrent session limits of individual NSP functions within the global, NSP-wide session limit.

Table 5-6: Concurrent session limits for NSP functions

Function | Maximum number of concurrent sessions
Analytics reports | 10
Concurrent NE sessions on NFM-P managed nodes | 100
Concurrent NE sessions on MDM managed nodes | 100

Scale limits for Kafka event notifications

Kafka event notification supports a maximum of 200 OSS subscriptions through the NBI. Within that global limit, Alarm Management supports a maximum of 5 OSS subscriptions.
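
For illustration only, the sketch below shows one such OSS subscription as a Kafka consumer using the kafka-python client. The topic name, broker address, and group ID are hypothetical placeholders, not NSP-defined values.

```python
# Illustrative sketch: one OSS consumer of NSP NBI event notifications.
# Topic, broker, and group ID are hypothetical placeholders.
import json

from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "nsp-fault-alarms",                 # hypothetical topic name
    bootstrap_servers="nsp.example.com:9092",
    group_id="oss-alarm-subscriber-1",  # one of at most 5 Alarm Management subscriptions
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

# Each subscription like this one counts against the global limit of 200.
for record in consumer:
    print(record.value)
```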

Scale limits for Telemetry

Scale limits for MDM Telemetry (multi-vendor SNMP)

MDM Telemetry data collection is limited by the maximum number of OSS subscriptions supported by Kafka. The maximum number of Telemetry notifications per second is 1500 per active MDM instance, where one update of a Telemetry record (a collection of statistics counters) equals one Telemetry notification.
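
As a sizing illustration, the following sketch checks a collection plan against this ceiling; the plan numbers are hypothetical, not NSP guidance.

```python
# Check a planned MDM Telemetry collection against the per-instance
# ceiling of 1500 notifications/s (one Telemetry record update equals
# one notification). Plan numbers below are hypothetical.
MAX_NOTIFICATIONS_PER_SEC = 1500  # per active MDM instance

def required_rate(num_nes: int, records_per_ne: int, interval_s: int) -> float:
    """Average notifications/s produced by one collection plan."""
    return num_nes * records_per_ne / interval_s

rate = required_rate(num_nes=500, records_per_ne=30, interval_s=60)
print(f"{rate:.0f}/s of {MAX_NOTIFICATIONS_PER_SEC}/s budget "
      f"-> fits: {rate <= MAX_NOTIFICATIONS_PER_SEC}")  # 250/s -> fits: True
```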

Scale limits for Cloud Native Telemetry (gNMI)

Cloud Native Telemetry data collection is limited by the maximum number of OSS subscriptions supported by Kafka. The maximum number of Telemetry notifications per second is 1500 per CN gNMI Collector instance, where one update of a Telemetry record (a collection of statistics counters) equals one Telemetry notification.

Telemetry data collection is also limited to a maximum of 2000 NEs per CN gNMI Collector instance.
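
Both per-instance limits constrain how many collector instances a deployment needs. A small sketch with hypothetical inputs:

```python
import math

# Size the number of CN gNMI Collector instances for a deployment,
# honoring both documented per-instance limits. Inputs are hypothetical.
MAX_RATE_PER_COLLECTOR = 1500  # Telemetry notifications/s
MAX_NES_PER_COLLECTOR = 2000   # NEs

def collectors_needed(num_nes: int, total_notifications_per_sec: float) -> int:
    by_rate = math.ceil(total_notifications_per_sec / MAX_RATE_PER_COLLECTOR)
    by_nes = math.ceil(num_nes / MAX_NES_PER_COLLECTOR)
    return max(by_rate, by_nes)

print(collectors_needed(5000, 4000))  # -> 3: both constraints demand 3 instances
```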

Scale limits for Cloud Native Telemetry (accounting file processing)

NSP processing of NE accounting files is performed by Accounting Processor instances. Multiple Accounting Processors can be deployed for higher throughput, based on network size. Each Accounting Processor instance can support up to 2000 NEs and a file output rate of 6200 records/s, with database and Kafka rates of 1500 records/s.

Typically, local storage can support up to two Accounting Processor instances, depending on the available system resources. For higher throughput, customer-provided network storage should be used.

A maximum scale of 140 million accounting records every 15 minutes is achieved with 25 Accounting Processor instances, subject to NSP system capacity.
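
The headline figure is consistent with the per-instance file output rate, as a quick cross-check shows:

```python
# Cross-check of the documented accounting figures: 25 instances at the
# per-instance file output rate over a 15-minute window.
RECORDS_PER_SEC_PER_INSTANCE = 6200
INSTANCES = 25
WINDOW_S = 15 * 60

total = RECORDS_PER_SEC_PER_INSTANCE * INSTANCES * WINDOW_S
print(f"{total:,} records / 15 min")  # 139,500,000, i.e. ~140 million
```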

Scale limits for combined MDM and Cloud Native Telemetry persistence

The maximum number of rows uploaded to the database per minute is 90 000 per active MDM instance and CN gNMI Collector instance combined, where one row equals one Telemetry record. This limit applies to both Postgres and Auxiliary database storage.

If NSP is deployed with multiple active MDM and/or CN Telemetry instances, the maximum collective upload rate to a Postgres database is 180 000 rows per minute. When Telemetry data is stored in the Auxiliary database, the upload rate scales horizontally with additional active MDM and/or CN Telemetry instances. Network activity, database activity, and latency can also affect database upload rates.
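
A minimal sketch (hypothetical instance counts and rates) that checks a planned Postgres deployment against these limits:

```python
# Check a Telemetry persistence plan against the documented upload
# limits (rows/minute, where 1 row = 1 Telemetry record). Instance
# counts and rates below are hypothetical. Applies to Postgres storage;
# AuxDB scales horizontally instead.
PER_INSTANCE_ROWS_PER_MIN = 90_000   # per active MDM or CN gNMI Collector instance
POSTGRES_MAX_ROWS_PER_MIN = 180_000  # collective cap for Postgres storage

def postgres_plan_fits(instances: int, rows_per_min_each: int) -> bool:
    """True if each instance and the collective rate stay within limits."""
    return (rows_per_min_each <= PER_INSTANCE_ROWS_PER_MIN
            and instances * rows_per_min_each <= POSTGRES_MAX_ROWS_PER_MIN)

print(postgres_plan_fits(2, 90_000))  # True: 180 000 rows/min is exactly the cap
print(postgres_plan_fits(3, 70_000))  # False: 210 000 exceeds the collective cap
```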

Event timeline limits for managed NEs and services

Some applications make use of historical data for managed NEs and services. The amount of historical data is limited according to the mediation component and database storage.

NFM-P managed NEs and services have a default event timeline of 1 week for Oracle database storage, which can be configured to a maximum of 1 month. For Auxiliary database storage, the event timeline can be increased to a maximum of 1 year.

For NSP, MDM managed NEs and services have an event timeline of 1 week for Postgres database storage. Auxiliary database storage is not supported for MDM managed NEs and services.

Scale limits for alarms

The following table defines the alarm limits for NSP:

Key dimension | Maximum number of alarms
Historical alarms from non-NFM-P systems (e.g., WS-NOC, MDM, NSP) | 10 million
Active alarms from NFM-P, WS-NOC, and/or MDM-managed nodes | 500 thousand

Note: Alarm limits describe the aggregate number of alarms that can be handled by NSP but do not supersede individual datasource limits.

The following table defines the performance limits for alarms:

Key dimension | Rate
Sustained alarm rate (combined from all sources) (see Note) | 200/second
Concurrent event notification subscriptions limit | 5

Note: Alarm rate describes the aggregate volume that can be handled by NSP but does not supersede individual datasource limits.

The following table defines the squelching limits for alarms:

Key dimension | Maximum number of objects
Port squelching | 1000 ports
Network element squelching | 1000 network elements
Resource group squelching | 250 000 ports and/or network elements combined

Note: Because the maximum size for a port group is currently 100k (100 000) ports, multiple resource groups are needed to achieve the 250k squelching limit.
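
The arithmetic behind the note:

```python
import math

# Minimum number of resource groups needed to reach the 250 000-object
# squelching limit, given the 100 000-port cap per port group.
print(math.ceil(250_000 / 100_000))  # -> 3 groups
```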

Network Health Overview

Network Map

The number of NEs and links managed in the network may affect performance and topology rendering time.

Multi-layer maps support a recommended maximum of 4000 objects. Multi-layer map loading times increase with the number of NEs in the map.

Link Utilization Map

The Link Utilization map view limits the number of endpoints and links that can simultaneously subscribe to statistics. The following table lists the recommended maximum number of links in the current operational view for different NE types. These are not absolute maximums, but safe recommended limits based on product testing.

Link Type | Maximum
7750 SR physical link | 500
7705 SAR / 7210 SAS physical link | 200
7750 SR LAG link | 160
7705 SAR / 7210 SAS LAG link | 60

MD-OAM Scale and Performance

Scale limits and performance metrics are based on testing in a lab environment with an enhanced NSP deployment.

Table 5-7: Scale and performance limits - isolated per metric type

Performance Metric | Scale Limit (records/time) | Scale Limit (tests/time interval)
gNMI collection rate | 2.7M records/15 min | 30 000 tests @ 1 metric/10 s
oam-pm accounting collection rate | 2.4M records/15 min | 36 000 tests @ 22 bins, 15-min measured interval
combined CRUD rate (Note 1) | 48 000 CRUD actions/15 min | 24 000 service validation workflows/hour

Notes:
  1. Combined CRUD refers to an aggregation of CRUD (create/update/delete) operations in the setup/execution/retrieval/deletion of service validation tests.

Table 5-8: Scale and performance limits - combined activity

Performance Metric | Scale Limit (records/time) | Scale Limit (tests/time interval)
gNMI collection rate + oam-pm accounting collection rate | 3.8M records/15 min | 20 000 tests @ 1 metric/10 s + 30 000 tests @ 22 bins, 15-min measured interval
gNMI collection rate + oam-pm accounting collection rate + combined CRUD rate (Note 1) | 2.2M records/15 min + 24 000 CRUD actions/15 min | 12 000 tests @ 1 metric/10 s + 16 000 tests @ 22 bins, 15-min measured interval + 12 000 service validation workflows/hour

Notes:
  1. Combined CRUD refers to an aggregation of CRUD (create/update/delete) operations in the setup/execution/retrieval/deletion of service validation tests.

The number of records per test type can be determined from the following table.

Table 5-9: Records per Test Result

Test Type | Record Count | Results class info
cfm-dmm (accounting) | 24 to 93 per measurement interval | 3: /base/oampm-accounting/cfm-dmm-session-acc-stats; 3 * # of bins (7-30): /base/oampm-accounting/cfm-dmm-bin-acc-stats
cfm-dmm (gNMI) (Note 1) | 1 to 6 per sample window | 1 * # of metrics: /base/oam-pm/eth-cfm-delay-streaming
cfm-lmm (accounting) | 1 per measurement interval | 1: /base/oampm-accounting/cfm-lmm-session-acc-stats
cfm-loopback | 1 per test execution | 1: /base/oam-result/loopback-result
cfm-slm (accounting) | 1 per measurement interval | 1: /base/oampm-accounting/cfm-slm-session-acc-stats
twamp-light (gNMI) (Note 1) | 1 to 6 per sample window | 1 * # of metrics: /base/oam-pm/twamp-light-delay-streaming
twamp-light delay (accounting) | 24 to 93 per measurement interval | 3: /base/oampm-accounting/twl-session-acc-stats; 3 * # of bins (7-30): /base/oampm-accounting/twl-bin-acc-stats
twamp-light loss (accounting) | 1 per measurement interval | 1: /base/oampm-accounting/twl-session-loss-acc-stats

Notes:
  1. Metrics for gNMI collection records include fd-average (forward, backward, and round-trip directions) and ifdv-average (forward, backward, and round-trip directions).
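
For example, the record-count bounds for the delay accounting tests, and the oam-pm accounting rate in Table 5-7, can be reproduced from the per-test record formula:

```python
# Records per measurement interval for delay accounting tests (cfm-dmm,
# twamp-light delay): 3 session-stat records plus 3 records per bin.
def delay_accounting_records(num_bins: int) -> int:
    if not 7 <= num_bins <= 30:
        raise ValueError("bin count must be 7 to 30")
    return 3 + 3 * num_bins

print(delay_accounting_records(7))   # -> 24, the lower bound in Table 5-9
print(delay_accounting_records(30))  # -> 93, the upper bound in Table 5-9

# Cross-check against Table 5-7: 36 000 tests @ 22 bins, 15-min interval
print(36_000 * delay_accounting_records(22))  # -> 2484000 (~2.4M records/15 min)
```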

Device Management

The following limitations apply when using NSP’s Device Management views.

As the Device Management inventory tree is manually traversed, objects are loaded 500 at a time, to a maximum of 2000 objects.

If any logical tree level shows more than 500 objects, that entire logical tree level is refreshed after a manual refresh is clicked, but only the first 500 objects are displayed.

Scale limits of Map Layout and Group Directories

Nokia recommends a maximum of 2000 NEs per region for the Operational map view.

Where IP/optical coordination is deployed, additional scaling limits apply to Map Layout.

Group directories also have scaling limits.

Scale limits for NSP Baseline Analytics

NSP Baseline Analytics can store collected data in the Postgres database or in the Auxiliary database. Baselines are supported on NFM-P and MDM managed nodes.

Key dimension | Postgres database storage | AuxDB storage
Number of baselines | 10 000 | 100 000
Retention time | 35 days | 403 days
Collection Interval | 300 seconds | 300 seconds
Window Duration | 15 minutes | 15 minutes
Season | 1 week | 1 week

Note: Reducing the Collection Interval or Window Duration will result in a reduced number of Baselines that can be supported.
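
The note gives no formula; purely as a rough sketch, the effect can be estimated by assuming the supported baseline count scales linearly with the Collection Interval. This linear scaling is an assumption, not a documented NSP rule.

```python
# Rough estimate only: assumes the supported baseline count scales
# linearly with the Collection Interval, an assumption implied by the
# note above rather than a documented NSP formula.
DEFAULT_INTERVAL_S = 300
DEFAULT_BASELINES = 10_000  # Postgres storage default from the table

def estimated_baselines(interval_s: int) -> int:
    return DEFAULT_BASELINES * interval_s // DEFAULT_INTERVAL_S

print(estimated_baselines(300))  # -> 10000, the documented value
print(estimated_baselines(60))   # -> 2000, an estimate under the assumption
```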

Scale limits for NSP Indicators

NSP Indicators can store collected data in the Postgres database or in the Auxiliary database. Indicators are supported on NFM-P and MDM managed nodes.

NSP Indicators supports a maximum of 20 Indicator rules. The recommended maximum number of resources feeding a single Indicator rule is 2500.

Key dimension | Postgres database storage | AuxDB storage
Number of resources (number of incoming entities into NSP Indicators) | 10 000 | 50 000
Retention time | 35 days | 403 days
Collection Interval (Complex Indicators) | 300 seconds | 300 seconds
Collection Interval (Simple Indicators) | 900 seconds | 900 seconds
Window Duration (Complex Indicators) | 15 minutes | 15 minutes

Note: Reducing the Collection Interval or Window Duration will result in a reduced number of resources that can be supported.

Flow Collector scale for NAT collection

The Flow Collector BB NAT collection limit is 350 000 records/s when the customer retrieves files using a native (S)FTP application.

Scale limits for Large Scale Operations

The Large Scale Operations (LSO) feature has scaling limits for the framework and for device operations.

The following table summarizes the framework limits.

Key dimension | Maximum
Number of concurrent LSO executions | 20
Number of stored operations (historical and running) | 500
Number of operation types | 100
Number of targets per operation | 10 000
Number of phases per operation type | 10

The following table summarizes the NE device operation limits.

Key dimension | Maximum
Number of nodes for NE backup | 10 000

Note: Numbers are based on using enhanced profile disk availability for File Service.

Note: Role-Based Access Control does not apply to LSO app user operations in this release.

Scale limits for Zero Touch Provisioning

The following limits apply to Day 0 Zero Touch Provisioning (ZTP):

Key dimension | Maximum
NE instances created per second | 5
Simultaneous downloads from file server | 10
ZTP instances in various provisioning states | 1000
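
An OSS client driving ZTP should pace NE instance creation to the documented rate. A minimal sketch, in which create_ne() is a hypothetical placeholder for the actual NBI call:

```python
import time

# Pace ZTP NE instance creation to the documented limit of 5 per second.
# create_ne() is a hypothetical placeholder for the actual NBI call.
MAX_CREATES_PER_SEC = 5

def create_ne(serial: str) -> None:
    print(f"created ZTP NE instance for {serial}")  # placeholder action

def provision(serials: list[str]) -> None:
    for i, serial in enumerate(serials, start=1):
        create_ne(serial)
        if i % MAX_CREATES_PER_SEC == 0:
            time.sleep(1.0)  # crude pacing: at most 5 creations per second

provision([f"SN{n:04d}" for n in range(12)])
```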

Scale limits for Generic Mediator

The Generic Mediator application has the following scaling limits:

Key dimension | Maximum
Concurrent threads | 10
Request queue size | 50

User Access Control Performance

In an NSP deployment with User Access Control enabled, more than 10 user groups defined, and a large network (more than 2000 NEs), NSP GUI performance may be affected if the resource groups contain a very large number of equipment and/or service objects.