NFM-P Scalability guidelines
Scalability limits
Table 5-12, NFM-P Release 24.8 scalability limits, lists the scalability limits for Release 24.8. Note that:
- These limits require particular hardware specifications and a specific deployment architecture.
- Scale limits for all network elements assume a maximum sustained trap rate of 100 traps/second for the entire network. NFM-P’s trap processing rate depends on many factors, including trap type, NE type, NE configuration, NE and network latency, network reliability, and the size and speed of the servers hosting the NFM-P application. NFM-P scalability testing runs at a sustained trap rate exceeding 100 traps/second for the largest deployment and server configurations. A rough trap-rate budgeting sketch appears before Table 5-12.
Other NSP Components Hardware platform requirements contains information about identifying the correct platform for a particular network configuration. To achieve these scale limits, a distributed NFM-P configuration is required; an NFM-P auxiliary statistics collector and a storage array for the NFM-P database station may also be required.
Contact Nokia to ensure that you have the correct platform and configuration for your network size.
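As a rough illustration of the 100 traps/second network-wide budget described above, the sketch below estimates an aggregate sustained trap rate from assumed per-NE averages and checks it against that budget. The NE classes, counts, and per-NE rates are hypothetical planning inputs, not NFM-P data or an NFM-P tool.

```python
# Minimal sketch: compare an estimated aggregate trap rate against the
# 100 traps/second network-wide budget. All per-NE rates below are
# hypothetical planning figures, not measured NFM-P values.

TRAP_BUDGET_PER_SECOND = 100  # sustained budget for the entire managed network

# Estimated NE counts and average sustained traps/second per NE, by class.
estimated_classes = {
    "core routers": {"count": 200, "traps_per_sec": 0.05},
    "access nodes": {"count": 20_000, "traps_per_sec": 0.003},
    "GNEs": {"count": 5_000, "traps_per_sec": 0.002},
}

aggregate = sum(c["count"] * c["traps_per_sec"] for c in estimated_classes.values())
print(f"Estimated sustained trap rate: {aggregate:.1f} traps/second")
if aggregate > TRAP_BUDGET_PER_SECOND:
    print("Estimate exceeds the budget; review NE trap configuration or filtering.")
else:
    print("Estimate is within the 100 traps/second budget.")
```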
Table 5-12: NFM-P Release 24.8 scalability limits

| Attribute of managed network | Scaling limit |
|---|---|
| Maximum number of managed MDAs | 60 000 |
| Maximum number of network elements | 50 000 |
| Maximum number of GNEs 1 | 50 000 |
| Maximum number of managed services | 4 000 000 |
| Maximum number of optical transport services | 20 000 |
| Maximum number of 1830 VWM RMUs | 60 000 |
| Maximum number of SAPs | 12 000 000 |
| Maximum number of simultaneous NFM-P GUI sessions | 250 |
| Maximum number of simultaneous active XML API HTTP applications | 30 |
| Maximum number of simultaneous active XML API JMS applications | 20 |
| Maximum number of outstanding alarms | 50 000 |
| Maximum number of outstanding alarms - distributed configuration | 250 000 |
| Maximum number of historical alarms | 9 600 000 |
| Maximum number of TCAs | 250 000 |
| Maximum number of monitored services in the Service Supervision application | 1 000 000 |
Notes:

1. The number of interfaces on a GNE, and the traps that may arise from them, is the key factor determining the number of GNE devices that can be managed. Because GNE devices are expected to be access devices, the sizing is based on an average of 10 interfaces of interest on each device (10 x 50 000 = 500 000 interfaces). Processing of traps from interface types that are not of interest can be turned off in NFM-P. Under high trap load, NFM-P may drop traps.

NFM-P uses the number of MDAs as the fundamental unit of network dimensioning. To determine the current or eventual size of a network, the number of deployed or expected MDAs, as opposed to the capacity of each router, must be calculated. Table 5-13 lists the per-NE equivalencies; a dimensioning sketch follows the table notes.
Table 5-13: Network element maximums and equivalency

| Network element type | Maximum number of network elements supported | MDA equivalency |
|---|---|---|
| 7750 SR, 7450 ESS, 7450 SR | 50 000 | |
| 7705 SAR | 50 000 | 1 NE == 1 equivalent MDA |
| 7250 IXR-6 / 7250 IXR-10 / 7250 IXR-R4 / 7250 IXR-R6 / 7250 IXR-R6d / 7250 IXR-R6dl | 50 000 | 1 MDA == 1 equivalent MDA |
| 7250 IXR-s / 7250 IXR-e / 7250 IXR-e2 | 25 000 | 1 NE == 2 equivalent MDAs |
| 7210 SAS | 50 000 | 1 NE == 1 equivalent MDA |
| OMNISwitch 6250, 6400, 6450, 6850 (each shelf in the stackable chassis) | 50 000 | 1 NE == 1 equivalent MDA |
| OMNISwitch 6350, 6465, 6560, 6865 (each shelf in the stackable chassis) | 5000 | 1 NE == 1 equivalent MDA |
| OMNISwitch 6860, 6860E, 6860N | 5000 | 1 NE == 1 equivalent MDA |
| OMNISwitch 6900 | 800 | 1 NE == 1 equivalent MDA |
| OMNISwitch 9600, 9700, 9700E, 9800, 9800E (each NI) | 1000 | 1 NI == 1 equivalent MDA |
| OMNISwitch 10K (each NI) | 400 | 1 NI == 1 equivalent MDA |
| 9500 MPR / Wavence SM | 15 000 | 1 NE == 1 equivalent MDA |
| 1830 VWM OSU | 2000 | |
| VSC | 1 | N/A |
Notes:

1. The IMM card has an MDA equivalency of 2 MDAs per card.
2. The CMA card has an MDA equivalency of 1 MDA per card.
3. The 1830 VWM OSU card slot has an MDA equivalency of 1/4 MDA per card, to a maximum MDA equivalency of 30 000.
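To make the MDA-based dimensioning described before Table 5-13 concrete, the following sketch totals equivalent MDAs for a hypothetical inventory using a subset of the equivalency rules from the table, then checks the result against the 60 000 managed-MDA limit from Table 5-12. The device mix and counts are illustrative assumptions, and counting the 7750 SR family per installed MDA is an assumption consistent with the IMM/CMA notes above rather than a rule stated in the table itself.

```python
# Minimal sketch: estimate the equivalent-MDA count for a planned network
# using a subset of the per-NE / per-MDA equivalency rules from Table 5-13.
# The inventory below is a hypothetical example, not a recommended mix.

# Equivalent MDAs contributed by one unit of each inventory item.
EQUIVALENCY = {
    "7750 SR MDA": 1,     # assumption: 7750 SR family counted per installed MDA
    "7705 SAR NE": 1,     # 1 NE == 1 equivalent MDA
    "7250 IXR-e NE": 2,   # 1 NE == 2 equivalent MDAs
    "7210 SAS NE": 1,     # 1 NE == 1 equivalent MDA
    "OS 9700 NI": 1,      # 1 NI == 1 equivalent MDA
}

MAX_EQUIVALENT_MDAS = 60_000  # Table 5-12: maximum number of managed MDAs


def equivalent_mdas(inventory: dict) -> int:
    """Sum the equivalent-MDA contribution of every item in the inventory."""
    return sum(EQUIVALENCY[item] * count for item, count in inventory.items())


if __name__ == "__main__":
    planned = {
        "7750 SR MDA": 12_000,
        "7705 SAR NE": 8_000,
        "7250 IXR-e NE": 1_500,
        "7210 SAS NE": 4_000,
        "OS 9700 NI": 500,
    }
    total = equivalent_mdas(planned)
    print(f"Equivalent MDAs: {total} of {MAX_EQUIVALENT_MDAS} allowed")
    print("Within the managed-MDA limit" if total <= MAX_EQUIVALENT_MDAS
          else "Exceeds the managed-MDA limit")
```

A deployment plan would substitute its own inventory and the full set of equivalency rules from Table 5-13; the sketch only shows the arithmetic.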
NFM-P performance targets
Table 5-14, NFM-P performance targets, lists the performance targets for the NFM-P. Actual results may fluctuate with factors such as network dimensions and the size and speed of the server platform.
Table 5-14: NFM-P performance targets

| Performance item description | Target |
|---|---|
| NFM-P client GUI performance | |
| Time to launch an NFM-P client GUI | 1 - 2 minutes |
| Time to launch an NFM-P client GUI configuration form | ~5 seconds |
| Time to save an NFM-P client GUI configuration form | ~2 seconds |
| NFM-P server performance | |
| Time to restart the NFM-P server | 15 - 30 minutes (subject to network dimensions) |
| NFM-P database backup (without statistics) | Up to 60 minutes (subject to network size) |
| NFM-P database restore | ~45 minutes |
| NFM-P server activity switch | 10 - 30 minutes (subject to network dimensions) |
| NFM-P DB switchover (invoked through the GUI) | <10 minutes |
| NFM-P DB failover | 30 minutes when managing the maximum number of devices |
| Recovery of standby NFM-P database after failover | <75 minutes |
| Upgrade performance | |
| NFM-P client upgrade | ~10 minutes |
| NFM-P complex upgrade (server, database, auxiliaries) 1 | <6 hours |
| NFM-P upgrade maximum visibility outage with NFM-P redundant system 2 | 15 - 30 minutes |
Notes:

1. The target includes the installation of the software on the existing servers and the NFM-P database conversion. Operating system installation/upgrades, patching, pre- and post-upgrade testing, and file transfers are excluded from the target.
2. Provided that proper planning and parallel execution procedures are followed.