4.3.7 Communications Architecture and Supporting Hardware Solutions
From an architectural viewpoint, it is simpler to discuss hardware and communications together, since communications is the glue that binds the hardware components together.
4.3.7.1 CHART Network Backbone Architecture
CHART II communications will be based on work in progress to implement the CHART Backbone Network, CHART ATM Video Control Module (AVCM), and CHART Field Management Station (FMS)—although the CHART II FMS will have far more functionality than its predecessor.
Figure 4-12 illustrates the CHART Backbone Network architecture. The nodes of the network are the SOC, District 3, and District 4. These ATM nodes are linked by a resource-shared MDSHA OC-12 SONET infrastructure; the sum of the bandwidth used by all MDSHA nodes on the SONET cannot exceed 622 Mbps (OC-12). The Montgomery County Traffic Management Center (MCTMC) will soon join these sites on the fiber optic network. Each MDSHA node on the SONET can operate concurrently at a 155 Mbps (OC-3) constant bit rate. Beyond being fast, the SONET interconnecting these four nodes also crosses the Baltimore-Washington Local Access Transport Area (LATA) boundary. The practical effect is a high bandwidth, toll-free, multimedia communication path. This is consistent with the CHART Business Plan and telecommunications implementation.
Figure 4-12. CHART Network Backbone Physical View
CSC/PBFI Advantage: The SONET interconnecting the CHART II network nodes crosses the Baltimore-Washington LATA, creating a high bandwidth, toll free, multimedia communication path
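As a quick arithmetic check of the backbone figures above: SONET rates are exact multiples of the 51.84 Mbps OC-1 rate, so four nodes each transmitting at OC-3 exactly fill the shared OC-12. A minimal sketch in Python:

```python
# SONET line rates are exact multiples of the OC-1 rate of 51.84 Mbps.
OC1_MBPS = 51.84

def oc_rate(n: int) -> float:
    """Line rate of an OC-n SONET carrier, in Mbps."""
    return OC1_MBPS * n

oc3 = oc_rate(3)    # per-node constant bit rate: 155.52 Mbps
oc12 = oc_rate(12)  # shared backbone capacity: 622.08 Mbps

# Four nodes (SOC, District 3, District 4, MCTMC), each at OC-3,
# exactly consume the OC-12 backbone capacity.
assert abs(4 * oc3 - oc12) < 1e-6
```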
4.3.7.2 CHART SOC, AOC, District Offices, TOCs, and MSP Barrack Architectures
We now address the common CHART II site hardware and communication architectures. Figure 4-13 illustrates an ATM node that is a microcosm of CHART II. It includes elements to support every facet of CHART II, from multicast video to traveler information dissemination via web server and kiosk. Note that the kiosk is shown connected to the Internet. The CSC/PBFI Team understands that public kiosks will be used to access many State services, not just traveler information. An efficient way to do this may be via the Internet with a browser that has programmed links to various State information centers.
Figure 4-13 illustrates what the SOC and the MdTA AOC will look like. Figure 4-14 illustrates what Districts 3, 4, and 7 will look like. Figure 4-15 illustrates what the MCTMC and MdTA Bay Bridge West (TOC-5) will look like. The latter two sites, like the SOC and MdTA AOC, have video distribution switches to distribute video from their roadside cameras throughout their sites.
Figure 4-13. SOC and MdTA AOC Site Architecture
CSC/PBFI Advantage: An ATM node, a microcosm of CHART II, includes elements to support CHART II functions
Figure 4-14. District 3, 4, and 7 Site Architecture
CSC/PBFI Advantage: A specialized architecture for the urban districts
Figure 4-15. MCTMC and TOC-5 Site Architecture
CSC/PBFI Advantage: Architecture for these sites includes video distribution switches to distribute video from their roadside cameras throughout their site
Figure 4-16 illustrates the architecture for metropolitan MDSHA Maintenance Shops and metropolitan Maryland State Police barracks. To be termed metropolitan, a shop or barrack must serve the CHART II video area; that is, the Golden Trapezoid enclosed by Districts 3, 4, 5, and 7. Each shop and barrack will be provisioned to support one CCTV image (six DS-0's) and four DS-0's for compressed PCM long-distance voice. Doing the math, that leaves 14 DS-0's of the T-1 available for the MDSHA Enterprise Local Area Network (LAN), needed for the CHART II Workstation, or for additional PCM allocation.
Figure 4-16. Metropolitan MDSHA Maintenance Shop and MSP Barrack Architecture
CSC/PBFI Advantage: These shops or barracks service the CHART II video area within the Golden Trapezoid
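The DS-0 budget described above can be sketched as follows; the constants are the standard T-1 and DS-0 figures, and the allocations are those quoted in the text.

```python
# A T-1 carries 24 DS-0 channels of 64 Kbps each.
T1_DS0_COUNT = 24
DS0_KBPS = 64

video_ds0 = 6   # one compressed CCTV image at 384 Kbps
voice_ds0 = 4   # compressed PCM long-distance voice

# Channels left over for the MDSHA Enterprise LAN, the CHART II
# Workstation, or additional PCM allocation.
remaining_ds0 = T1_DS0_COUNT - video_ds0 - voice_ds0

assert video_ds0 * DS0_KBPS == 384  # matches the H.261 video rate
assert remaining_ds0 == 14          # matches the figure in the text
```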
Figure 4-17 implies that Districts 1, 2, and 6 do not have a requirement for CHART II video, but they do have requirements for other CHART II and MDSHA Enterprise LAN services. At this time, a single DS-1 will provide adequate bandwidth for the required services, including voice. Note, however, that an inverse multiplexer may be used to aggregate multiple DS-1 circuits (1.536 Mbps effective bandwidth each) into something greater, such as 3.1, 4.6, 6.1, 9.2, or 12.3 Mbps, as data bandwidth needs grow. Aggregating up to eight individual DS-1 circuits is cheaper than leasing a DS-3. Figure 4-18 illustrates two architectures that will be used to expand CHART II bandwidth to rural offices without resorting to a DS-3 lease.
Figure 4-17. Rural MDSHA District Office Architecture
CSC/PBFI Advantage: Districts 1, 2 and 6 don’t require CHART II video, but do require other CHART and MDSHA Enterprise LAN services
The left side of Figure 4-18 illustrates the use of six DS-1's to provide 9.2 Mbps of bandwidth for extra data transmission capacity. The right side of the figure shows the use of six DS-1's to provide 9.2 Mbps of bandwidth for multimedia applications, in this case data and voice. Note that a DS-3 multiplexer must be used on each end of the connection to channelize the services—just as a T-1 channel bank channelizes the 24 DS-0's that comprise the T-1—providing 7.7 Mbps for data (5 DS-1's) and 1.536 Mbps (1 DS-1) to the PBX for voice.
Figure 4-18. CHART Architecture Samples
CSC/PBFI Advantage: An inverse multiplexer can save circuit costs
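The inverse-multiplexing arithmetic above can be verified with a short sketch; each DS-1 contributes 1.536 Mbps of usable payload, and the quoted aggregates correspond to 2, 3, 4, 6, and 8 circuits.

```python
# Usable payload of one DS-1: 24 channels x 64 Kbps = 1.536 Mbps.
DS1_MBPS = 1.536

def aggregate_mbps(n: int) -> float:
    """Effective bandwidth from n inverse-multiplexed DS-1 circuits."""
    return round(n * DS1_MBPS, 1)

# Aggregates quoted in the text: 3.1, 4.6, 6.1, 9.2, and 12.3 Mbps.
quoted = [aggregate_mbps(n) for n in (2, 3, 4, 6, 8)]
assert quoted == [3.1, 4.6, 6.1, 9.2, 12.3]

# The figure 4-18 split: five DS-1's for data, one DS-1 for PBX voice.
data_mbps = aggregate_mbps(5)   # 7.7 Mbps for data
voice_mbps = DS1_MBPS           # 1.536 Mbps to the PBX
assert data_mbps == 7.7
```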
Rural Maintenance Shops and MSP barracks do not get CHART II video but do have access to other CHART II, MDSHA Enterprise LAN, and long-distance voice services. The architecture can be seen in Figure 4-19. Due to their low data needs relative to the rural district offices, rural Maintenance Shops and MSP barracks will be provisioned with T-1 frame relay circuits that have a 722 Kbps committed information rate (CIR). Approximately 256 Kbps (4 DS-0's) of bandwidth will be allocated for voice and the remainder for data. PBX services for the shops and MSP barracks will be extended from the District Offices.
Figure 4-19. Rural MDSHA Maintenance Shop Architecture
CSC/PBFI Advantage: Rural Maintenance Shops and MSP Barracks have access to CHART II, MDSHA Enterprise LAN and long distance voice services, with the exception of CHART video
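The voice/data split on the rural frame relay circuit works out as below; the CIR value is the one quoted above.

```python
DS0_KBPS = 64
CIR_KBPS = 722                      # committed information rate from the text

voice_kbps = 4 * DS0_KBPS           # four DS-0's reserved for voice
data_kbps = CIR_KBPS - voice_kbps   # remainder available for data

assert voice_kbps == 256
assert data_kbps == 466  # derived: bandwidth left for data services
```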
4.3.7.3 CHART II Video Communication Architecture
The requirements for CHART II roadside video distribution drove the CHART Backbone Network architecture design, especially the use of ATM. The CHART Telecommunications Study, published in June 1996, recommended that MDSHA implement an ATM architecture and lease T-1 circuits to support CHART video requirements. This is a cheaper alternative to installing fiber optic cable along the Interstate and primary arterial roadways.
Please refer to Figure 4-20. CHART II video will be collected from roadside CCTV cameras, digitized, and compressed using ITU H.261 compression. When compressed, the video requires 384 Kbps of bandwidth to provide 15 frames per second (fps) motion video. This means that each camera requires six DS-0's of bandwidth for video. Each camera must also be remotely panned and tilted, its lens zoomed in and out, and the lens iris opened and closed to pass more or less light. In addition, each camera provides the capability for remote monitoring of its enclosure environment, although this feature is not currently used.
When fully provisioned, a T-1 circuit can support three CCTV cameras, three pan/tilt/zoom/iris (P/T/Z) control circuits, three video coder administration circuits, one equipment enclosure alarm circuit, one channelized multiplexer administration circuit, and one telephone voice circuit.
Roadside video will be fed to a CHART II ATM switch. The T-1 circuit will be terminated on a structured circuit emulation port. Using structured circuit emulation, the ATM switch can logically bundle any number of adjacent T-1 DS-0 channels to create logical pipes. Hence, for Figure 4-20, the ATM structured circuit emulation switch port will be configured for three bundles of six DS-0's each and six individual DS-0's.
Figure 4-20. Roadside CCTV Camera Site Hardware Architecture
CSC/PBFI Advantage: CHART II video will be collected from roadside CCTV cameras, digitized and compressed
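The DS-0 bundling on the structured circuit emulation port can be illustrated with a short sketch. The channel numbering and grouping below are illustrative only; actual provisioning syntax is switch-specific.

```python
# DS-0 channels 1..24 on the roadside T-1.
T1_CHANNELS = list(range(1, 25))

def make_bundles(sizes):
    """Carve consecutive DS-0 channels into bundles of the given sizes."""
    bundles, start = [], 0
    for size in sizes:
        bundles.append(T1_CHANNELS[start:start + size])
        start += size
    return bundles

# Three video bundles of six DS-0's each, then six individual DS-0's.
layout = make_bundles([6, 6, 6] + [1] * 6)

assert sum(len(b) for b in layout) == 24          # the whole T-1 is used
assert [len(b) for b in layout[:3]] == [6, 6, 6]  # the video bundles
```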
CHART II video will be delivered to two types of sites: those that have an existing video distribution switch and those that do not. Four sites have video distribution switches: the SOC, MdTA AOC, TOC-5 at the MdTA W.P. Lane Memorial Bridge, and the Montgomery County Traffic Management Center TOC. Figures 4-21 and 4-22 illustrate the difference in distribution architecture.
In Figure 4-21, four bundles of compressed video are provided by a CHART II ATM switch—local or remote—to a channelized multiplexer. The multiplexer feeds a bundle to each of up to four video decoders for decompression and conversion to NTSC analog video. Recall that a compressed video stream requires a bundle of six DS-0's and there are four possible bundles of six DS-0's per T-1. The NTSC video is fed to the site video distribution switch for distribution throughout the complex.
Figure 4-21. SOC, MCTMC, MdTA AOC/Bay Bridge Video Distribution Architecture
CSC/PBFI Advantage: Four bundles of compressed video are provided by a CHART II ATM switch to a channelized multiplexer
As seen in Figure 4-22, the process is the same except that the NTSC output of each video decoder is fed directly to a monitor.
Figure 4-22. Video Distribution Architecture at Other Metropolitan Sites
CSC/PBFI Advantage: Video output is fed directly to monitors
4.3.7.4 Field Management Station Architecture
The FMS is being developed by the CSC/PBFI Team as a distributed communication device for the CHART network under a separate task. The communications architecture, shown in Figure 4-23, is relatively simple. An FMS/AVCM subsystem server will be co-located with an ATM switch. It will connect to the MDSHA Enterprise LAN to forward ITS roadside telemetry to a processing agent, which may be co-located on the FMS/AVCM subsystem.
As currently envisioned, the FMS/AVCM subsystem will communicate with one or more remote FMS communication controllers located within a Bell Atlantic service area. CHART II will take advantage of a Bell Atlantic Centrex tariff option that allows unlimited ISDN calls without usage charges within the Centrex service area. Hence, MDSHA will pay only a low monthly line charge rather than a monthly line charge plus per-minute usage charges. The remote FMS controllers will use NTCIP over TCP/IP across a 56 Kbps frame relay circuit.
Figure 4-23. Distributed FMS Architecture
CSC/PBFI Advantage: An FMS/AVCM server will connect to the MDSHA Enterprise LAN to forward ITS roadside telemetry
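The economics behind the Centrex recommendation reduce to a flat line charge versus a metered one. The rates below are hypothetical placeholders, not Bell Atlantic tariff figures; only the shape of the comparison matters.

```python
def monthly_cost(line_charge: float, minutes: int = 0,
                 per_minute: float = 0.0) -> float:
    """Monthly circuit cost: flat line charge plus any metered usage."""
    return line_charge + minutes * per_minute

# Hypothetical rates for illustration only.
centrex = monthly_cost(line_charge=30.0)  # flat-rate ISDN within Centrex
metered = monthly_cost(line_charge=30.0,
                       minutes=30 * 24 * 60,  # polled around the clock
                       per_minute=0.01)

assert centrex < metered  # flat-rate Centrex wins for continuous polling
```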
4.3.7.5 Supporting Hardware Solutions
The SOC is the CHART II nerve center and the MdTA AOC is its designated backup and disaster recovery site. Within these two centers, the video equipment, including the video wall, will be retained, as will the wiring. All other equipment will be new, including the workstations and LAN server. Equipment and communication circuits to support the MDSHA Enterprise LAN/WAN POP, AVCM, FMS, and CHART field devices are beyond the scope of this proposal. The WAN communication, site POP router (if the existing router must be replaced), AVCM, FMS, and workstation infrastructure for CHART II is being supplied to MDSHA under other tasks of the Maryland Network Management Services contract.
The SOC will be provisioned with high availability servers and the MdTA AOC will be provisioned with high reliability servers. Both locations will be provisioned with a second highway operations technician (HOT) operator position monitor to support dual screen GUI applications and a 10 Mbps to 100 Mbps LAN upgrade.
The CSC/PBFI Team will deliver state-of-the-art hardware at the time of purchase. This means the State will most likely see hardware that is more advanced than that described in this proposal. The hardware selected will be based on the current state of the practice, within the proposed budget. A primary criterion for equipment selected by the CSC/PBFI Team will be compatibility with the Maryland Network Management Services (MNMS) effort currently underway. The CSC/PBFI Team will provide MDSHA with cut sheets for approval prior to purchasing the hardware.
The SOC is a traffic and crisis management center. It operates 24 hours daily, 7 days per week, 52 weeks per year. When CHART II becomes operational, the SOC will
Figure 4-24 shows the SOC topology recommended by the CSC/PBFI Team. Dual monitors for the CHART II workstations are not shown in the figure.
Figure 4-24. SOC CHART II Equipment Topology
CSC/PBFI Advantage: The recommended hardware configuration is fully compatible with MNMS efforts
As the system matures, archival information is expected to become very large (terabyte range). Hence, the database server must scale easily. Preliminary analyses also indicate the SOC database server will have to support a moderate transaction rate and certain middleware applications. The decision support characteristics of CHART II also strongly suggest the database server must be highly reliable and highly available.
Having a high availability server doesn't mean the applications it supports are fault tolerant. A fault tolerant server is specially designed to provide uninterrupted application service even after catastrophic system or environmental failures, as in life support, aircraft navigation, or telephone switching systems. The downsides of fault tolerance, though, are limited scalability and added complexity. Further, fault tolerance often means proprietary technology that does not comply with generally accepted industry standards. High availability, on the other hand, implies a much higher degree of service than is normally expected of a single system, or even of systems with mirrored disks. High availability usually means full redundancy, with recovery from a failure taking only seconds or minutes.
The CSC/PBFI Team recommends that the CHART II database server be implemented as a high availability cluster. Clustered servers are uniquely suited to providing highly available services. They feature redundant paths between all systems, between all disk subsystems, and to all external networks. No single point of failure—hardware, software, or network—can bring a cluster down. Fully integrated fault management software in the cluster will detect failures and manage the recovery process without operator intervention, allowing failed components to be replaced on-line without impacting availability.
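The automatic recovery behavior described above can be illustrated with a toy heartbeat monitor: a standby node is promoted after a threshold of consecutive missed heartbeats. This is a sketch of the general failover idea, not Sun Cluster fault management code.

```python
class Node:
    """A cluster node tracked by the failover monitor."""
    def __init__(self, name: str):
        self.name = name
        self.missed_heartbeats = 0

class FailoverMonitor:
    THRESHOLD = 3  # consecutive misses before declaring failure

    def __init__(self, primary: Node, standby: Node):
        self.active, self.standby = primary, standby

    def heartbeat(self, ok: bool) -> None:
        """Record one heartbeat interval; fail over past the threshold."""
        if ok:
            self.active.missed_heartbeats = 0
            return
        self.active.missed_heartbeats += 1
        if self.active.missed_heartbeats >= self.THRESHOLD:
            # Automatic recovery: promote the standby, no operator needed.
            self.active, self.standby = self.standby, self.active
            self.active.missed_heartbeats = 0

monitor = FailoverMonitor(Node("db-a"), Node("db-b"))
for ok in (True, False, False, False):  # three consecutive misses
    monitor.heartbeat(ok)
assert monitor.active.name == "db-b"    # standby has taken over
```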
Assuming a high availability cluster will be used, our system engineers evaluated available cluster server offerings. Hewlett Packard, Compaq, Dell, and Digital offer a cluster solution based on MS Windows NT. But industry engineering literature strongly indicates that MS Windows NT, unlike UNIX, does not scale well—meaning database expansion would be difficult—nor is it ready for "prime time" clustering.
CSC/PBFI engineers then evaluated UNIX based cluster products. There were several to choose from but our engineers concluded a Sun Enterprise Cluster product would be attractive because MDSHA already uses Sun products. Sun offers two clustered server models to meet the needs of applications with differing priorities for high availability, performance, and scalability: the Ultra Enterprise Cluster Parallel Database (PDB) server and the Ultra Enterprise Cluster HA 2.1 server.
The PDB server is intended to optimize database performance and scalability using parallelism. The HA 2.1 server is targeted at environments requiring highly available data, file, and application services with assured rapid detection of, and recovery from, hardware, network, operating system, or application software failures. Of the two, the HA 2.1 cluster server is the better approach for the CHART II database server.
Of the Sun HA 2.1 cluster server offerings, the CSC/PBFI Team recommends the Sun Ultra Enterprise 3500. Using the Sun StorEdge™ A5000 Fibre-Channel Array product, this cluster can scale to greater than two terabytes of external storage.
The SOC application server will also support critical CHART II client-server applications and probably should be highly available—perhaps even fault tolerant—if a centralized processing model is employed. However, CHART II operational applications (as opposed to engineering and planning applications) are distributed, as are the real time sensor data. Thus, relatively high availability is built into our traffic management processing architecture, and a cluster approach for the SOC application server seems unnecessary. Our engineers do, however, believe it necessary to provide relatively fast, automated recovery and significant RAM to support SOC client-server applications. To support these characteristics, the CSC/PBFI Team recommends two Compaq Proliant 3000R servers configured for mutual backup.
One of the CSC/PBFI Team's GUI features is the display of streaming video on CHART II workstations. There are at least two approaches to accomplishing this. One approach is to install an NTSC video card in each workstation and separately connect each workstation to an NTSC video distribution source. This would be neither efficient nor cost effective. The second approach is to provide the streaming video across the LAN. Several vendors, including Microsoft, have developed network-based streaming video capabilities. The CSC/PBFI Team recommends exploiting these new COTS applications. Because of the potentially large number of users at the MDSHA Hanover, MD, facility, the CSC/PBFI Team recommends installation of a stand-alone streaming video server—again, a Compaq Proliant 3000R.
The MdTA AOC performs two CHART II functions. First, it is an independent ITS traffic management center similar to, for example, the MCTMC. Second, because of its proximity to the SOC and similar operation, it has been chosen as the SOC back-up and disaster recovery site. The equipment and wide bandwidth WAN access to be provided via the CHART Network Deployment will meet many of the SOC disaster recovery requirements.
The database and application services necessary to implement a SOC back-up operation are shown in Figure 4-25. Dual monitors for the CHART II workstations are not shown in the figure.
Figure 4-25. MdTA AOC Equipment Topology
CSC/PBFI Advantage: Equipment at this site will allow it to serve as backup and an alternative operations center in the event that the SOC ceases to function
Back-up and disaster recovery for the SOC need not mean replication of all SOC equipment. After all, CHART II traffic management applications—and real time sensor data—are distributed throughout the CHART II network. What is needed is a much smaller core of database and application services.
The CSC/PBFI Team recommends use of a Sun Ultra Enterprise 450 stand-alone server with RAID level 5 storage for disaster recovery of the database and a Compaq Proliant 3000R application server. Because there will be fewer local distribution points, the application server will also provide video server functions.