Network Management

Network Management Poses Endless Challenges

by Willie Schatz

If network managers are in accord about anything, it's that they have a lot more tasks to do than resources to handle them.

The fundamental roles of a network administrator are to provide network connections for computer equipment and to ensure availability and performance of network communications.

But that's only the beginning. The administrator must set up and manage hardware and software solutions, enabling servers, clients, printers and other peripherals to communicate. He or she also is responsible for providing users the highest quality server functionality, which means uninterrupted, optimum network availability and performance.

This same individual also must plan ahead so that any changes required in the network conform to changes in the larger enterprise system.

"People really think network management is easier than it really is," says Gerald Murphy, director of network management for RPM Consulting, Columbia, Md. The enterprise management development company, which has 180 employees, was purchased Aug. 5 by Computer Horizons Corp., a global IT services company based in Mountain Lakes, N.J.

"[People] think once they install this tool, all the network problems will magically go away. They won't, of course," says Murphy. He uses InCharge software from System Management ARTS (SMARTS), White Plains, N.Y., to recommend networking policies and procedures that will stop trouble before it starts for RPM's customers.

As Murphy says, managers rarely get the right number of people to do the job. And as the network grows, the amount of time needed for each individual task increases.

"If the network grows sixfold in two years, is the manager going to get that many people? Of course not," he says. "Instead, top management wants to know if [the manager] can cut his staff in half."

Making the Case

Network managers wishing to stay sane must show the bean counters the money, says Murphy. A successful manager convinces the chief information officer at the government agency or company that proper network management means a positive return on investment, not a negative item on the cost side.

The more people who understand how essential a functioning mission-critical network is to the organization's success, Murphy says, the easier it is for the manager to make the case for spending a little money now to save a lot later.

"The real issue for us isn't technology," says Jack Brown, the chief engineer and chief of transmittal and control for the Internal Revenue Service's Washington office. "It has nothing to do with the products. It's the business processes." Brown's domain includes 17 help desks, about 8,000 servers and 113,000 desktops.

But help is on the way.

As part of the IRS' three-year Operational Infrastructural Management automation program, Brown's domain will shrink to 3,000 servers and 88,000 desktops. And even the most recalcitrant users are getting the message that it's not nice to fool with the network.

"Everybody's usually reluctant to change," Brown says. "But more people are accepting the business value of the network. Our performance monitoring objectives are actually being met."

What's more, Brown says, the agency's TME 10 NetView products from Tivoli Systems, Austin, Texas, put him at ease about establishing service-level agreements and deploying automated management tools, such as Cisco's Netsys Service Level Management Suite, which measure network service delivery.

Perhaps more importantly, Brown says, everyone is beginning to respect the network as a key to achieving the agency's goal of providing the highest quality service to the public.

IRS' burgeoning love affair with the network has not removed the quartet of major barriers that Brown and his staff continually confront: the unexplained unavailability of customer baseline services (such as a printer going down); the inability of top managers to obtain timely, usable information; the persistent lack of resources; and the network's unyielding intrinsic complexity. On top of these issues is the problem of finding the most transparent method of traversing the heterogeneous products and different networks required to deliver the ultimate product: service that even an angry taxpayer would love.

Critical Needs

Brown's thesis is supported by the Meta Group, a Stamford, Conn., research company, which cites three critical user needs. They are:

  • to demonstrate service delivery across the entire IT infrastructure;
  • to handle quality-of-service metrics;
  • to relate infrastructure to business value.

To be successful, Meta Group says, products must address those needs by adding four domains of coverage: networks, systems, applications and the business in general. Vendors' packages must then correlate information within and across those four broad domains.

"The job becomes much easier when you can justify the return on investment," says Brown. "Then the funding for projects like our drag-and-drop and point-and-click automated distribution of software in the national transmittal center is easy to come by."

If you ask Brown, integrating the technology is easier than dealing with the money issue. "The network is like the plumbing. When you invest in the infrastructure, the water will be there when you need it," he says.

But a seemingly tiny leak in the infrastructure quickly can become a major hole in the dam. According to International Data Corp., a market research firm in Framingham, Mass., the downtime costs for a typical midsize business average $78,000 per hour. These sites typically lose more than $1 million annually because of such downtime.

Even if the network and its server are available 96 percent of the time, a relatively small, 300-user network still would be hit with $840,000 per year in lost productivity and revenue, according to IDC. In other time-sensitive industries, such as finance and banking, downtime can cost more than $1 million per hour.
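IDC's figures can be sanity-checked with a little arithmetic: 4 percent downtime works out to about 350 hours a year. The sketch below is purely illustrative; the $2,400-per-hour rate is an assumption chosen to reproduce the $840,000 annual figure for a 300-user network, not a number taken from the IDC report.

```python
# Rough downtime-cost arithmetic based on the IDC figures above.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_downtime_cost(availability, cost_per_hour):
    """Annual cost of outages, given availability (0-1) and an hourly cost."""
    downtime_hours = HOURS_PER_YEAR * (1 - availability)
    return downtime_hours * cost_per_hour

# At 96 percent availability, roughly 350 hours of downtime a year.
# A rate near $2,400/hour (a hypothetical figure, back-solved from
# IDC's $840,000 estimate for a 300-user network) gives:
print(round(annual_downtime_cost(0.96, 2400)))  # → 840960
```

The same function shows why the finance and banking numbers explode: at $1 million per hour, even 99.9 percent availability still leaves roughly $8.8 million a year on the table.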

Martin Frederickson, director of government sales in Tivoli's Tysons Corner, Va., office, feels inside jobs cause much of the damage. He claims the most serious security threat to his company and his customers is an untrained administrator.

Everyone knows the type: It's the person who accidentally takes down the e-mail system and then, because the network management product is not robust enough to answer any of the Five Ws (who, what, when, where and why), the emergency technicians don't arrive until it's much too late.

Stopping the Bleeding

Today, there is money to be made in stopping the bleeding, and no shortage of vendors promulgating their tourniquets as the tightest on the planet.

Concord Communications Inc., Marlboro, Mass., which claims it invented the network reporting and analysis market in 1992, touts its Network Health product as the only performance and analysis solution covering virtually every crucial area of the network.

And there is independent corroboration that the company isn't blowing smoke. Meta Group says Concord has approximately 30 percent of the performance analysis/reporting market, which it predicts will swell to $700 million in 2000 from $280 million in 1998. Concord also was named leader of that pack by IDC in a report, "Service-Level Monitoring: The Network Reporting and Analysis Market 1997-2002."

Not to be outdone, Netcom Systems of Chatsworth, Calif., claims it has "developed and defined the new network performance analysis market." Using the company's ATM SmartCards, network managers "for the first time ever can measure real performance of their ATM networks," according to company literature. Its other claim to fame is the "first ever multilayer, multiprotocol performance analysis system for Fast Ethernet networks and network devices."

Is that message very much different from Newton, Mass.-based Heroix Corp.'s description of RoboMon? That company says only RoboMon "embodies the expertise and thought process of a seasoned system administrator, so you can trust it to solve problems the same way you would if you had the time."

And how about Loran Technologies Inc. of Vienna, Va., which last June claimed its Kinnetics Network Manager was the first network management appliance that easily finds broken gear "faster and cheaper than all other enterprise management platforms"?

Don't forget SMARTS, which says its InCharge tool "uniquely identifies the root cause of network, system management and application problems in seconds and determines their impact on business processes."

Whole Lotta Hype

No question, there is some awfully thick verbiage out there. But John Virden, Loran's executive vice president, thinks the name game is the name of the game.

"Unfortunately, in this marketplace everyone must use the same words," he says. "But what do they really mean? Everyone says they've got an autodiscovery engine. But I dig deeper than anyone else. And if you don't discover everything, are you really managing the network properly?"

Virden, whose company gets 75 percent of its business from the government, says: "Everyone says they've got topology mapping. But they only give you a logical map. I give you a physical map and do it automatically. That's much better."

According to SPEX, a Reston, Va., enterprise software evaluation firm, there's too much hype in the air. The company writes in its Network and Systems Management software product evaluation kit, published June 30: "Along with hype from leading vendors touting the integration of software for managing the network, there are a great number of best-of-breed tools that offer significant capabilities. Faced with this rapidly changing market, users need to put more emphasis on determining enterprise-level requirements and on creating a workable plan for managing both pieces of this large and important area."

While many customers are still searching for a magic formula, what they need to do is take a serious look at all the issues. Determining their objectives and needs and establishing criteria go a long way toward defining other pieces that should fill in the playing board for execution of a coherent network and systems strategy, the report says.

This can be done by taking a close look at performance issues (e.g., brownouts) as well as availability issues (e.g., blackouts), and putting in place goals for measuring and improving network availability and performance, according to the SPEX report.

Users must keep in mind the overall objective, SPEX says. And that is "to use technological innovations for integration, focusing throughout the enterprise on a single source of supply where possible, to minimize connection and implementation problems, and in all cases to simplify the administrator's tasks."

Talk about mission impossible.

"As a network manager, you have to know everything," says Pierre Fortin, network manager at Canada's Department of National Defence in Ottawa.

When Fortin started his job in 1989, he was supervising a Sperry 5000-based system with 20 users on dumb terminals. When he wanted to know what was happening, he looked at his database. Now he is trying to watch 1,400 users in seven buildings who are kicking the tires of a Cisco Systems 5500 switch with a 3.5 gigabit backbone.

"The most challenging part is getting inside users' heads. Half the people still think there's magic happening. The other half realize that it's very serious work to maintain the network and always have it online," Fortin says. "You try to translate what they want, and many times you don't know. But you've still got to give them what they want and the performance they expect. So you just try and match the halves."

Given the value of performance analysis/reporting, Meta Group says, network managers must realize fault management responsibilities leave them almost no time to implement a solution.

But even when they find the time, many of the products that claim to provide SLA management capabilities don't come close. Such products and performance-reporting tools typically focus exclusively on tracking and reporting.

This ignores three definable pieces of network management practice: service-level generation, diagnosis and recovery, according to Meta Group.

Service-level generation involves automatically identifying which metrics should be monitored and which values, levels and thresholds are significant for these. Some tools do the former (e.g., reporting tools with autodiscovery and autopolling); fewer do the latter.

Diagnosis involves automatically establishing the cause or probable cause when a specific service level has been violated.

Recovery involves automatically correcting the cause of a service-level violation. This capability hinges on diagnosis capabilities coming far enough along to ensure the suspected cause is the actual cause.
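As a rough illustration of Meta Group's three pieces, the sketch below strings them into one loop. None of this reflects any vendor's actual API; every function name, metric name and threshold rule here is a hypothetical stand-in.

```python
# Illustrative sketch of the three pieces of SLA management described above:
# service-level generation, diagnosis and recovery. All names are hypothetical.

def generate_service_levels(samples):
    """Service-level generation: derive a significant threshold from
    observed metrics. Here, a toy rule: flag latency above 2x the average."""
    avg = sum(samples) / len(samples)
    return {"latency_ms": avg * 2}

def diagnose(metric, value, threshold):
    """Diagnosis: name a probable cause when a service level is violated."""
    if metric == "latency_ms" and value > threshold:
        return "probable cause: congested link"
    return None

def recover(cause, actions):
    """Recovery: map the suspected cause to a corrective action,
    falling back to a human when the cause is unrecognized."""
    return actions.get(cause, "escalate to operator")

samples = [40, 55, 45, 60]                    # observed latency samples, ms
levels = generate_service_levels(samples)     # threshold: 100 ms
cause = diagnose("latency_ms", 130, levels["latency_ms"])
action = recover(cause, {"probable cause: congested link": "reroute traffic"})
print(levels["latency_ms"], cause, action)
```

The fallback in `recover` is the point Meta Group makes: automated recovery is only safe once diagnosis is reliable enough that the suspected cause is the actual cause.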

Seismic Changes

Networking management vendors, only too aware of the recent seismic upheavals in the market, have struggled mightily to overcome those limitations.

And now that many tools equal each other in functionality, reliability, availability and serviceability, the decision to buy or not to buy becomes the same as in any other business.

"Our biggest obstacle is the price point," says Steve Powers, director of systems and network services for Intergraph Computer Systems, Huntsville, Ala.

Powers says his company sees two major trends: the broadening scope of network management, which means needing more tools to do the job, and government agencies becoming more dependent on outsourcing because they can't match private-sector salaries for skilled workers.

These are making it harder to convince potential customers to build rather than buy, he says.

"Most agencies still see network management as a cost, particularly if you're talking the latest and greatest equipment," says Dale Brown, executive manager for systems and network services at Intergraph Computer Systems.

"Boxes per se are much cheaper, but network management systems still can be pretty high priced," Brown says.

As a result, management pushes back because it focuses on the sheer cost of extra people. Industry is not doing a good job explaining the value proposition of those additional on-site people.

That's bad for business, because the networking management business appears closer to boom than bust.

"Budgets are increasing moderately, because corporations and agencies know they have problems but don't understand them or don't have the confidence about how to attack them," says Gary Van Dyke, president of J.G. Van Dyke & Associates, Bethesda, Md., a security consulting firm with numerous government clients.

"Part of the problem is that the current crop of vendor products really falls short on addressing essential issues, such as trying to combine increased computing power with distributed architecture, and educating users that information security is an intrinsic part of how a network operates," Van Dyke says.

Vendors are slowly beginning to add those functionalities, but many government agency managers are still operating in a reactive mode because "they really don't have a clue about how that particular product works."

So how will those users get the drift? When all sides in the network management equation realize it's time to move forward to a new space where the territorial imperatives of legacy systems yield to the greatest good for the greatest number.

Converging Camps

"I really think the separate camps are combining into one," says James Massa, the director of Cisco Systems' federal operations in Herndon, Va. "Addressing the four areas of legacy data, contemporary data, video and voice isn't that easy for network management."

And every organization has a different idea of who should be involved in networking decisions, he says.

For example, the Federal Emergency Management Agency has much different internal priorities than the Social Security Administration.

"As technology marches forward, network management will become less labor intensive, because machines will replace people," Massa says.

"It's not going to happen immediately, though. And that still won't reduce the demand for capable network managers who know there's a big difference between buying a device and managing the solution," he says.

And the few who possess that knowledge are finding it hard to stay on top of their game.

"I remember when I thought I knew everything," says Ken Leoni, the director of business development for Heroix. "Forget that. The new technologies just keep coming and coming."
