Bulk of network data languishes in storage
Integrators can save dollars by putting old data on lower-cost storage area networks.
Statistically speaking, most data on enterprise networks rarely gets accessed after it is written to network storage, according to researchers from NetApp Inc. and the University of California. Evidently, we are too busy writing new data to go back over old data.
Andrew Leung, a computer science researcher at the University of California, presented the findings at the June USENIX conference in Boston. Given those results, organizations might want to consider moving much of their data to slower but less expensive storage units since it rarely gets accessed, he said.
The team studied the traffic that flowed through NetApp's enterprise file servers, which manage more than 22T of material relating to all aspects of the company's business operations.
Leung said the study is the first large-scale examination of network traffic patterns. "How people have been deploying network file systems has been changing over the past five to 10 years," he said. "They are being used more commonly for different kinds of things. So what we would like to know is how this affects the workloads of the network."
During the three-month period that the network was under scrutiny, more than 90 percent of the material on the servers was never accessed. The researchers captured packets encoded using the Common Internet File System protocol, which Microsoft Windows applications use to save data via a network. About 1.5T of data was transferred.
"Compared to the full amount of allocated storage on the file servers, this represents only 10 percent of data," Leung said. "[This] means that 90 percent of the data is untouched during this three-month period."
Moreover, among the files that were opened, 65 percent were opened only once. Most of the rest were opened five or fewer times, though about a dozen files were opened 100,000 times or more.
"What this suggests, in general, is that files are infrequently re-accessed," Leung said.
The team also observed that the ratio of data read from storage to data written to storage had shifted from what previous studies had seen. They found roughly two bytes read for every byte written. "Past studies saw read-to-write ratios of 4-1 or higher," Leung said.
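As a back-of-the-envelope illustration of the ratio Leung describes, the figure falls directly out of the traffic counters. The byte counts below are made up for illustration, not numbers from the study:

```python
# Hypothetical traffic totals chosen to illustrate a 2:1 read-to-write ratio.
bytes_read = 1_000_000_000      # total bytes served by read requests
bytes_written = 500_000_000     # total bytes received in write requests

ratio = bytes_read / bytes_written
print(f"read-to-write ratio: {ratio:.1f}:1")  # prints "read-to-write ratio: 2.0:1"
```

The earlier studies Leung cites would have produced a value of 4.0 or higher from the same calculation.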
Developers of file systems might want to take into consideration the fact that their creations are spending almost as much time writing data as reading data. "The workloads are becoming more write-oriented, so the decrease in read-only traffic and the increase in write traffic suggests that file systems want to be more write-oriented," Leung said.
File-server vendors also might want to consider re-jiggering their pre-fetching and caching algorithms to improve performance, given those findings. "If we know that files aren't frequently re-accessed, what this suggests is that [caching] algorithms may not be the best for network file systems" because the material cached will probably not get retrieved, he said.
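Leung's point about caching can be seen with a toy model. The sketch below is a minimal least-recently-used (LRU) cache run against a made-up access trace shaped like the study's findings (most files opened once, a few reopened repeatedly); the file names and counts are invented for illustration:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache keyed by file name (illustrative sketch only)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()
        self.hits = 0
        self.misses = 0

    def access(self, name):
        if name in self.entries:
            self.entries.move_to_end(name)  # mark as most recently used
            self.hits += 1
        else:
            self.misses += 1
            self.entries[name] = True
            if len(self.entries) > self.capacity:
                self.entries.popitem(last=False)  # evict least recently used

# Hypothetical trace: 100 files each opened once, two "hot" files reopened often.
trace = [f"file{i}" for i in range(100)] + ["hot1", "hot2"] * 10
cache = LRUCache(capacity=50)
for name in trace:
    cache.access(name)
hit_rate = cache.hits / (cache.hits + cache.misses)
print(f"hit rate: {hit_rate:.0%}")  # prints "hit rate: 15%"
```

Because the bulk of the trace is never reopened, most cache slots hold files that will never be requested again, which is the mismatch Leung suggests network file systems should account for.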
Speaking to Government Computer News after the presentation, Leung described the 10 percent of data that was being re-accessed. Typically, it is in the file format most closely associated with the user's job. Architects might use computer-aided design files, while developers use source-code files. Also, files that are higher up in a file path or closer to the user's home directory tend to be accessed more often than those buried deeper down in a hierarchy of subfolders.
More than 75 percent of the files being opened were very small, less than 20K each, although another 12 percent were more than 5G each.
USENIX, the Advanced Computing Systems Association, lets technicians, scientists, systems administrators and engineers share information on developments in the field of computer science.