JetFile: Implementation and Measurements

Björn Grönvall, Assar Westerlund, and Stephen Pink
Swedish Institute of Computer Science
{bg, assar, steve}@sics.se

JetFile is a scalable distributed file system that uses IP multicast for its distribution and synchronization mechanisms. We have implemented parts of the JetFile design (1) and have measured its performance over a local area network. Our measurements indicate that JetFile's performance on a standard benchmark is close to that of the local disk.

The system is designed so that clients assume some of the traditional server responsibilities; we therefore refer to clients as file managers. Managers store the files they create or use on their local disks. When another manager wishes to access a file, the file is replicated at that manager, which then also acts as a server for this particular file. Multicast communication is used to locate and retrieve files: a request for a file is sent to a well-known IP multicast group address and, in response, the file is sent from one of the replication sites to the same multicast address. Note that this scheme obviates the need to immediately write the file to a server. Managers can instead retrieve files from other managers without first going to a server, as they would in traditional systems.
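
To make the lookup step concrete, the C sketch below sends a file request to a multicast group and waits for a reply on the same group. The group address, port number, and message format are invented for illustration and are not JetFile's actual wire protocol.

    /* Sketch of the multicast lookup step.  The group address, port and
     * message layout are illustrative only, not JetFile's protocol. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    #define JETFILE_GROUP "239.0.0.1"   /* hypothetical well-known group */
    #define JETFILE_PORT  4711          /* hypothetical port number      */

    int main(void)
    {
        int s = socket(AF_INET, SOCK_DGRAM, 0);
        int reuse = 1;
        struct sockaddr_in group, local;
        struct ip_mreq mreq;
        char request[] = "FETCH /home/bg/paper.tex";   /* made-up message */
        char reply[8192];

        /* Bind to the well-known port so multicast replies reach us. */
        setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &reuse, sizeof(reuse));
        memset(&local, 0, sizeof(local));
        local.sin_family = AF_INET;
        local.sin_addr.s_addr = htonl(INADDR_ANY);
        local.sin_port = htons(JETFILE_PORT);
        bind(s, (struct sockaddr *)&local, sizeof(local));

        /* Join the group so that the reply, which is also multicast,
         * reaches us (and every other manager caching this file). */
        mreq.imr_multiaddr.s_addr = inet_addr(JETFILE_GROUP);
        mreq.imr_interface.s_addr = htonl(INADDR_ANY);
        setsockopt(s, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq));

        /* Ask "who has this file?" -- any replication site may answer. */
        memset(&group, 0, sizeof(group));
        group.sin_family = AF_INET;
        group.sin_addr.s_addr = inet_addr(JETFILE_GROUP);
        group.sin_port = htons(JETFILE_PORT);
        sendto(s, request, sizeof(request), 0,
               (struct sockaddr *)&group, sizeof(group));

        /* The answering manager multicasts the file to the same group. */
        ssize_t n = recvfrom(s, reply, sizeof(reply), 0, NULL, NULL);
        if (n > 0)
            printf("received %zd bytes of file data\n", n);

        close(s);
        return 0;
    }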

File version numbers are used to avoid update conflicts and to serialize updates. To update a file, a manager must first request a new version number, which is created by the versioning server. Once a manager has acquired a new version number, it may update the file until the file is replicated elsewhere; the new version number thus acts as an update token for the file. Request and response messages for version numbers are multicast so that they both act as call-back breaks and inform other managers of the new version number.
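
The sketch below illustrates the token logic in C: when the versioning server's multicast reply arrives, the requesting manager records the new version number as its update token, while every other manager treats the same message as a call-back break. The cache entry structure and handler are hypothetical; they only mirror the behaviour described above.

    /* Sketch of version numbers as update tokens.  The structures are
     * made up for illustration. */
    #include <stdbool.h>
    #include <stdio.h>

    struct cache_entry {
        char     name[64];
        unsigned version;        /* version of the locally cached copy      */
        bool     valid;          /* call-back still intact?                 */
        bool     update_token;   /* may we write? (we hold the new version) */
    };

    /* Every manager sees the multicast reply from the versioning server. */
    void on_version_reply(struct cache_entry *e, unsigned new_version,
                          bool we_requested_it)
    {
        if (we_requested_it) {
            /* We now own `new_version' and may update the file until it
             * is replicated elsewhere. */
            e->version = new_version;
            e->update_token = true;
        } else if (e->valid && e->version < new_version) {
            /* Someone else is writing: treat the message as a call-back
             * break and remember the new current version number. */
            e->valid = false;
            e->version = new_version;
        }
    }

    int main(void)
    {
        struct cache_entry a = { "paper.tex", 3, true, false };  /* writer */
        struct cache_entry b = { "paper.tex", 3, true, false };  /* reader */

        /* Versioning server multicasts "paper.tex is now version 4". */
        on_version_reply(&a, 4, true);
        on_version_reply(&b, 4, false);

        printf("a: version %u, token %d; b: version %u, valid %d\n",
               a.version, a.update_token, b.version, b.valid);
        return 0;
    }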

In our implementation, the file manager is split into a kernel module and a user-level daemon, which communicate by sending events over a character device. The kernel module keeps a cache of vnodes and a cache of mappings from {directory, filename} to vnode. For data not present in these caches, the kernel module consults the daemon. The daemon implements all the protocol processing.
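
A minimal sketch of such a daemon's event loop is shown below. The device name /dev/jetfile and the event structure are hypothetical; the point is only to show a kernel module and a daemon exchanging events over a character device.

    /* Sketch of the user-level daemon's event loop.  Device name and
     * event format are hypothetical. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    struct jetfile_event {          /* made-up event layout */
        int  opcode;                /* e.g. lookup, fetch, store */
        char path[256];
    };

    int main(void)
    {
        int dev = open("/dev/jetfile", O_RDWR);   /* hypothetical device */
        struct jetfile_event ev;

        if (dev < 0) {
            perror("open /dev/jetfile");
            return 1;
        }
        /* The kernel module queues an event whenever a vnode or name
         * mapping is missing from its caches; the daemon resolves it
         * (protocol processing, multicast I/O) and writes the answer
         * back through the same device. */
        while (read(dev, &ev, sizeof(ev)) == sizeof(ev)) {
            /* ... resolve ev.path via the JetFile protocol ... */
            write(dev, &ev, sizeof(ev));          /* reply to the kernel */
        }
        close(dev);
        return 0;
    }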

The versioning server is implemented in user space and only needs to keep track of the current versions of files and to service requests for new version numbers.
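
A toy version of that state is sketched below: a table mapping file names to their current versions and a handler that grants the next version number. The data structures are illustrative; the real server also participates in the multicast protocol described above.

    /* Sketch of the versioning server's core state. */
    #include <stdio.h>
    #include <string.h>

    struct version_entry {
        char     name[64];
        unsigned current;
    };

    static struct version_entry table[1024];
    static int entries;

    /* Grant a new (current + 1) version number for `name'. */
    unsigned new_version(const char *name)
    {
        for (int i = 0; i < entries; i++)
            if (strcmp(table[i].name, name) == 0)
                return ++table[i].current;

        /* First time we hear of this file: it becomes version 1. */
        strncpy(table[entries].name, name, sizeof(table[entries].name) - 1);
        table[entries].current = 1;
        return table[entries++].current;
    }

    int main(void)
    {
        printf("%u\n", new_version("paper.tex"));   /* prints 1 */
        printf("%u\n", new_version("paper.tex"));   /* prints 2 */
        return 0;
    }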

File system     Warm cache           Cold cache
UFS             22.5s (0.1)   100%   N/A
JetFile         20.5s (0.3)    91%   23.2s (0.3)   103%
NFS (NetApp)    24.2s (0.5)   107%   24.8s (0.3)   110%
AFS             26.5s (0.2)   118%   28.4s (0.6)   126%
NFS             29.5s (0.1)   131%   30.2s (0.2)   134%

Tests were conducted over a 10 Mb/s Ethernet on HP 735/99 workstations. The figures are the total elapsed time in seconds of the Andrew benchmark (2), averaged over five runs, with standard deviations in parentheses; percentages are relative to the warm-cache UFS time. JetFile can create files with fewer synchronous meta-data updates and is thus faster than UFS in the MakeDir and CopyAll phases.

One machine was used as the benchmark machine; a second was used either as a JetFile manager housing the files (thus acting as a server) or as an AFS or NFS file server. A third machine was used as the versioning server.

We have demonstrated that it is possible to build distributed file systems with performance similar to that of a local disk.


NetApp: Specialized hardware device (Network Appliance F540) acting as an NFS server, included only for comparison.
  1. Grönvall, B., Marsh, I., and Pink, S. A Multicast-based Distributed File System for the Internet. Proceedings of the Seventh ACM SIGOPS European Workshop, September 1996.
  2. Howard, J.H., Kazar, M.L., Menees, S.G., Nichols, D.A., Satyanarayanan, M., Sidebotham, R.N., and West, M.J. Scale and Performance in a Distributed File System. ACM Transactions on Computer Systems, 6(1), February 1988.