Next: Bibliography Up: File Systems: State of Previous: File Systems: State of


The ever-increasing gap between processor and memory speeds on one side and disk speeds on the other has exposed the I/O subsystem as a bottleneck for applications with intensive I/O requirements. Consequently, file systems, as the low-level managers of storage resources, have to offer flexible and efficient services in order to allow high utilization of the disks.

Local file systems manage the storage devices attached to a particular computer. They organize the disk into linear, non-overlapping files. The file system also provides a logical name space, typically represented as a tree. As local file systems are often the building blocks of distributed and parallel file systems, this chapter will give a short introduction to them.
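The tree-structured name space mentioned above can be sketched as follows. This is a hypothetical illustration, not the on-disk layout of any real file system: directories map component names to inodes, and path resolution walks the tree one component at a time.

```python
class Inode:
    """Toy inode: a directory holds a name -> Inode table, a regular
    file holds its data as one linear, non-overlapping byte string."""
    def __init__(self, is_dir=False, data=b""):
        self.is_dir = is_dir
        self.data = data
        self.entries = {} if is_dir else None   # directories only

def lookup(root, path):
    """Resolve an absolute path such as '/home/notes' to an inode."""
    node = root
    for name in filter(None, path.split("/")):
        if not node.is_dir:
            raise NotADirectoryError(name)
        node = node.entries[name]               # KeyError: not found
    return node
```

Real file systems refine the same walk with caching of directory entries, since path resolution sits on the critical path of nearly every file operation.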

Distributed file systems store files on one or several machines and provide users with coherent, shared access to the same set of files. Two basic architectures have emerged: client-server [SG85,BR98] and server-less [AD95,FO95,TM97]. A file server is typically a process that offers a file service to a collection of clients. In a server-less file system, all machines are equal peers that cooperate to provide applications with a file service. The paper will compare and contrast these architectures and show the impact each of them has on performance and scalability.
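The structural difference between the two architectures can be sketched in a few lines. This is an illustrative in-process model, not any real protocol; the hash-based placement in the server-less variant is an assumption made for the sketch:

```python
class FileServer:
    """Client-server design: one server process owns every file."""
    def __init__(self):
        self.files = {}
    def write(self, path, data):
        self.files[path] = data
    def read(self, path):
        return self.files[path]

class Peer:
    """Server-less design: equal peers, each storing a share of the
    files; ownership is decided here by hashing the path over the
    peer ring (an assumption of this sketch)."""
    def __init__(self, ring):
        self.ring = ring        # shared list of all peers, including self
        self.files = {}
    def _owner(self, path):
        return self.ring[hash(path) % len(self.ring)]
    def write(self, path, data):
        self._owner(path).files[path] = data    # forward to owning peer
    def read(self, path):
        return self._owner(path).files[path]
```

The sketch already hints at the trade-off discussed later: the single server is a simple point of coherence but also a bottleneck, while the peers spread both data and load across all machines.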

Efficient and simultaneous access to network-attached storage devices is usually the target of shared file systems [OK98]. For instance, Fibre Channel offers an open standard for network-attached storage interfaces. We will show how shared network storage influences the architecture of file systems.

High-performance distributed computing has lately extended from tightly-connected supercomputers and clusters to computational grids composed of heterogeneous systems spread over large geographical areas. Grid file systems [OK01] manage resources spread over several administrative domains. They have to deal with unpredictable performance variations and with a changing system architecture.

Another important trend in high-performance computing is the increasing use of mobile devices. Mobile devices have to cope with changing network characteristics, ranging from high connectivity to no connectivity at all. For instance, the file system has to be able to adapt to variable bandwidths [BR98]. Data from disconnected computers has to be consistently reintegrated into the system after reconnection.
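Reintegration after reconnection can be sketched as the replay of an operation log, with version numbers to detect conflicting updates. This is an illustrative sketch in the spirit of optimistic-replication systems, not the actual protocol of any particular file system:

```python
def reintegrate(server, oplog):
    """Replay a disconnected client's update log at the server.

    server maps path -> (data, version); each log entry carries the
    version the client last saw, so a version that moved on at the
    server signals a concurrent update, i.e. a conflict."""
    conflicts = []
    for path, new_data, base_version in oplog:
        _, cur_version = server.get(path, (None, 0))
        if cur_version != base_version:
            conflicts.append(path)           # concurrent update: needs a merge
        else:
            server[path] = (new_data, cur_version + 1)
    return conflicts
```

Clean updates are applied silently; conflicting ones are set aside for manual or application-specific resolution, which matches the requirement that reintegration be consistent.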

For scientific parallel computing, distributed file systems have to specialize in order to expose the parallelism of the I/O subsystem to the applications. Several studies of the parallel I/O access patterns of scientific applications [NK+96,SR97,SR98] have shown that a lack of parallelism inside the I/O subsystem drastically affects performance. Therefore, parallel file systems [DC92,CF96,CL00,HE95,IT01] allow parallel access to files. Files are striped over several independent devices, so that parallel accesses to a file may translate into requests served by disks operating in parallel.
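The striping described above is simple arithmetic. Assuming round-robin placement of fixed-size stripe units (the most common scheme, though particular systems may differ), a file byte offset maps to a disk and a local offset as follows:

```python
def locate(offset, stripe_unit, num_disks):
    """Map a file byte offset to (disk index, byte offset on that disk)
    under round-robin striping with fixed-size stripe units."""
    unit = offset // stripe_unit          # which stripe unit of the file
    disk = unit % num_disks               # units are dealt out round-robin
    local_unit = unit // num_disks        # position of the unit on its disk
    return disk, local_unit * stripe_unit + offset % stripe_unit
```

A large sequential request thus spans consecutive stripe units on all disks, which is exactly how a single parallel file access turns into requests served by disks operating in parallel.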

For efficient sharing of files, distributed and parallel file systems must allow user nodes to cache file blocks locally. Consequently, cache consistency has to be addressed. UNIX semantics of file sharing, which guarantee that each file modification is instantly visible to all processes, may be very costly in a distributed environment. Therefore, we will show how distributed and parallel file systems use relaxed semantics in order to reduce the overhead of file sharing.
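One widely used relaxation is close-to-open (session) semantics, familiar from NFS-like systems: a client fetches the file at open, works on a cached copy, and writes back at close, so concurrent sessions do not see each other's updates until they reopen the file. A minimal sketch, with the server modeled as a plain dictionary:

```python
class Client:
    """Caching client with close-to-open semantics: updates become
    visible to others only after close, and are picked up at open."""
    def __init__(self, server):
        self.server = server    # shared dict: path -> data
        self.cache = {}
    def open(self, path):
        self.cache[path] = self.server.get(path, b"")   # fetch at open
    def read(self, path):
        return self.cache[path]
    def write(self, path, data):
        self.cache[path] = data                         # local until close
    def close(self, path):
        self.server[path] = self.cache.pop(path)        # write back
```

Compared to strict UNIX semantics, this trades instant visibility for far fewer client-server interactions, which is the overhead reduction the text refers to.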

Another important issue is how file systems manage file metadata. In this respect, an increasing number of file systems use the journaling technique [BE00,PB00,SG00]. Journaling is typically implemented by using a log that records all metadata operations. The log and the data have to be updated on the disks in such a way that recoverability is guaranteed in case of failure.
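The ordering constraint at the heart of journaling can be sketched as a write-ahead log: the intent is forced to disk before the metadata itself is touched, so a crash between the two steps can be redone from the log. The operation format below is invented for the sketch and does not follow any particular file system's journal layout:

```python
import json, os

def apply_op(metadata, op):
    """Apply one metadata operation (idempotent, so redoing is safe)."""
    if op["kind"] == "create":
        metadata[op["name"]] = {"size": 0}
    elif op["kind"] == "delete":
        metadata.pop(op["name"], None)

def journaled_op(log_path, metadata, op):
    with open(log_path, "a") as log:
        log.write(json.dumps(op) + "\n")    # 1. record the intent...
        log.flush()
        os.fsync(log.fileno())              # 2. ...and force it to disk
    apply_op(metadata, op)                  # 3. only then update the metadata

def recover(log_path, metadata):
    """After a crash, redo every logged operation from the start."""
    with open(log_path) as log:
        for line in log:
            apply_op(metadata, json.loads(line))
```

Because the log hits the disk first, recovery never finds a metadata update whose description is missing from the journal; real systems add checkpointing so the log can be truncated rather than replayed from the beginning.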

In the remainder of the paper we will present an overview of different local, distributed and parallel file systems, as outlined in this section. We will mainly use the following basic criteria for characterization:

  1. File distribution: Local or Distributed File System.
  2. Network connectivity: Tightly-connected, loosely-connected or disconnected networks.
  3. Storage attachment: Network-attached or computer-attached storage.

  4. Resource heterogeneity: Does the file system assume that resources (disks, networks) are heterogeneous or homogeneous?

  5. Parallel file access: Whether parallel file access may translate into parallel disk accesses, and whether this behavior is controllable by applications or is system-dependent.

  6. Semantics of file sharing: Unix semantics or other relaxed semantics.

  7. Physical file placement control: Is the file placement controllable by applications or fixed by the system?

  8. Caching/Prefetching: The kind of caching and prefetching used by the system.

  9. Scalability: We will analyze if the system is scalable in terms of both data and metadata.

  10. Metadata management: Journaled or non-journaled file systems.

Massimo Coppola 2002-02-08