Direct Access NFS
Describes the Data Fabric direct-access file system.
The Data Fabric direct-access file system enables real-time read/write data flows using the Network File System (NFS) protocol. Standard applications and tools can access the file system storage layer directly over NFS. Legacy systems can access cluster data, and traditional file I/O operations work as they would on a conventional UNIX file system. A remote client can easily mount a Data Fabric cluster over NFS to move data to and from the cluster. Application servers can write log files and other data directly to the Data Fabric cluster storage layer instead of caching the data on external direct-attached or network-attached storage.
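For example, here is a minimal sketch of an application appending log records straight to the cluster through ordinary POSIX file I/O. It assumes the cluster is already mounted over NFS; the path /mapr/my.cluster.com/apps/logs is a hypothetical location you would replace with your own.

    import os
    from datetime import datetime, timezone

    # Hypothetical NFS mount path on a Data Fabric cluster.
    LOG_DIR = "/mapr/my.cluster.com/apps/logs"

    def append_log(message: str) -> None:
        os.makedirs(LOG_DIR, exist_ok=True)
        line = f"{datetime.now(timezone.utc).isoformat()} {message}\n"
        # Ordinary file I/O; the NFS layer carries the write to the cluster.
        with open(os.path.join(LOG_DIR, "app.log"), "a") as f:
            f.write(line)

    append_log("service started")

No cluster-specific client library is involved: from the application's point of view, the cluster is just a directory.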
You can mount a Data Fabric cluster directly through NFS from a Linux or Mac client. When you mount a Data Fabric cluster, applications can read and write data directly in the cluster with standard tools, applications, and scripts. Data Fabric enables direct file modification and multiple concurrent reads and writes with POSIX semantics. For example, you can run a MapReduce application that outputs to a CSV file, and then import the CSV file directly into a SQL database over NFS.
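A hedged sketch of that last step, assuming the job wrote a two-column CSV to a hypothetical path on the mounted cluster, with SQLite standing in for whatever SQL database you use:

    import csv
    import sqlite3

    # Hypothetical MapReduce output, read over the NFS mount.
    CSV_PATH = "/mapr/my.cluster.com/output/results.csv"

    conn = sqlite3.connect("results.db")
    conn.execute("CREATE TABLE IF NOT EXISTS results (key TEXT, value TEXT)")
    with open(CSV_PATH, newline="") as f:
        # Assumes each row has exactly two columns (key, value).
        rows = [(row[0], row[1]) for row in csv.reader(f)]
    conn.executemany("INSERT INTO results VALUES (?, ?)", rows)
    conn.commit()
    conn.close()

Because the cluster is mounted as a regular file system, the import tool needs no Hadoop awareness at all.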
Data Fabric exports each cluster as the directory /mapr/<cluster name>. If you create a mount point with the local path /mapr, Hadoop FS paths and NFS paths to the cluster are the same, which makes it easy to work on the same files through NFS and Hadoop. In a multi-cluster setting, the clusters share a single namespace; you can see them all by mounting the top-level /mapr directory.
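As an illustration of the shared namespace, here is a small sketch, assuming the top-level export is mounted at /mapr (the file path in the final comment is hypothetical):

    import os

    # Assumed local mount point of the top-level export.
    MOUNT_ROOT = "/mapr"

    # Each cluster in the namespace appears as a directory under /mapr.
    for name in sorted(os.listdir(MOUNT_ROOT)):
        path = os.path.join(MOUNT_ROOT, name)
        if os.path.isdir(path):
            print("cluster:", name, "->", path)

    # With /mapr mounted locally, the same path string, for example
    # /mapr/my.cluster.com/user/alice/data.txt (hypothetical), can be
    # passed both to local tools over NFS and to hadoop fs commands.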