chunk
Files in the file system are split into chunks (similar to Hadoop blocks) that are
256 MB by default. Any multiple of 65,536 bytes is a valid chunk size, but
tuning the size correctly is important. Files inherit the chunk size settings of the
directory that contains them, as do subdirectories on which chunk size has not been
explicitly set. Any files written by a Hadoop application, whether via the file APIs or
over NFS, use the chunk size specified by the settings for the directory where the file
is written.
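The sizing rule above can be sketched as a small Python check. This is an illustration only: the constant and function names are ours, not part of any file-system API, and the 256 MB default is expressed in bytes for comparison.

```python
CHUNK_MULTIPLE = 65_536                  # every valid chunk size is a multiple of 65,536 bytes
DEFAULT_CHUNK_SIZE = 256 * 1024 * 1024   # 256 MB default, in bytes

def is_valid_chunk_size(size_bytes: int) -> bool:
    """Return True if size_bytes is a legal chunk size:
    a positive multiple of 65,536 bytes."""
    return size_bytes > 0 and size_bytes % CHUNK_MULTIPLE == 0

# The 256 MB default is itself a multiple of 65,536 (4096 x 65,536 bytes),
# so it passes the check; an arbitrary value such as 100,000 does not.
```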