Friday, January 4, 2013

Hadoop. Using HDFS.

One of Hadoop's core concepts is a single, consistent file system shared by all cluster nodes. The Hadoop Distributed File System (HDFS) provides a set of Unix-like commands for working with it. You can read the full syntax by calling hadoop dfs -help; I attached part of it below.

-fs [local | <file system URI>]: Specify the file system to use.

-ls <path>: List the contents that match the specified file pattern.

-lsr <path>: Recursively list the contents that match the specified file pattern.

-mv <src> <dst>: Move files that match the specified file pattern <src> to a destination <dst>.

-cp <src> <dst>: Copy files that match the file pattern <src> to a destination.

-rm [-skipTrash] <src>: Delete all files that match the specified file pattern. Equivalent to the Unix command "rm <src>"

-rmr [-skipTrash] <src>: Remove all directories which match the specified file pattern. Equivalent to the Unix command "rm -rf <src>"

-put <localsrc> ... <dst>: Copy files from the local file system into fs.

-copyFromLocal <localsrc> ... <dst>: Identical to the -put command.

-moveFromLocal <localsrc> ... <dst>: Same as -put, except that the source is deleted after it's copied.

-get [-ignoreCrc] [-crc] <src> <localdst>: Copy files that match the file pattern <src> to the local name. <src> is kept.

-cat <src>: Fetch all files that match the file pattern <src> and display their content on stdout.

-copyToLocal [-ignoreCrc] [-crc] <src> <localdst>: Identical to the -get command.

-mkdir <path>: Create a directory in specified location.

-tail [-f] <file>: Show the last 1KB of the file. The -f option shows appended data as the file grows.
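To tie the commands above together, here is a minimal example session: copy a local file into HDFS, inspect it, and pull it back out. It assumes a running HDFS cluster; the directory /user/demo and the file name report.txt are made-up names for illustration.

```shell
# Create a working directory in HDFS (hypothetical path)
hadoop dfs -mkdir /user/demo

# Copy a local file into HDFS; -put leaves the local copy in place
hadoop dfs -put report.txt /user/demo/

# List the directory and print the uploaded file to stdout
hadoop dfs -ls /user/demo
hadoop dfs -cat /user/demo/report.txt

# Copy the file back to the local file system under a new name
hadoop dfs -get /user/demo/report.txt report_copy.txt

# Clean up: -rmr removes the directory recursively (like rm -rf)
hadoop dfs -rmr /user/demo
```

Note that -put and -get do not delete their source, so after this session both report.txt and report_copy.txt exist locally until you remove them.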
