HDFS is the main hub of the Hadoop ecosystem, responsible for storing large data sets, both structured and unstructured, across various nodes, and it maintains the metadata in the form of log files. To work with such a system, we need to be well versed in, or at least aware of, the common commands and processes that ease our tasks. To that end, we have consolidated some of the most commonly used HDFS commands that one should know to work with HDFS. To begin with, we need to complete the checklist below.

1. Install Hadoop.
2. Run Hadoop -- we can use the `start-all.cmd` command (on Windows) or start the daemons directly from the Hadoop directory.
3. Verify the Hadoop services -- we can check whether Hadoop is up and running using the command below:

`jps`

Great..!!! Now we are ready to execute and learn the commands.

**Note:** These commands are case-sensitive. Do take special care with capital and small letters while writing the commands.

1. version -- this command is used to know the version of Hadoop installed on the machine.
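The checklist above can be sketched as a small shell script. This is a minimal sketch for a Linux/macOS setup, where `start-all.sh` is the counterpart of the Windows `start-all.cmd`; it assumes Hadoop's `bin` and `sbin` directories are on the `PATH`, and it guards each step so it degrades gracefully when Hadoop is not installed.

```shell
# Sketch of the pre-flight checks; assumes Hadoop's bin/sbin are on PATH.
if command -v hadoop >/dev/null 2>&1; then
    hadoop_status="installed"
    hadoop version       # step 1 check: prints the installed Hadoop version
    # step 2: start all daemons (start-all.sh on Linux/macOS,
    # start-all.cmd on Windows; newer releases prefer
    # start-dfs.sh followed by start-yarn.sh)
    # start-all.sh
    # step 3: jps lists running JVM processes; look for NameNode,
    # DataNode, SecondaryNameNode, ResourceManager, NodeManager
    # jps
else
    hadoop_status="missing"
    echo "hadoop not found on PATH -- install Hadoop first (step 1)"
fi
```

The daemon-starting and `jps` lines are left commented so the script can be run safely as a dry check; uncomment them once Hadoop is installed and configured.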