Dear Learner,

We hope you are doing well.

Yes, we can increase or decrease the block size. To change the default block size for the whole cluster, add the dfs.blocksize property to the hdfs-site.xml file and restart the daemons; files written after the restart will use the new block size (blocks of existing files are not changed).

<property>
  <name>dfs.blocksize</name>
  <value>67108864</value> <!-- changing to 64 MB -->
</property>
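Note that dfs.blocksize is specified in bytes. As a quick sanity check, you can compute the value for 64 MB with plain shell arithmetic:

```shell
# 64 MB expressed in bytes, the value used for dfs.blocksize above
echo $((64 * 1024 * 1024))   # prints 67108864
```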

We can also set the block size for a specific file while writing it to HDFS.

For example: the default block size of the cluster is 128 MB, but if you want file input1.txt to be stored with a 64 MB block size, run the commands below.

hdfs dfs -D dfs.blocksize=67108864 -put input1.txt /input1.txt
hadoop fs -stat %o /input1.txt   # prints the file's block size in bytes: 67108864

Feel free to contact us if you have any queries.