Your /common
directory can be accessed via the $COMMON
environment variable, i.e. cd $COMMON.
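For example, to confirm where the variable points before changing into it (the path shown here is only illustrative – yours will contain your own group and username):

$ echo $COMMON
/common/<group>/<username>
$ cd $COMMON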
What is /common? /common is a network-attached file system, so limit the number of files per directory (1 million files in a directory is a very bad idea).
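If you are unsure whether a directory already holds too many files, a quick count of its immediate entries is an easy check (my_dataset is a hypothetical directory name):

$ find $COMMON/my_dataset -maxdepth 1 -type f | wc -l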
To use /common for a job, you will need to add a line to your submission script!
To gain access to the /common path on worker nodes for a given job, the job must be submitted with the following SLURM directive:
#SBATCH --licenses=common
If a job lacks the above SLURM directive, /common will not be accessible from the worker nodes. (Briefly, this construct allows us to quickly do maintenance on a single cluster without having to unmount $COMMON from all HCC resources.)
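For context, a minimal submission script using the directive might look like the following sketch; the job name, walltime, and memory values are placeholders, and only the --licenses=common line is specific to /common:

#!/bin/bash
#SBATCH --job-name=common-demo
#SBATCH --time=00:10:00
#SBATCH --mem=1gb
#SBATCH --licenses=common

# /common is mounted on the worker node thanks to the directive above,
# so the job can list and read files from it
cd $COMMON
ls -lh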
Should I write job output to /common? Use /work for that – /common should mostly be used to read largely static files or data.
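As a rough sketch of that division of labor, a job (submitted with --licenses=common) might read static reference data from /common and write its results under /work; the file names and the analysis command below are hypothetical, and $WORK is assumed to point at your /work directory:

# Read-only reference data lives on /common
INPUT=$COMMON/reference/genome.fa
# Job output goes to /work, not /common
OUTDIR=$WORK/results/$SLURM_JOB_ID
mkdir -p "$OUTDIR"
my_analysis --input "$INPUT" --output "$OUTDIR/out.txt"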
Note that /common is available on machines with different CPU architectures, different network connections, and so on. Caveat emptor! If you stick with software provided through the module command, things should be just fine!
The /common file system has the capability to compress files so that they consume less space on the underlying disk storage. Tools like du will report the true amount of space consumed by files by default. If the files have been compressed before being stored to disk, the report will appear smaller than what may be expected. Passing the --apparent-size argument to du will cause it to report the uncompressed size of the files instead.
$ pwd
/common/demo/demo01
$ python3 -c 'import sys; sys.stdout.write("Hello World!\n" * 2**20)' > hello_world.txt
$ ls -lh hello_world.txt
-rw-r--r-- 1 demo01 demo 13M Mar 7 12:55 hello_world.txt
$ du -sh hello_world.txt
2.0K hello_world.txt
$ du -sh --apparent-size hello_world.txt
13M hello_world.txt