Accessing CERN EOS from Liverpool HEP
Files stored on the CERN EOS service can be accessed directly from Liverpool HEP CentOS 7 systems in two main ways: via the EOS client or via XROOTD.
The EOS client is recommended for light access, eg listing files or copying small amounts of data. XROOTD access is recommended for heavy data access, eg batch analysis jobs or bulk data copying.
Accessing via EOS Client
Using the local EOS client requires a valid CERN kerberos token. You can obtain one with the following command,
and show any currently valid tokens with:
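A minimal sketch using the standard MIT Kerberos tools, with a hypothetical CERN username jbloggs (substitute your own; the exact local setup may differ):

```shell
# Obtain a CERN kerberos token (you will be prompted for your CERN password)
kinit jbloggs@CERN.CH

# Show any currently valid tokens
klist
```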
By default your token is stored in $HOME/.globus/ so that it is available on all HEP systems; you shouldn't need to obtain a token on each system.
Once you have a token, you can use the eos command to access files, eg:
- export EOS_MGM_URL=root://eosuser.cern.ch
- eos ls /eos/user/j/jbloggs/
- eos cp myfile.txt /eos/user/j/jbloggs/
Using system /eos mounts
EOS is mounted locally as a FUSE filesystem, giving standard POSIX file access. By default the CERN EOS areas are available. They are auto-mounted when accessed.
HEP CentOS 7 systems should have the EOS areas for experiments available under /eos. To access these you just need a valid CERN kerberos token.
The EOS user and project areas are also available under /eos/user/INITIAL/CERNUSERNAME, eg /eos/user/j/jbloggs.
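The user-area path follows a fixed convention: first initial, then username. A small sketch (hypothetical username jbloggs) showing how the path is composed:

```shell
# Compose an EOS user-area path from a CERN username:
# /eos/user/<first initial>/<username>
USER_NAME=jbloggs
EOS_HOME="/eos/user/${USER_NAME:0:1}/${USER_NAME}"
echo "$EOS_HOME"
```

Once the path resolves through the FUSE mount, ordinary POSIX tools (ls, cp, cat) work on it directly.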
Using user-configured eos mounts
Specific experiment EOS services (including any EOS services at sites other than CERN) can be mounted directly on a local directory. A valid kerberos token is recommended for full access.
Configure the service details with eg
Then mount the service on a local directory, eg $HEPTMP/eos:
eos fuse mount $HEPTMP/eos
The EOS area for that experiment will now be available under $HEPTMP/eos. When you have finished accessing files, unmount the area with
eos fuse umount $HEPTMP/eos
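A full session might look like the sketch below. The EOS_MGM_URL export mirrors the EOS client example above, and eoslhcb.cern.ch stands in for whichever service you need; this is an illustrative assumption, not a tested recipe:

```shell
# Point the eos client at the experiment's EOS service
export EOS_MGM_URL=root://eoslhcb.cern.ch

# Create a local mount point and mount the service on it
mkdir -p $HEPTMP/eos
eos fuse mount $HEPTMP/eos

# Work with the files through the mount
ls $HEPTMP/eos

# Unmount when finished
eos fuse umount $HEPTMP/eos
```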
This shouldn't be necessary for most CERN-based EOS services as they're already available under /eos. Let helpdesk know if you find any EOS areas you can't access.
Accessing via XROOTD
Files on CERN EOS can be accessed without any local clients or mounts by using the XROOTD protocol directly. ROOT supports this protocol natively, so data can be read straight from EOS without first copying files locally.
Access requires a valid VOMS proxy, which can be generated with voms-proxy-init --voms EXPERIMENTNAME, eg:
- voms-proxy-init --voms atlas
A typical XROOTD URL for a file on eg the ATLAS EOS service looks like root://eosatlas.cern.ch//eos/atlas/atlascerngroupdisk/somedir/somefile.root. This URL can be used directly in ROOT.
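A sketch of opening such a URL in ROOT from the command line, assuming ROOT is installed and a valid VOMS proxy exists (the path is the same illustrative one used in the xrdcp example below):

```shell
# Open a file on EOS directly in ROOT via XROOTD, no local copy needed,
# and list its contents
root -l -e 'TFile *f = TFile::Open("root://eosatlas.cern.ch//eos/atlas/atlascerngroupdisk/somedir/somefile.root"); f->ls();'
```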
This is the recommended method for batch jobs or interactive ROOT sessions doing heavy data analysis, as access should be more reliable than via the EOS client.
Files can also be copied locally with the xrdcp tool, eg:
xrdcp root://eosatlas.cern.ch//eos/atlas/atlascerngroupdisk/somedir/somefile.root /my/local/dir/somefile.root
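An XROOTD URL is simply the redirector prefix joined to the absolute EOS path, which is why a double slash appears before /eos. A sketch of the composition, using the same illustrative server and path as above:

```shell
# Redirector prefix plus absolute EOS path; the resulting double slash
# before /eos is intentional and required by the URL scheme.
SERVER=root://eosatlas.cern.ch
FILE=/eos/atlas/atlascerngroupdisk/somedir/somefile.root
URL="${SERVER}/${FILE}"
echo "$URL"
```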
Other experiments or areas have their own EOS services (eg eoslhcb.cern.ch for LHCb or eosuser.cern.ch for the user areas).
Data can be accessed either via the EOS FUSE filesystem under /eos or via direct XROOTD. Which gives the best throughput, or whether either beats copying the files locally first, is hard to generalise: it depends greatly on the size of the dataset, how it is being analysed, how well optimised the data files are for remote access, and how many processes will be accessing the data simultaneously.
As a general rule, for quick test runs or access to a small dataset, direct access through /eos or XROOTD should be sufficient. If you are accessing large (100 GB to multi-TB) datasets, it may be better to copy them locally and run from local storage. If you experience slow access, find one method significantly faster than the other, or aren't sure which method to use, please contact helpdesk.