CRAB is used to create and submit CMSSW jobs, distributing them to computing centers all over the world. The reference TWiki pages are:

The status of the submitted jobs can be monitored at:

The status of Asynchronous Stage Out (ASO) can be monitored at:

To find the site names for whitelisting or blacklisting, check the CMS CRIC web portal. If you already know which dataset you want to analyze, you can also use DAS to find the sites that host it. Note that a user can only access datasets hosted on Tier2 sites, not Tier1 sites. For quicker DAS queries, use the command-line tool dasgoclient.
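As a sketch, the site lookup and the white/blacklisting fit together as follows. This is a CRAB configuration fragment, not a complete config; the dataset and site names are placeholders, not recommendations:

```python
# First find which sites host the dataset, e.g. from the shell:
#   dasgoclient --query="site dataset=/SingleMuon/Run2016G-03Feb2017-v1/MINIAOD"
# Then, in the CRAB configuration, restrict jobs to (Tier2) sites that host it:
config.Site.whitelist = ['T2_CH_CERN', 'T2_DE_DESY']
# ...or instead exclude specific problematic sites:
# config.Site.blacklist = ['T2_US_Vanderbilt']
```

Whitelist and blacklist are mutually exclusive strategies in practice: prefer the whitelist when the dataset lives at only a few sites, and the blacklist when you merely need to avoid a site that is down.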

CRAB3 output files are stored in a directory with the structure outLFNDirBase/inputPrimaryDataset/outputDatasetTag/timestamp/. For MC event generation, the structure is instead outLFNDirBase/outputPrimaryDataset/outputDatasetTag/timestamp/, where outputPrimaryDataset is specified by the user.
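The path layout above can be illustrated with a small helper. This is only a sketch of how the pieces combine; the function name and the example values (user directory, dataset, tag, timestamp) are hypothetical:

```python
def output_dir(out_lfn_dir_base, primary_dataset, output_dataset_tag, timestamp):
    """Build a CRAB3-style output directory path.

    For analysis jobs primary_dataset is the input dataset's primary-dataset
    name; for MC event generation it is the user-chosen outputPrimaryDataset.
    """
    return "/".join([out_lfn_dir_base.rstrip("/"), primary_dataset,
                     output_dataset_tag, timestamp]) + "/"

# Example with placeholder values:
print(output_dir("/store/user/jdoe", "SingleMuon", "MyAnalysis_v1", "250101_120000"))
# /store/user/jdoe/SingleMuon/MyAnalysis_v1/250101_120000/
```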

The source code for the CRAB server, the client, and other related services can be found at:

If you have questions about CRAB, you can ask on the HyperNews forum: