C++ enable_if via return type

I found SFINAE, or “Substitution Failure Is Not An Error”, quite fascinating. At first it looked kind of cryptic to me (what do all these “typename”s mean?), so I tended to avoid it. But when used right, std::enable_if (which leverages SFINAE) really helps simplify the code. So I started to depend on it.

Recently I wrote a function based on the example provided in the Cppreference article on std::void_t. Basically, I wanted to reset a variable that can be either a scalar type (int, float, etc.) or a container. If it is a container, I want to call Container::clear(); otherwise, I can simply set it to zero.

template <typename T, typename = void>
struct is_clearable : std::false_type {};

template <typename T>
struct is_clearable<T, std::void_t<decltype(std::declval<T>().clear())> > : std::true_type {};

template <typename T>
inline constexpr bool is_clearable_v = is_clearable<T>::value;

template <typename T>
std::enable_if_t<is_clearable_v<T> > reset(T* t) {  // #1
    t->clear();
}

template <typename T>
std::enable_if_t<!is_clearable_v<T> > reset(T* t) {  // #2
    *t = 0;
}

In my example, std::void_t is used to detect whether T has the member function clear(); is_clearable<T>::value yields true or false based on the result. is_clearable_v<T> is defined as a compile-time boolean constant equal to is_clearable<T>::value.

Then, the reset(T* t) function is defined separately for the two cases. The first version is enabled (via the return type) when T has the member function clear(); the second version is enabled when T does not.

It turns out to work as advertised for me. But to apply this enable_if idiom, one has to figure out what each of these (decltype, declval, constexpr, void_t, enable_if) is, and I think that’s not trivial without the help of some good examples.

Simple multiprocessing queue in Python

This is a very simple version of how to work with a multiprocessing queue that I wrote while learning. There are two multiprocessing Queues, task_queue and done_queue, that are used to submit and receive the tasks. Typically we would tell the Processes to start() and join(). But I use sentinels to mark the end of task_queue, so I do not have to call join(). For done_queue, I use the fact that I know the exact number of items to get(). Usually, if we know the exact number of items, it’s better to use a multiprocessing Pool. But I use queues since I’m interested in implementing the worker as an iterator (which does not assume the number of items).

import multiprocessing

class Sequence(object):
    """A simple sequence that iterates over files obtained from a queue."""

    SENTINEL = None

    def __init__(self, files):
        self._files = files

    def __iter__(self):
        while True:
            filename = self._next_file()
            if filename is None:
                return
            yield filename

    def _next_file(self):
        filename = self._files.get()
        if filename == self.SENTINEL:
            return None
        return filename

def worker(task_queue, done_queue):
    seq = Sequence(task_queue)
    for x in seq:
        done_queue.put(x)

# Main
if __name__ == '__main__':
    num_workers = 4
    num_entries = 1000

    task_queue = multiprocessing.Queue()
    done_queue = multiprocessing.Queue()

    for _ in range(num_workers):
        multiprocessing.Process(
            target=worker, args=(task_queue, done_queue)).start()

    # task_queue is supposed to take filenames, but for the purposes
    # of this exercise, it is easier to do integers
    for i in range(num_entries):
        task_queue.put(i)

    # Sentinels mark the end of the task queue, one per worker
    for _ in range(num_workers):
        task_queue.put(Sequence.SENTINEL)

    result = []
    for _ in range(num_entries):
        result.append(done_queue.get())

    print('Done: {0}/{1} entries'.format(len(result), num_entries))

    # Sanity check
    assert sum(result) == sum(range(num_entries))
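As mentioned above, when the exact number of items is known, a multiprocessing Pool is usually simpler. A minimal sketch of the same sanity check using Pool.map (the identity task function is hypothetical, standing in for real per-item work):

```python
import multiprocessing


def identity(x):
    # Hypothetical task: pass the item through unchanged, mirroring the
    # queue-based workers that put each task item on done_queue.
    return x


if __name__ == '__main__':
    num_entries = 1000
    # Pool.map blocks until all items are processed, so no sentinels
    # and no manual join() bookkeeping are needed.
    with multiprocessing.Pool(processes=4) as pool:
        result = pool.map(identity, range(num_entries))
    assert sum(result) == sum(range(num_entries))
```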

C++ stringizing and token-pasting

Macro expansion is an important thing to know when trying to do metaprogramming in C++. Specifically, the stringizing (#) and token-pasting (##) operators. They are also explained in this Cppreference article.

If an argument used with the stringizing or token-pasting operator is itself a macro, two levels of macro expansion are needed: the first level expands the argument, and the second applies the operator.

#define STRINGIFY_DETAIL(x) #x
#define STRINGIFY(x) STRINGIFY_DETAIL(x)

#define PASTER(x,y) x ## y
#define EVALUATOR(x,y) PASTER(x,y)

Check out this StackOverflow answer to understand how it works.

Useful paths in /cvmfs/cms.cern.ch

CernVM-FS (or CVMFS) is developed by CERN and used in various HEP experiments for software distribution. CMSSW, along with its dependencies, is distributed via CVMFS in the namespace /cvmfs/cms.cern.ch on the CMS Tier-1, 2, and 3 machines. There are a few special paths that are useful to know.

  • To source the environment setup script:
source /cvmfs/cms.cern.ch/cmsset_default.sh
  • To find the available $SCRAM_ARCH values:
ls -d /cvmfs/cms.cern.ch/slc*
  • To list the available CMSSW releases for a given $SCRAM_ARCH:
export SCRAM_ARCH=slc7_amd64_gcc900
scram list CMSSW
# or:
#   ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/cms/cmssw/
  • To set up a particular CMSSW release:
cmsrel CMSSW_11_3_0_pre6
cd CMSSW_11_3_0_pre6/src
# or:
#   scramv1 project CMSSW CMSSW_11_3_0_pre6
#   cd CMSSW_11_3_0_pre6/src
#   eval `scramv1 runtime -sh`
  • To find the source code for a particular CMSSW release:
ls $CMSSW_RELEASE_BASE/src
# or:
#   ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/cms/cmssw/CMSSW_11_3_0_pre6/src
  • To find the C++ header files from external libraries (e.g. GCC) used by a particular CMSSW release:
    • Identify the XML config file that belongs to the library under $CMSSW_RELEASE_BASE/config/toolbox/$SCRAM_ARCH/tools/selected/;
    • Figure out the path from the XML config file.
cat $CMSSW_RELEASE_BASE/config/toolbox/$SCRAM_ARCH/tools/selected/gcc-cxxcompiler.xml
# Found the path
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/gcc/9.3.0/
# Navigate `include`
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/gcc/9.3.0/include/
# Found the header files
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/gcc/9.3.0/include/c++/9.3.0/
  • To find the Python packages (e.g. NumPy) used by a particular CMSSW release:
    • Identify the XML config file that belongs to the library under $CMSSW_RELEASE_BASE/config/toolbox/$SCRAM_ARCH/tools/selected/;
    • Figure out the path from the XML config file.
cat $CMSSW_RELEASE_BASE/config/toolbox/$SCRAM_ARCH/tools/selected/py3-numpy.xml
# Found the path
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/py3-numpy/1.17.5-ljfedo2/
# Navigate `lib` or `lib64`
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/py3-numpy/1.17.5-ljfedo2/lib/
# Found the source files
ls /cvmfs/cms.cern.ch/slc7_amd64_gcc900/external/py3-numpy/1.17.5-ljfedo2/lib/python3.8/site-packages/numpy/

My TDRStyle Matplotlib stylesheet

I wanted to make plots that are visually similar to the so-called CMS TDRStyle, but using Matplotlib instead. So I created a custom stylesheet (best used with matplotlib>=3.0). Note that it is not meant to be 100% identical: I find that Matplotlib plots look better in certain aspects, and I’d rather not change those just for the sake of emulating the TDRStyle.

I named the stylesheet tdrstyle.mplstyle. To use it, simply drop it in the same directory as your plotting script/notebook, then apply it:

import matplotlib.pyplot as plt

# Use the stylesheet globally
plt.style.use('tdrstyle.mplstyle')

# Use the stylesheet locally
with plt.style.context('tdrstyle.mplstyle'):
    plt.plot([1, 2, 3])  # any plotting calls go here

The stylesheet (tdrstyle.mplstyle):

### Based on built-in stylesheets: ['seaborn-white', 'seaborn-paper']

# Seaborn common parameters
# .15 = dark_gray
# .8 = light_gray
figure.facecolor: white
text.color: .15
axes.labelcolor: .15
legend.frameon: False
legend.numpoints: 1
legend.scatterpoints: 1
#xtick.direction: out
#ytick.direction: out
xtick.color: .15
ytick.color: .15
#axes.axisbelow: True
#image.cmap: Greys
font.family: sans-serif
#font.sans-serif: Arial, Liberation Sans, DejaVu Sans, Bitstream Vera Sans, sans-serif
#grid.linestyle: -
lines.solid_capstyle: round

# Seaborn whitegrid parameters
axes.grid: True
axes.facecolor: white
#axes.edgecolor: .8
#axes.linewidth: 1
#grid.color: .8
#xtick.major.size: 0
#ytick.major.size: 0
#xtick.minor.size: 0
#ytick.minor.size: 0

# Seaborn paper context
#figure.figsize: 6.4, 4.4
#axes.labelsize: 8.8
#axes.titlesize: 9.6
#xtick.labelsize: 8
#ytick.labelsize: 8
#legend.fontsize: 8

grid.linewidth: 0.8
lines.linewidth: 1.4
patch.linewidth: 0.24
lines.markersize: 5.6
lines.markeredgewidth: 0

xtick.major.width: 0.8
ytick.major.width: 0.8
xtick.minor.width: 0.4
ytick.minor.width: 0.4

xtick.major.pad: 5.6
ytick.major.pad: 5.6

### Make my modifications
### See: https://matplotlib.org/users/customizing.html#the-matplotlibrc-file for all the configuration options

figure.figsize : 4.2, 4.2
figure.dpi : 150
savefig.dpi : 150
figure.titleweight : 500

image.cmap : viridis

### FONT
font.size : 11
font.sans-serif : Helvetica, Arial, Liberation Sans, DejaVu Sans, Bitstream Vera Sans, sans-serif

### AXES
axes.labelsize : medium
axes.titlesize : medium
axes.labelweight : 500
axes.axisbelow : False
axes.edgecolor: .15
axes.linewidth: 1.25
#axes.autolimit_mode : round_numbers
#axes.xmargin : 0
#axes.ymargin : 0

#lines.linewidth : 2
#lines.markersize : 10

xtick.direction : in
ytick.direction : in
xtick.labelsize : small
ytick.labelsize : small
xtick.major.size : 6.0
ytick.major.size : 6.0
xtick.minor.size : 3.0
ytick.minor.size : 3.0
xtick.minor.visible : True
ytick.minor.visible : True
xtick.bottom : True
xtick.top : True
ytick.left : True
ytick.right : True

legend.fontsize : medium
legend.title_fontsize : medium

### GRID
grid.color : .15
grid.linestyle : :
grid.alpha : 0.6

errorbar.capsize : 0