We have received several reports of codes that previously ran correctly now producing wrong results or segmentation faults.
We have traced this to a bug in an older version of the Intel compiler, which surfaced after the base OS library (glibc) update performed during the downtime of September 28th.
There are two solutions:
- Recompile the code with the Intel 2017.4 stack, which has been installed since the end of May and is the current default. It may be worth waiting for the announcement of the intel/2018.0 stack, which we hope to have installed within a few days, in case your code has incompatibilities with the 2017.4 version.
- If recompiling is not practical, or while you wait for the 2018 compiler, set the environment
variable LD_BIND_NOW=1. This changes how routines in dynamic libraries are loaded: all symbols are resolved when the program starts rather than lazily at first use.
Setting this variable in your shell with "setenv LD_BIND_NOW 1" (tcsh) or "export LD_BIND_NOW=1" (bash) only works for serial or thread-parallel programs.
To run MPI parallel programs, pass the variable through the mpirun command, e.g. for MPICH variants (MPICH, MVAPICH2, Intel MPI) as "mpirun -np $SLURM_NTASKS -genv LD_BIND_NOW 1 ..." or for OpenMPI as "mpirun -np $SLURM_NTASKS -x LD_BIND_NOW=1 ...".
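For batch jobs, the simplest approach is to export the variable near the top of the job script, before the launcher line. A minimal sketch (the binary name ./my_app and the task count are placeholders, not a real application):

```shell
#!/bin/bash
#SBATCH --ntasks=4            # example task count only

# Force eager (load-time) binding of dynamic-library symbols
# instead of the default lazy binding:
export LD_BIND_NOW=1

# MPICH-family launchers (MPICH, MVAPICH2, Intel MPI) forward
# the variable to all ranks with -genv:
mpirun -np $SLURM_NTASKS -genv LD_BIND_NOW 1 ./my_app

# OpenMPI uses -x to export an environment variable to the ranks:
# mpirun -np $SLURM_NTASKS -x LD_BIND_NOW=1 ./my_app
```

Note that exporting the variable in the script alone is not always enough for MPI jobs, since some launchers do not propagate the full environment to remote ranks; the -genv or -x flag makes the propagation explicit.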
More details on this issue are here:
If you have any questions, please let us know: firstname.lastname@example.org.