Lumerical Computational Solutions

Lumerical Computational Solutions is a suite of software tools for the design of photonic components, circuits, and systems. CHPC hosts an installation of the full suite under a University site license that was purchased collectively by several research groups. If your group is not part of this collective, please contact us so that we can put you in touch with the maintainers of the license and work out license sharing.

Lumerical Launcher

The Launcher is a program that provides a common interface for launching all of the Lumerical tools.

  • Version: 2022R1.2
  • Machine: all clusters
  • Location: /uufs/chpc.utah.edu/sys/installdir/lumerical/2022R1.2

To run the Launcher:

module load lumerical
launcher

From the Launcher GUI one can open any of the tools that Lumerical provides.

FDTD Solutions

FDTD Solutions is a parallel 3D Finite Difference Time Domain (FDTD) Maxwell solver. It includes both a GUI-based interface and a parallel runtime. Commonly, a user designs the system to be simulated in the GUI, saves the input file, and then runs the FDTD solver as a parallel batch job on the cluster.

To run the FDTD Solutions Graphical Interface:

module load lumerical
vglrun -c proxy fdtd-solutions

Make sure to connect to CHPC systems with FastX in order to be able to launch the GUI. Once the input file is generated, modify the script found in /uufs/chpc.utah.edu/sys/installdir/lumerical/scripts to run the simulation.
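
For example, the provided script can be copied into your own working directory before editing (here ~/fdtd-run is a placeholder directory):

cp /uufs/chpc.utah.edu/sys/installdir/lumerical/scripts/* ~/fdtd-run/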

A few notes on this script:

  • The FDTD solver comes with different executables for different MPI distributions. We use a build with Intel MPI, which uses the fast InfiniBand network on the CHPC clusters and thus runs optimally.
  • The script runs a sample input file; for your own work do not copy the nanowire.fsp file, but use your own input instead.
  • Choose an appropriate number of parallel tasks. Do not use too many tasks (>20) for small systems, because parallel scaling suffers when each task has too little work. Try running the same simulation with several task counts (e.g. 12, 24, and 48 tasks on the 12-core lonepeak nodes) to see which task count gives the best performance.
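
For reference, a SLURM batch script along these lines can be used to submit the solver; this is a minimal sketch, and the task count, time, partition, account, and input file name are all placeholders. The engine executable name (here fdtd-engine-impi-lcl, the Intel MPI build) should be taken from the CHPC-provided script:

#!/bin/bash
#SBATCH --ntasks=24              # adjust after testing parallel scaling
#SBATCH --time=02:00:00
#SBATCH --partition=lonepeak     # placeholder: your cluster partition
#SBATCH --account=my-account     # placeholder: your CHPC account

module load lumerical

# run the Intel MPI build of the FDTD engine on your own input file
mpirun -np $SLURM_NTASKS fdtd-engine-impi-lcl my_design.fsp

Submit the script with sbatch, then rerun it with different --ntasks values to find the best-performing task count.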

INTERCONNECT

INTERCONNECT is a photonic integrated circuit design and analysis environment. As it is a GUI application, run it through FastX. Longer simulations should be run either on the Frisco interactive nodes or in an interactive batch job (see the sketch after the commands below).

To run INTERCONNECT:

module load lumerical
interconnect
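
For an interactive batch job, a request along the following lines can be used; this is a minimal sketch assuming SLURM's salloc, with placeholder time, partition, and account values:

salloc --ntasks=1 --time=2:00:00 --partition=lonepeak --account=my-account
module load lumerical
interconnect

Run the module load and interconnect commands in the shell that salloc opens, inside a FastX session so that the GUI can display.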


Last Updated: 7/5/23