Lumerical Computational Solutions

Lumerical Computational Solutions is a suite of software tools for the design of photonic components, circuits, and systems. CHPC hosts a University site license that was purchased collectively by several research groups. If your group is not part of this collective, please contact us about usage requirements.

CHPC offers installations of two Lumerical software tools, FDTD Solutions and INTERCONNECT. If you need to use other tools on CHPC systems, please contact us.

FDTD Solutions

FDTD Solutions is a parallel 3D finite-difference time-domain (FDTD) Maxwell solver. It provides both a GUI-based interface and a parallel runtime. Commonly, a user designs the system to simulate in the GUI, saves the input file, and then runs the FDTD solver as a parallel batch job on the cluster.

  • Version: 8.17
  • Machine: all clusters
  • Location:  /uufs/chpc.utah.edu/sys/installdir/lumerical/fdtd

To run the FDTD Solutions Graphical Interface:

module load lumerical
fdtd-solutions

Make sure to use FastX2 to connect to CHPC systems so that you can launch the GUI. Once the input file is generated, modify this sample SLURM script to run the simulation.
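A minimal sketch of such a script, assuming the lumerical module puts the Intel MPI solver engine on your PATH and that the engine executable is named fdtd-engine-impi-lcl (verify the name under the installation directory above); the account and partition names are placeholders:

#!/bin/bash
#SBATCH --time=1:00:00
#SBATCH --ntasks=24
#SBATCH --account=your-account     # placeholder - substitute your allocation
#SBATCH --partition=ember          # placeholder - substitute your cluster partition

module load lumerical

# nanowire.fsp is the sample input file; substitute your own .fsp file here
mpirun -np $SLURM_NTASKS fdtd-engine-impi-lcl nanowire.fsp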

A few notes on this script:

  • The FDTD solver ships with separate executables for different MPI distributions. We use the Intel MPI build, which takes advantage of the fast InfiniBand network on CHPC clusters and therefore runs optimally.
  • The script runs a sample input file; for your own simulation, do not copy the nanowire.fsp file, but use your own input instead.
  • Choose an appropriate number of parallel tasks. Do not use too many tasks (>20) for small systems, as parallel scaling suffers when each task has too little work. Try running the same simulation with varying task counts (e.g. 12, 24, and 48 tasks on Ember's 12-core nodes) to see which count gives the best performance; the loop below sketches one way to do this.
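One way to run such a scaling test, assuming the batch script above is saved as fdtd.slurm (a hypothetical file name); the --ntasks option on the sbatch command line overrides the value set inside the script:

# submit the same simulation with 12, 24, and 48 tasks and compare run times
for n in 12 24 48; do
    sbatch --ntasks=$n fdtd.slurm
done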

INTERCONNECT

INTERCONNECT is a photonic integrated circuit design and analysis environment. As it is a GUI environment, run it through FastX2. Longer simulations should be run either on the Frisco interactive nodes or through interactive batch (see the sketch after the launch commands below).

  • Version: 8.17
  • Machine: all clusters
  • Location:  /uufs/chpc.utah.edu/sys/installdir/lumerical/interconnect

To run INTERCONNECT:

module load interconnect
interconnect
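For longer simulations, a minimal sketch of an interactive batch session; the account and partition names are placeholders:

# request an interactive shell on a compute node, then launch as above
srun --time=2:00:00 --ntasks=1 --account=your-account --partition=ember --pty /bin/bash -l
module load interconnect
interconnect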

Last Updated: 6/22/17