FDTD Solutions 6.0.4 license crack

2D and 3D simulation capabilities
Nonuniform mesh and automesh algorithms
Simulation convergence autoshutoff
Parallel / cluster computation
Movie (.mpg) generation of simulation dynamics
Scripting language to customize simulation and analysis
Lorentz, Drude, Debye and anisotropic materials
Data import/export with BRO's ASAP ray-tracing package
Data export to Matlab or ASCII file formats
Structure import from GDSII files
Extensive online help

Boundary conditions:
absorbing (PML), periodic, Bloch, symmetric, asymmetric, and metal boundaries
Simulation objects:
primitives that can be rotated and placed in three dimensions, and structure definition from imported SEM/image files; primitives include triangles, rectangular blocks, cylinders, conic surfaces, polygons, rings, user-defined (parametric) surfaces, spheres, and pyramids
Radiation sources:
waveguide sources, dipoles, plane waves, focused beams and diffraction-limited spots, total-field scattered-field (TFSF) sources, and source import/export from/to BRO's ASAP ray-tracing program
Measurement monitors:
refractive index monitors, time- and frequency-domain monitors to measure pulsed or continuous-wave (CW) field profiles and power flow, and movie monitors to generate .mpg movies of field dynamics

Parallel/clustered performance
The parallel option of FDTD Solutions allows large-scale, rapid simulation of optical components by distributing the computational load and memory requirements across multiple nodes. Benchmark data shows that parallel FDTD Solutions delivers significant speedups on multi-core and multiprocessor computing systems: access to an 8-computer cluster lets you simulate a structure roughly 8 times larger, or run the original simulation approximately 6 times faster.

The data further shows that these advantages - the ability to run much larger simulations, or to run simulations much more quickly - extend to much larger computer clusters, as demonstrated by tests run on 128 processors.
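The quoted figure of roughly a 6x speedup on 8 nodes is consistent with Amdahl's law for a workload that is largely, but not entirely, parallelizable. A minimal sketch (the parallel fraction below is inferred from those numbers for illustration, not taken from the vendor's benchmark data):

```python
def amdahl_speedup(parallel_fraction, nodes):
    """Predicted speedup for a workload whose parallelizable share
    is `parallel_fraction`, distributed across `nodes` processors."""
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / nodes)

# Solving amdahl_speedup(f, 8) == 6 gives f = 20/21 (~95% parallel):
f = 20 / 21
print(round(amdahl_speedup(f, 8), 1))    # prints 6.0
print(round(amdahl_speedup(f, 128), 1))  # prints 18.2 -- diminishing returns
```

The diminishing returns at high node counts are why large clusters are often used to run much larger simulations (which scale memory nearly linearly with nodes) rather than purely to accelerate a fixed-size problem.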
