Assign Unique Node IDs

EnSight allows nodes and elements on different parts to have the same ID numbers. For example, it is perfectly fine to have a node #1 on every part, even though those nodes are not the same node. In most cases EnSight provides a way to distinguish between these nodes: if you query a node ID, for example, EnSight queries the node on the part(s) that you have selected.

However, there are a few somewhat uncommon situations in which unique node ID numbers are needed. Two examples are using a Periodic Matchfile (for periodic boundaries) and positioning the plane tool using 3 nodes.

This script will assign a unique node ID to every node in the geometry.

  • Run the script and select a .geo file
  • Requires the .geo file to be ascii (not binary)
  • Starts with node ID 1 and goes up from there
  • Ignores the current node IDs, accepts any type (none, assign, given)
  • Creates a new file with the old .geo name, and renames the original to “…_original.geo”
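Conceptually, the renumbering is just a running counter across parts: the old IDs are ignored and new IDs are handed out 1, 2, 3, … in the order the nodes are encountered. A minimal sketch of that core idea (the per-part node-count structure here is illustrative, not the actual .geo parsing):

```python
def assign_unique_ids(node_counts_per_part):
    """Hand out sequential node IDs across all parts, starting at 1.

    node_counts_per_part: e.g. [3, 2] for a 3-node part followed by a
    2-node part.  Returns one list of new IDs per part.
    """
    next_id = 1
    new_ids = []
    for count in node_counts_per_part:
        # each part gets the next contiguous block of IDs
        new_ids.append(list(range(next_id, next_id + count)))
        next_id += count
    return new_ids
```

The same counter simply continues from part to part, which is what guarantees uniqueness across the whole geometry.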

Possible enhancements (does not do this now):

  • Automatically convert all the .geo files in a transient simulation
  • Read and write binary .geo files
  • Change element IDs

Download the script.

CFx Particle Track Translator

Updated 23-Jan-2013

Users of CFx who utilize discrete particles in their domain had few options for visualizing and analyzing that data in EnSight. The current CFx Export to EnSight format does not include the particle track information, and the native reader of CFx files into EnSight did not handle reading those particles very well (extremely slow for a small number of particles, and unusable for more). Given some recent inquiries, and a bit more investigation on our end, we here at CEI have come up with an alternative approach which seems to work fairly well. We have written a Python routine which translates the Particle Traces exported from CFx (to an ASCII text file) into EnSight Measured data, which can then be loaded with your EnSight model. This allows you to visualize and analyze your CFx particle data in EnSight.

Transient Model Users:
Please note that in order for this routine to work with transient models, the information written to the PT1.trk file for “Traveling Time” must actually be equal to the Absolute Time (Analysis_Time). According to the CFx usage and documentation, the time written to the PT1.trk file is just the Particle Traveling Time, which is not necessarily the unique Analysis_Time (the Particle Traveling Time is relative to when that particle started, not to a single fixed absolute time, aka Analysis_Time). If your transient model does conform to Traveling_Time == Analysis_Time, then this routine should work. If not, this routine will certainly not work.

The current documentation is kept on our User Defined Tools Help page here:

The translation routine takes an existing EnSight case file exported from CFx, along with a CFx-exported Particle Track file in ASCII format (.trk), and translates this into measured data. The routine creates a new .case file with the appropriate additions for the measured data. The user therefore only has to run this translation routine once, and can directly load the new .case file on the second and subsequent loads of the data into EnSight. With the Particle Track information read into EnSight as measured data, the user can visualize, scale, animate, calculate, and analyze the characteristics of the particle information.
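For reference, EnSight measured data amounts to a small per-timestep particle file: a description line, a “particle coordinates” line, a particle count, and one “id x y z” record per particle. A sketch of formatting one such timestep (field widths are from memory; check the EnSight manual for the exact format):

```python
def format_measured(particles, desc="CFx particle positions"):
    """Format one timestep of measured (particle) data.

    particles: list of (id, x, y, z) tuples.
    Returns the file contents as a string.
    """
    lines = [desc, "particle coordinates", "%8d" % len(particles)]
    for pid, x, y, z in particles:
        # fixed-width record: integer ID followed by three coordinates
        lines.append("%8d%12.5e%12.5e%12.5e" % (pid, x, y, z))
    return "\n".join(lines) + "\n"
```

The translator writes one of these files per timestep and then adds a measured-data section (and time set) to the new .case file so EnSight picks them up.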

Please view the Help page link mentioned above before launching the User Defined Tool, as there are a few options to control how the Particle Track information gets translated. Once viewed, please download the python routine below and place it (along with the .png file) into your User Defined Tools area.

Please Click here to download the current Version of this Tool (version 3.0)

Any updates to the routine will be re-posted back here, with appropriate notes/changes. Please feel free to provide feedback and comments back to us so that we can ensure that we provide the most appropriate and useful tool for your visualization needs.

Updated 16-Oct-2012:

Version 2.0 of the routine, with the following changes:

  • Updated routine to handle a different ASCII format (provided by CEI GmbH) with a differing number of values per line in the file
  • Updated routine to handle explicit min and max times provided in the Track files
  • Updated routine to handle gaps in the ParticleIDs
  • Updated routine to stop if the supplied .case file already has Measured Data in it (information provided in the Terminal window)
  • Updated routine to handle a .case file which is already transient (time set += 1)

Updated 23-January-2013:
Version 3.0:
    Updated to handle particles not starting all at the same time (for Transients only)
    Updated to handle Stick & non-existent particles at initial time
    Modified linear interpolation during re-sampling.
    Modified sort algorithm to utilize ‘sorted’ with a lambda.
Please see note at top regarding Transient Model integration.


LAMMPS DEM Particles into EnSight

Recently, a user inquired about the ability to read in Discrete Element Model (DEM) data from the LAMMPS program. EnSight has many features used to visualize and analyze DEM particles from other solvers. With a quick look at their format, along with the ability to utilize Python to write a small translation routine, there is a new translation script available here to work with LAMMPS datasets.

This Python routine takes a particular LAMMPS output file containing DEM particles, denoted with a “.dat” extension, and converts the dataset into EnSight format. The single part in EnSight contains only points (point elements) representing the DEM particles. The routine also converts the variable information provided within the .dat file into nodal variables. This routine has not had a lot of testing, but seems to work fairly well. It makes the following assumptions:
(1) The provided .dat file is ASCII
(2) Variable names used within the .dat file are provided at the top of the file
(3) Columns 1,2,3 contain the X,Y,Z positions of the particles.
(4) Timestep information is provided in the “Zone” field.
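Under those assumptions, the parsing stage reduces to reading the variable names from the header, then splitting the data rows into per-zone (per-timestep) blocks. A hypothetical sketch of that stage (the exact header keywords may differ in your .dat files):

```python
def parse_dat(text):
    """Parse an assumed .dat layout: a VARIABLES header line, ZONE lines
    marking timesteps, then whitespace-separated rows of column data."""
    variables, zones, current = [], [], None
    for line in text.splitlines():
        s = line.strip()
        if not s:
            continue
        up = s.upper()
        if up.startswith("VARIABLES"):
            # e.g.  VARIABLES = "X","Y","Z","vel"
            variables = [v.strip().strip('"') for v in s.split("=", 1)[1].split(",")]
        elif up.startswith("ZONE"):
            # each ZONE line starts a new timestep block
            current = []
            zones.append(current)
        elif current is not None:
            current.append([float(t) for t in s.split()])
    return variables, zones
```

Columns 1–3 of each row then become the X, Y, Z point coordinates, and the remaining columns become nodal variables in the converted EnSight files.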

After using the tool to convert the dataset, the user will have an “ensight_files” directory containing the converted information. The user can subsequently just load the “” file within the “ensight_files” directory directly to read the data in (there is no need to convert the data each time). The Tool writes command language back into the Journal File, so that recovery and utilization of the Journal File for scripting is still valid.

Help for this tool is provided from the “Help” button directly in the tool’s GUI, or via the Tool Help in the User Defined Tool window. Help can be found here:

The current version of the Tool is version 2.0 from September 18, 2012

Any issues or problems should be reported back here, to CEI, or the author (Kevin Colburn:

Download LAMMPS Conversion Tool here

Droplet Histogram Tool

I was asked by two customers at the same time whether we have a histogram function for droplets or measured parts. In EnSight we can create histograms by looping an isovolume over a 3D part and querying the attributes of that isovolume in every loop step. As droplets don’t have a volume, this approach fails.

For droplets we can use EnSight’s capability of exporting geometries. This new routine does that and gets all information from the exported files. The geometry is exported to the operating system’s temp directory. After finishing, the script removes all temporary data.

One thing that is typical for droplets is that they usually appear in transient files. Thus this code enables the user to select the time steps to export; by default, histogram results for all time steps will be exported. Furthermore, the user has to select the desired variable for the histogram, as well as the number of value steps, which defines the refinement of the results.

As we usually handle transient files, this routine does not create an x-y plot; that would be unhandy for several time steps. Instead, the results are written to a text file in two columns. By default this file is placed in the working directory and is called histogram_data_pairs.txt.
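The binning itself is simple: divide the variable range into equal-width value steps and count droplets per bin; each line of histogram_data_pairs.txt is then one (bin center, count) pair. A sketch of that core step:

```python
def histogram_pairs(values, nbins):
    """Bin droplet variable values into nbins equal-width bins.

    Returns (bin_center, count) pairs, one per bin, as written to the
    two-column output file."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / nbins or 1.0   # guard against a zero-width range
    counts = [0] * nbins
    for v in values:
        # clamp the top edge so the maximum value lands in the last bin
        i = min(int((v - lo) / width), nbins - 1)
        counts[i] += 1
    return [(lo + (i + 0.5) * width, counts[i]) for i in range(nbins)]
```

In the actual tool, the values come from the exported droplet geometry files, and the pairs are written out once per selected time step.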

This routine is written as a user defined tool. If you have an existing directory for your own tools just put it together with the image into that directory. Otherwise it can be used as a standalone tool as well if line 270 is activated. Please contact me if you have any problems (


River & Flood Simulation Tools

EnSight is not just used by the typical CFD engineer at large automotive companies. It is used across a wide variety of industries, specialties, and disciplines. In this example, EnSight is used in the Civil Engineering community for analysis and visualization of river and flood simulation results.

We started by taking two such solvers (River-FLO and FLO-2D) and created a translation Python script to convert these formats into EnSight Case Gold files. Key benefits of having this data in EnSight are the ability to take clips and graph values along the clip (changing that clip interactively and seeing the graph update is very powerful), and using the calculator to quantify different key aspects of the affected area: how much area is under more than 2 feet of water, or the temporal max of depth to get an overall envelope of the flood, etc.

We then expanded this capability to other solvers called “MIKE21 Classic” and “MIKE21 FM”. Again, we wrote a python routine to translate these formats into EnSight Case Gold format.

The following is a short summary of the four different translation routines, along with two accompanying user defined tools: one to handle aerial photo projection/alignment automatically, and one for custom time annotation.

RiverFLO-2D Translation Routine

This solver ( generates both a “.FED” file and a series of “.EXP” files. The .FED file contains the geometric mesh. There is one .EXP file for each timestep, with a fixed specification of U & V velocities, water surface elevation, depth, and elevation as per-node variables. The time value is actually written as part of the .EXP filename, so the python routine parses that out and converts it into a time that EnSight can handle. Now, the time unit for the velocity is seconds, so naturally you would want to specify time in seconds for EnSight. However, the time range for simulations is on the order of days, so reporting it as seconds is awkward. The python routine therefore has a toggle to specify how the time is actually represented in the .case file so that graphs and integrations with time make more sense. Converting to anything other than seconds does break the pathline operation, but as pathlines are not used for these models, that is okay.
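The filename-to-time step can be sketched as follows. The filename pattern and unit names here are assumptions for illustration; the actual routine parses whatever pattern RiverFLO-2D writes:

```python
import re

def time_from_exp_name(fname, unit="seconds"):
    """Extract the trailing digit group from a .EXP filename as solver
    seconds, then convert to the requested unit for the .case file.

    Assumes a name like RUN_0086400.EXP where the digits are seconds."""
    m = re.search(r'(\d+(?:\.\d+)?)\.EXP$', fname, re.IGNORECASE)
    seconds = float(m.group(1))
    scale = {"seconds": 1.0, "minutes": 60.0,
             "hours": 3600.0, "days": 86400.0}[unit]
    return seconds / scale
```

The returned value is what gets written into the .case file's time values line, so graphs over time come out in the chosen unit.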

Each of the translation routines takes the incoming files and converts them to EnSight Case Gold files. This ensures that the translation only occurs once; on each subsequent load of the model, the user can directly load the .case file. In addition, operations like changing the timestep now happen very quickly, providing optimal handling of the model results in EnSight. To keep directories as neat and tidy as possible, I create an “ensight_files” directory and place all of the .geo and variable files there.

The datasets are all 2D elements. They typically lie in the XY plane, with the elevation given as the Z coordinate. Timesteps typically number a few hundred, and dataset sizes are typically 200k to 500k. All of the variables are nodal, and exist for the 2D parts only. As detailed further below, models are originally prescribed in “global” coordinates. These have too many significant digits for EnSight to handle properly, and thus this routine converts the model into “local” coordinates. The offset between local and global coordinates is recorded in the constant variables “Xref” and “Yref”.

Users will take advantage of the isosurface and isovolume tools in EnSight to restrict and create new parts. Calculator quantities including the EleSize * Water Depth will result in volume quantities like how much water volume is in each element. Querying variables over time, or creating clips and graphs on the clips over time are of significant help in analyzing the flow rates, distributions and temporal information. Use of the TempMean and TempMinMax functions allow the user to see an envelope of flood depth as a great indication of maximum depth reached over time.

To facilitate the time reporting (see below), there are constants per timestep written in the form of days, hours, minutes, seconds so that we can create the appropriate labels for time.

FLO-2D Translation Routine

This solver ( is fairly similar to RiverFLO-2D in concept. A different set of files is written, with different (but conceptually similar) information. Here, the solver generates several files with the following suffixes: _CADPTS.DAT, _FPLAIN.DAT, and _TIMDEP.OUT. The X & Y coordinates (nodal positions of the grid) are stored in the _CADPTS.DAT file, while the connectivity and elevation (Z coordinate) are stored in the _FPLAIN.DAT file. Using these two files, the python routine creates the .geo file for EnSight. The _TIMDEP.OUT files contain U and V velocity values and water depth. These, along with the Z coordinate as elevation, are written to per-node variable files.

As with the above, the python routine converts this format into EnSight Case Gold format and saves it into a <casename>_ensight_files directory. The routine only has to be run once per dataset; any subsequent load of the dataset is done directly from the EnSight Case Gold files. As with the above format, time unit control is provided in the Python routine so that the user can pick the most useful unit for time. As above, reference values for the offset between global and local coordinates are kept track of via the Xref and Yref constants.

MIKE21_Classic Translation Routine

Details about the solver MIKE21_Classic (sometimes referred to as MIKE21C) can be found here (. This solver writes out an ASCII .asc output file. This is a structured dataset, with nodal data written to a single file for all variables and all times. Header information in the file indicates grid size, position, variable count and names. The python routine walks through this structured information, again generating EnSight Case Gold format files for all of the information. As with the above routines, the information is placed into an “ensight_files” directory. The bulk of this python routine is simpler than the other translation routines, but has slightly more complex logic for strides, skipping, and the structured nature of implicit information. The conversion here can take 15-20 minutes depending upon the number of variables and timesteps.

One nuance with this format is that the order of writing the static data is different from that of the time varying (dynamic) data. Therefore, a toggle has been provided in the GUI for assuming that the static data is South to North & East to West, rather than the order of the dynamic data (North to South; East to West).

As with the above datasets, time unit specification is important, as are the references between global and local coordinates. The same convention is followed here as well to provide a similar setup of information in EnSight.


MIKE21_FM Translation Routine

Another solver in the MIKE21 family is MIKE21_FM (sometimes called MIKE21FM or just MIKE21) (; the “FM” here stands for Flexible Mesh. In this triangular unstructured grid solver configuration, information is written out in a per-element format (different from the previous 3 formats). The files associated with this format are the .mesh and .xyz files. What makes this format tricky is that during the run of the solver, it renumbers and reorganizes the element IDs, and when it writes the elemental result information out, it does so not by element number but by location in 3D space. Therefore, there is no direct, explicit association of each variable value with the elementID in which it exists.

Conversion of the .mesh file into an EnSight .geo file is quite straightforward. However, getting the variable information from the .xyz files assigned back to the elementIDs was not. I went around quite a bit with different methods to find one which was reasonably quick. In the end, some smart python loop logic allowed me to do this search & find fairly quickly (about 40 minutes for a 700k element model).
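The essence of that search & find is to replace a per-sample nearest-element scan with a dictionary lookup keyed on rounded coordinates. A simplified sketch of the idea (the function names and rounding tolerance are illustrative, not the actual routine):

```python
def match_values_to_elements(centroids, samples, ndigits=6):
    """Assign per-location samples back to element IDs by coordinate.

    centroids: {element_id: (x, y)} computed once from the .mesh file.
    samples:   [(x, y, value), ...] rows read from an .xyz file.
    Returns {element_id: value}."""
    # build the lookup once: rounded (x, y) -> element ID
    lookup = {}
    for eid, (x, y) in centroids.items():
        lookup[(round(x, ndigits), round(y, ndigits))] = eid
    # each sample is then an O(1) dictionary hit instead of a scan
    values = {}
    for x, y, v in samples:
        eid = lookup.get((round(x, ndigits), round(y, ndigits)))
        if eid is not None:
            values[eid] = v
    return values
```

This turns an O(n²) matching problem into roughly O(n), which is what makes the 700k-element case tractable; samples that do not land exactly on a centroid would need a fallback search.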

As with the other formats, data gets stored into an “ensight_files” directory, and the translation only needs to happen once. Once done, users get the data loaded into EnSight via the Case format, and can perform standard operations in EnSight from then on. The same “Xref” and “Yref” values for local-to-global translation are stored as constant variables, allowing users either to report true global positions or to align aerial photos accurately. Time values are also extracted from the filename of the .xyz file, with the appropriate conversion to Analysis_Time information as well as a series of constants to be used for time annotation.


Global vs. Local Coordinates

Typical of this market, the model coordinates are provided in global coordinates. The model may cover a 25 square mile domain in California, but the coordinates are given relative to a US reference several thousand miles away. These raw global coordinate values have too many significant digits for EnSight to handle. So, each of the translation routines finds the minimum X and Y location of the model and subtracts it from each coordinate so that EnSight can handle the coordinates correctly (albeit as “local” coordinates). As a result, I create two model constants (Xref and Yref) which denote this difference between global and local coordinates. Users can then add these to any X or Y location to get back the global coordinates. In addition, the absolute coordinates are typically needed for aerial photo alignment (see next tool), so having the Xref and Yref constants is required there as well.
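The local-coordinate conversion itself is a one-pass shift; a minimal sketch:

```python
def to_local(coords):
    """Shift global (x, y) pairs so the model minimum sits at the origin.

    Returns (local_coords, xref, yref); Xref and Yref are written out as
    EnSight constants so global positions can be recovered later."""
    xref = min(x for x, y in coords)
    yref = min(y for x, y in coords)
    local = [(x - xref, y - yref) for x, y in coords]
    return local, xref, yref
```

Any global position is then simply (x_local + Xref, y_local + Yref), which is exactly what the aerial photo alignment tool relies on.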


Aerial Alignment Tool

Typical in this market is to utilize GIS-type information within the display of the simulation results. As these are geographical simulations, geographical imagery is typically used to provide context for the location. Although we can’t utilize most GIS information, we can utilize the texture map capability with aerial photos to give some sense of context. These photos pose a couple of initial “issues”, but both were overcome with only a minor amount of work.

Firstly, the aerial images are typically much higher resolution than EnSight can handle in the texture map. Aerial photos were typically 21k x 30k pixels. At most, EnSight can handle only 8k x 8k images, and depending upon your graphics card, a more typical limit is closer to 4k x 4k. So, users must scale down the image to below that limit.

The second issue is the alignment of the image onto the geometry. Using our typical “use plane tool” to map it “by eye” is not sufficient. However, in this segment, the aerial photo image is typically accompanied by a “world coordinates” file, which describes how the pixels map to physical space (their width, length, orientation and position). This file is quite straightforward, and can be used to provide the appropriate S & T values in EnSight. So, I created a very small Python routine which asks the user for the image file, the world coordinates file, and the Xref & Yref EnSight offsets (see previous section). With this information, we can read the world coordinates file and correctly create the S & T values for the image projection. The result is an immediate, exact projection of the aerial image onto the model.
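Assuming the standard six-line world file (A, D, B, E, C, F: pixel sizes, rotation terms, and the upper-left pixel-center position), the S & T mapping for an unrotated image reduces to a linear transform. A sketch of that mapping; the T flip at the end depends on the image origin convention and is an assumption here:

```python
def world_to_st(x_global, y_global, world, width_px, height_px):
    """Map a global coordinate to texture (S, T) via a world file.

    world: the six world-file values (A, D, B, E, C, F).
    Assumes no rotation (D = B = 0)."""
    A, D, B, E, C, F = world
    col = (x_global - C) / A            # pixel column of this point
    row = (y_global - F) / E            # E is negative: image y grows downward
    s = col / width_px                  # normalize to 0..1 across the image
    t = 1.0 - row / height_px           # flip so T increases with north
    return s, t
```

Applying this at the model's corner points (after adding Xref/Yref back onto the local coordinates) yields the S & T extents for EnSight's texture projection.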


Time Notation Tool

One interesting side note about this market segment is how time is best reported or annotated. In the normal CFD world, we annotate just the “Analysis_Time” field in seconds, and all is fine. Apart from the long timescales, which argue for expressing Analysis_Time in hours or days rather than seconds, the user also does not want just a single number used for annotation. The annotation for time should roll up through seconds until it hits 1 minute, then the seconds reset back to zero while the minutes counter is bumped up. The same process applies for minutes to hours, hours to days, days to months, and months to years. Ideally, the user wants to see a time annotation which looks something like:



So, there is a python tool which takes the constants written out by the above-mentioned translation routines and creates the appropriate annotation string like the above. This works great for the end user, providing a more practical display of the current time than just “Analysis_Time”.
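The rollup described above amounts to successive divmod operations on the total time. A sketch of that logic (the actual tool reads the per-step constants written by the translators, and the exact string layout is illustrative; month/year rollup would additionally need calendar logic):

```python
def time_annotation(total_seconds):
    """Build a rolling days/hours/minutes/seconds annotation string
    from an Analysis_Time value given in seconds."""
    days, rem = divmod(int(total_seconds), 86400)
    hours, rem = divmod(rem, 3600)
    minutes, seconds = divmod(rem, 60)
    return "Time: %d days %02d:%02d:%02d" % (days, hours, minutes, seconds)
```

Each field resets to zero as the next larger unit increments, which is exactly the rolling behavior the users asked for.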


The current version of these River/Flood Analysis User Defined Tools can be downloaded below. This includes all four translation routines, time annotation tool, and aerial photo alignment. Should there be any questions, please do not hesitate to contact CEI.

Updated Toolset on 31-May 2012 for Command Record capability and Help Documentation

Updated Toolset on 14-Aug 2012 for TUFLOW translation routine & basic Triad tool (change triad labels from xyz to E,N,Elevation)

Updated Toolset periodically during Sept/Oct/Nov for robustness, handling of more variations of datasets, and explicitly exporting the elevation variable. Latest update (04-Dec) contains updates to the MIKE21_FM routine to tolerate variable locations not exactly at the element centroid (alternative search).


Click here to Download current Flood Simulation Toolset

You can view a tutorial on using these particular tools within EnSight here:


Absolute Min or Max over time

A user recently asked: What is the minimum temperature experienced at any point, throughout the whole time domain? (the absolute temporal minimum) Good question. This conceptually boils down to finding the minimum at each time, and then traversing the temporal domain looking for a new minimum.

The Min() function works at a particular timestep, and will automatically update when the timestep is changed, but it is not easy to keep track of what the Min(Min()) is.

EnSight already has some very helpful temporal functions that return new variables sampled over time. One of them is “TempMinmaxField”, which returns a new scalar or vector field containing the minimum or maximum experienced over time in each element or node. This works great, but is limited to a geometry which does not change connectivity. This user’s geometry changes, so that feature does not work here.

However, Python provides hooks into the variable information which immediately return items like the minimum or maximum. Note that this function is not tied to any particular part; it is a variable attribute rather than a part-related quantity.

Here is an example of using that Python call to return both the absolute minimum and maximum over time for a particular variable:

# SUBROUTINE to loop through all timesteps to figure out min and max over time
def find_minmax(step_min, step_max, varname):
    nstep = step_max - step_min + 1
    min_val = 9e99
    max_val = -9e99
    # look up the variable ID matching the requested variable name
    a = ensight.query(ensight.VARIABLE_OBJECTS)
    var_list = a[2]
    for i in range(len(var_list)):
        if var_list[i] == varname:
            var_id = i
    # walk the timesteps, tracking the extremes seen so far
    for i in range(nstep):
        # advance to this timestep so VARIABLE_INFORMATION reflects it
        ensight.solution_time.current_step(step_min + i)
        ensight.solution_time.update_to_current()
        b = ensight.query(ensight.VARIABLE_INFORMATION, var_id)
        local_min_val = b[2][0]
        local_max_val = b[2][1]
        if local_min_val < min_val:
            min_val = local_min_val
        if local_max_val > max_val:
            max_val = local_max_val
    return (min_val, max_val)

Overset Grid enhancement : OVERINT grid.i.triq to EnSight format

For users of OVERSET grids and EnSight, we have written a little conversion routine to convert the “grid.i.triq” file from OVERINT into EnSight Format. This file contains the closed, water tight, triangulated surface representation of the geometry along with 13 variables stored at each of the node points. The python conversion routine reads the binary grid.i.triq file, and creates the appropriate EnSight Case Gold format .geo file, 13 variable files, along with the appropriate case file, and automatically loads the model into EnSight. This conversion routine only has to be run once per dataset, as all subsequent loads into EnSight can simply load in the converted EnSight Case Gold files.

More information on the OVERINT grid.i.triq file can be found here:

You can download the following bundle and place it into your UserDefinedTools area (for either EnSight 9 or EnSight 10). You can then restart EnSight, and you should find the icon for converting the grid.i.triq file. Should you have any questions, or require further information, please do not hesitate to contact us.

Download grid.i.triq conversion tool

Part and Variable Information

For those who would like detailed information about their model, including part composition, elemental makeup, and variable information, there is a little python tool to obtain it. Courtesy of Mr. Bill Dunn, this routine sniffs through the model and returns dataset information such as data files used, part composition, element counts, variables, and total node and element counts.

Download recipe part_var_core_details here

Command File translator to EnSight 10

Hi folks,

As the grouping syntax has changed in EnSight 10, I’m sure many users will be faced with the problem that their old command files might fail when they upgrade to the latest EnSight version. We have some users with large scripts that include grouping operations, so I started to think about a solution for this issue. Attached is the latest version, which is still not completely finished but seems to work fine on most command files I have used for testing. Please feel free to run the routine on your own command files and let me know if it crashes or if EnSight 10 fails to play the translated files.

The only way I saw to get all the necessary part and group info was to replay the original command files. So the original data files must be located in the directory that is included in the command file. The script will create a temporary .enc file and a .py file, so the user must have write permission in his working directory. To start the script, please navigate to the directory where your command files are saved and run it with EnSight 9. The script will automatically search all .enc and .cmd files in that directory. If the routine fails on any of the found files for any reason, it will create a short output message instead of translating the original command file. The translated files will be written to a new directory in the working directory, which will be named currentyear_currentmonth_currentday_ensight10_command_files.



Download the latest update (June 2012) here.


Data Translator for measured PIV data

Particle Image Velocimetry (PIV) is one way of experimentally measuring the velocity of a fluid. EnSight is highly suited to analysis of PIV results, but there must be some way to load the data. One PIV data format is DaVis .vec or .ve7, an ASCII file that is basically a comma-separated (.csv) file with one header line. The original data is a collection of points with defined XY positions and velocities in the X and Y directions. Using Python, I translated the .vec file to EnSight Case Gold format. I created a 2D part with a mesh, instead of just a collection of data points, so that more analysis could be performed.
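The reading stage can be sketched as below, under the layout assumed here: one header line, then rows of x, y, vx, vy. The decimal-comma branch (see the 2015 update) assumes a semicolon field separator; your DaVis export may differ:

```python
def parse_vec(text, decimal_comma=False):
    """Parse an assumed DaVis .vec layout into (x, y, vx, vy) tuples.

    decimal_comma=True handles files where ',' is the decimal point
    (assumed ';'-separated fields in that case)."""
    rows = []
    for line in text.splitlines()[1:]:     # skip the single header line
        if not line.strip():
            continue
        if decimal_comma:
            parts = [t.replace(",", ".") for t in line.split(";")]
        else:
            parts = line.split(",")
        rows.append(tuple(float(t) for t in parts[:4]))
    return rows
```

From these rows, the translator builds a regular 2D mesh (the points become element centers in the current version) and writes the velocities as variables in Case Gold format.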

Download the script. Updated 2015-07-08

Online documentation for this script.

The following images are from relative velocity measured for a rotating pump. Since the data is on a mesh, most CFD post processing techniques can be used.

(Above) Streamlines calculated with and colored by velocity.

(Above) Colored by vorticity, which was calculated in EnSight from the measured velocity.

Many other visualization techniques could also be used such as isosurfaces and elevated surfaces. Other uses might be to compare measured data vs. CFD side by side, or by directly calculating the difference. If using CFD data be careful to compare equivalent data. For example, the measured data has no information about the z component of the velocity. So if your simulation is 3D you must create a 2D velocity vector for comparison to experiment.

This translator is still fairly rough and the documentation is limited, so please contact me if you want to use it. Running the script will open a window to select a .vec file.

Update 4/2012: The script has been modified to work with DaVis 7 format. Also fixed one bug. My test data set is still very small, so if you experience a problem with the translator please send the data to me.

Update 8/2012: The script now has an option to create a transient case from multiple PIV files. It also now treats the data points as element centers instead of nodes of the mesh, has a better GUI for making selections, and reports on the success or failure of the translation.

Update 7/2015: Now handles DaVis 8 format. Also handles “,” (commas) when they are used for decimal points.