Releasing Streamlines from Centroids of Elements

A request came in the other day to release streamlines from the centroids of all of the 2D elements of the selected parts. In EnSight, the default seed locations are the nodes of the parts, not the centroids. However, this user wanted to release from the element centroids of the parts selected.

Python to the rescue. We can utilize the Calculator capability in EnSight to compute the X, Y, Z coordinates of the selected parts, and use "NodeToElem" to convert those three scalar variable fields into values at the centroid of each element. With elemental values at the centroids of the parts, we use the FlatFile export with the "CELLID" parameter and ASCII format to write out the elements of the selected part(s) and their elemental variables to a text file. We can then simply reformat this text file into a qualified "Particle Emitter File". Voila. Within the standard streamline creation in EnSight, we can use a "Particle Emitter File" as the seed locations for the streamlines, and therefore generate streamlines from the centroids of all elements in the selected part(s).

See below for a short Python script to do this operation, as well as an image of what such a release looks like.

Python Utility to create Particle Emitter File from Centroids of Parts Selected
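The reformatting step described above can be sketched as follows. This is illustrative, not the utility's actual code: the column layout (centroid X, Y, Z as the last three comma-separated columns after a one-line header) and the emitter file's one-"x y z"-per-line syntax are assumptions to adapt to your actual FlatFile export and EnSight's emitter file specification.

```python
# Sketch: reformat a FlatFile export into a particle emitter file.
# Assumed layout: comma-separated, one header line, centroid scalars
# in the last three columns (adjust to your actual export).
def flatfile_to_emitter(flat_path, emitter_path):
    with open(flat_path) as fin, open(emitter_path, "w") as fout:
        next(fin)  # skip the header line
        for line in fin:
            cols = line.strip().split(",")
            if len(cols) < 3:
                continue
            x, y, z = (float(c) for c in cols[-3:])
            # one seed point per element centroid
            fout.write("%g %g %g\n" % (x, y, z))
```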

Prepare cube tool for a skybox

Here is a very short script that prepares the cube tool to make a skybox. Just change the numbers in the first 2 lines and go. Then create the skybox manually.


Creating Custom Color Palette from Image

Suppose you have an image that was created in another package, and you would like to replicate the color palette used in that image within EnSight. EnSight can easily accept custom color palettes (see the User Manual). The only "tough" part is extracting the RGB values from the original image file and getting them into the form of an EnSight Color Palette (.cpal) file. Well, guess what: Python (and more specifically EnVe) can be used to grab the color legend information and create your own color palette file (.cpal) for EnSight.

So, here is how you go about doing this:

1. Start with an image created in another package, which has a color legend. Crop the image to just the color legend palette. You now have an image file which has (in this example) a vertical specification of the color palette.

2. Use the attached python routine below on this image file. This python routine uses EnVe to query the image file for the RGB values of the pixels, grab those values, and write out an EnSight Color Palette file (.cpal) with the appropriate information in it. (If your image has a horizontal legend, you can swap the x,y values around to perform the same operation.)

3. You can thus create a new EnSight Color Palette File (.cpal) from the original image created in another package. EnSight has a limit of 21 bands, but handles 16 colors per band for smooth palette range.
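The sampling at the heart of step 2 can be sketched in plain Python as below. The `get_rgb` accessor is a hypothetical stand-in for the EnVe image pixel query; the surrounding logic (pick N evenly spaced rows down a vertical legend) is the part the routine actually performs before writing the .cpal file.

```python
# Sketch: sample n_levels evenly spaced colors down a vertical legend.
# get_rgb(x, y) is a placeholder for the EnVe pixel query; x is a
# column inside the legend, height is the legend image height.
def sample_vertical_legend(get_rgb, height, x, n_levels):
    colors = []
    for i in range(n_levels):
        # map level i onto a pixel row; row 0 is the top of the legend
        y = int(round(i * (height - 1) / float(n_levels - 1)))
        colors.append(get_rgb(x, y))
    return colors
```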


As always, please contact us here at CEI should you have any questions or comments, run into trouble, or would like routines like this customized to your needs. We are here to help.

Click here for Color Palette Create Tool


Transient STL File Conversion

Do you have a series of STL files which represent moving geometry? If this geometric motion is simple (constant rotation, or translation), you can utilize the “Rigid Body Motion” capability already in EnSight. However, what if the motion is complex, or if the STL surfaces change from timestep to timestep? The native STL file reader in EnSight expects steady state STL file information (either a single STL file, or multiple STL files via .xct; but still only for a single timestep).

A short Python routine can be used here to actually help out. This python routine takes a series of STL files, and assumes that they are part of a transient sequence, with one STL file per timestep. The Python routine converts the STL information into EnSight Case Gold format files, with multiple .geo files, allowing you to view your STL information in a transient nature. This routine can be further modified and customized to suit your needs, file conventions, or time information (since no time information is explicitly available within the .stl file).
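The first half of such a conversion, reading the triangles out of one ASCII STL file, might look like the sketch below. The function name is illustrative; it assumes well-formed `vertex x y z` lines, and node merging plus the Case Gold .geo/.case writing would follow.

```python
# Sketch: minimal ASCII STL reader collecting triangle vertices,
# roughly the first half of an STL -> Case Gold conversion.
def read_ascii_stl(path):
    triangles, current = [], []
    with open(path) as f:
        for line in f:
            tok = line.split()
            if tok and tok[0] == "vertex":
                current.append(tuple(float(v) for v in tok[1:4]))
                if len(current) == 3:  # three vertices complete a facet
                    triangles.append(tuple(current))
                    current = []
    return triangles
```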

Please feel free to contact CEI, or the author of the routine, for further customization or questions.

To download this example Python utility, please click on the link below, and place into your User Defined Tools area.

Click here to download Multiple Transient STL Conversion Tool

Current Version : 1.0 (11-January-2013)

Current Limitations/Assumptions:

a. A series of ASCII stl files all with ‘.stl’ extension
b. Will convert and assume ALL “*.stl” files in the directory are to be converted.
c. All STL files have the same number of parts (but can change triangles from timestep to timestep)
d. Since there is no time information with STL, the routine will force Time 0 = 0.0 seconds, Time 1 = 1.0 seconds, etc.
If the user needs other time information, just change the time values in the .case file
e. STL files are all triangles (no quads)

Help Documentation can be found here :

CFx Particle Track Translator

Updated 23-Jan-2013

Users of CFx who utilize discrete particles in their domain were a bit stuck for options for visualizing and analyzing that data in EnSight. The current CFx export to EnSight format does not include the particle track information, and the native reader of CFx files into EnSight did not handle the read of those particles very well (extremely slow for a small number of particles, and unusable for more). Given some recent inquiries, and a bit more investigation on our end, we here at CEI have come up with an alternative approach which seems to work fairly well. We have written a Python routine which translates the particle traces exported from CFx (to an ASCII text file) into EnSight Measured data, which can then be loaded up with your EnSight model. This allows you to visualize and analyze your CFx particle data in EnSight.

Transient Model Users:
Please note that in order for this routine to work with transient models, the "Traveling Time" written to the PT1.trk file must actually be uniquely equal to the absolute time (Analysis_Time). According to the CFx usage and documentation, the time written to the PT1.trk file is just the particle traveling time, which is relative to when that particle started, and is therefore not necessarily the unique, fixed absolute time (Analysis_Time). If your transient model does conform to Traveling_Time == Analysis_Time, then this routine should work. If not, this routine will certainly not work.

The current documentation is kept on our User Defined Tools Help page here:

The translation routine takes an existing EnSight case file exported from CFx, along with a CFx-exported particle track file in ASCII format (.trk), and translates this into Measured data. The routine creates a new .case file with the appropriate additions for the Measured data. The user therefore only has to run this translation once, and can directly load the new .case file on the second and subsequent loads of the data into EnSight. With the particle track information read into EnSight as Measured data, the user can visualize, scale, animate, calculate, and analyze the characteristics of the particle information.
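For transient models, the translator has to place every particle at each measured-data timestep, which involves re-sampling each track by linear interpolation (see the version 3.0 notes below). A sketch of that re-sampling step; the function name and argument layout are illustrative, not the tool's actual code:

```python
import bisect

# Sketch: resample one particle track onto a fixed set of output times
# by linear interpolation. "times"/"values" are the track's own
# samples, assumed sorted ascending; out-of-range times are clamped.
def resample_track(times, values, out_times):
    out = []
    for t in out_times:
        if t <= times[0]:
            out.append(values[0])
        elif t >= times[-1]:
            out.append(values[-1])
        else:
            i = bisect.bisect_right(times, t)
            f = (t - times[i - 1]) / float(times[i] - times[i - 1])
            out.append(values[i - 1] + f * (values[i] - values[i - 1]))
    return out
```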

Please view the Help page link mentioned above before launching the User Defined Tool, as there are a few options to control how the Particle Track information gets translated. Once viewed, please download the python routine below and place it (along with the .png file) into your User Defined Tools area.

Please Click here to download the current Version of this Tool (version 3.0)

Any updates to the routine will be re-posted back here, with appropriate notes/changes. Please feel free to provide feedback and comments back to us so that we can ensure that we provide the most appropriate and useful tool for your visualization needs.

Updated 16-Oct-2012:

Version 2.0 of the routine, with the following changes:

Updated routine to handle different ASCII format (provided by CEI GmbH) (differing number of values per line in file)
Updated routine to handle explicit min and max times provided in the Track files
Updated routine to handle gaps in the ParticleIDs
Updated routine to stop routine if supplied .case file already has Measured Data in it (Information provided in Terminal window)
Updated routine to handle .case file which is already transient (time set += 1)

Updated 23-January-2013:
Version 3.0:
    Updated to handle particles not starting all at the same time (for Transients only)
    Updated to handle Stick & non-existent particles at initial time
    Modified linear interpolation during re-sampling.
    Modified sort algorithm to utilize ‘sorted’ with a lambda.
Please see note at top regarding Transient Model integration.


Custom Image Save Shortcut

A customer recently asked if a more customized image save could be created. He always wants to use the "Default Print Colors", and always wants to save out a certain image type and size. His main concern was always having to toggle on the "Default Print Colors" (as they are turned off after the image is saved), and the 6 mouse clicks/operations to save an image: File (1), Export (2), Image (3), specify filename (4), Default Print Colors (5), and OK (6).

One option is to use the right-click menu (right click in the viewport -> Default Print Colors), then right click again to save the image to file, then specify the file name, and OK. This is perfectly okay when saving a few images, but when you have to save 20 or 30 images, you start looking for a shortcut.

With EnSight's scripting, and a few lines of Python, this can be made into a nice short macro which performs the above steps in a single keystroke.

Here is the python macro (you can directly assign this to any of your favorite keyboard short cuts):

# Example Python to create a new output image
import datetime
Basename = "EnSight_Image_"

# Use datetime to generate a unique filename
now = datetime.datetime.now()
unique_part = now.strftime("%Y-%m-%d_%H-%M-%S")
new_filename = Basename + unique_part

# Save the image (the EnSight journal commands converted to Python,
# including the "image_convert" lines, follow here)

Here, the three lines of Python obtain the current time and append it to the name "EnSight_Image_" in order to create a unique filename for the saved image. The remaining lines in the file are direct EnSight journal commands converted to Python. Note the use of the "image_convert" lines, which perform the Default Print Colors operation.

So, with only three lines of Python, you can take the journal commands from a file save and turn them into a quick shortcut to save an image out.

River & Flood Simulation Tools

EnSight is not just used by the typical CFD engineer at large automotive companies. It is used across a wide variety of industries, specialties, and disciplines. In this example, EnSight is used in the Civil Engineering community for analysis and visualization of river and flood simulation results.

We started by taking two such solvers (River-FLO and FLO-2D) and created a translation Python script to convert these formats into EnSight Case Gold files. Key benefits of having this data in EnSight are the ability to take clips and graph values along the clip (changing that clip interactively and seeing the graph update is very powerful), and using the calculator to quantify key aspects of the flood: how much area is under more than 2 feet of water, or the temporal maximum of depth to get an overall envelope of the flood, etc.

We then expanded this capability to other solvers called “MIKE21 Classic” and “MIKE21 FM”. Again, we wrote a python routine to translate these formats into EnSight Case Gold format.

The following is a short summary of the four different translation routines, along with two accompanying user defined tools to handle aerial photo projection/alignment automatically, as well as custom time annotation features.

RiverFLO-2D Translation Routine

This solver generates both a ".FED" file and a series of ".EXP" files. The .FED file contains the geometric mesh. There is one .EXP file for each timestep, with a fixed specification of U & V velocities, water surface elevation, depth, and elevation as per-node variables. The time value is actually written as part of the .EXP filename, so the Python routine parses that out and converts it into a time that EnSight can handle. Now, the time unit for the velocity is seconds, so naturally you would want to specify time in seconds for EnSight. However, the time range for these simulations is on the order of days, so reporting it in seconds is awkward. The Python routine has a toggle to specify how the time is represented in the .case file so that graphs and integrations with time make more sense. Converting to anything other than seconds does mess up the pathline operation, but as that is not used for these models, it is okay.
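The filename-parsing and unit-conversion idea can be sketched as below. The trailing-number filename pattern is an assumption for illustration; adjust the regex to the actual .EXP naming convention your solver produces.

```python
import os
import re

# Assumed unit table for converting a raw time in seconds to the
# display unit selected by the toggle described above.
_UNIT_SCALE = {"seconds": 1.0, "minutes": 60.0, "hours": 3600.0, "days": 86400.0}

# Sketch: pull the time value out of an .EXP filename and convert it
# to a chosen display unit. The "trailing number" pattern is an
# assumption -- adapt to the real naming convention.
def exp_time(filename, unit="hours"):
    stem = os.path.splitext(os.path.basename(filename))[0]
    m = re.search(r"([0-9]+(?:\.[0-9]+)?)$", stem)
    if m is None:
        raise ValueError("no time value found in " + filename)
    return float(m.group(1)) / _UNIT_SCALE[unit]
```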

Each of the translation routines takes the incoming files and converts them to EnSight Case Gold files. This ensures that the translation only occurs once; on each subsequent load of the model, the user can directly load the .case file. In addition, operations like changing the timestep now happen very quickly, providing the optimal operation of the model results in EnSight. To keep directories as neat and tidy as possible, I create an "ensight_files" directory, and place all of the .geo and variable files there.

The datasets are all 2D elements. They typically lie in the XY plane, with the elevation given as the Z coordinate. Timesteps typically number a few hundred, and dataset sizes are typically 200k to 500k. All of the variables are nodal, and exist for the 2D parts only. As detailed further below, models are originally prescribed in "global" coordinates. These have too many significant digits for EnSight to handle properly, and thus this routine converts the model into "local" coordinates. The offset between local and global coordinates is referenced in the constant variables "Xref" and "Yref".

Users will take advantage of the isosurface and isovolume tools in EnSight to restrict and create new parts. Calculator quantities including the EleSize * Water Depth will result in volume quantities like how much water volume is in each element. Querying variables over time, or creating clips and graphs on the clips over time are of significant help in analyzing the flow rates, distributions and temporal information. Use of the TempMean and TempMinMax functions allow the user to see an envelope of flood depth as a great indication of maximum depth reached over time.

To facilitate the time reporting (see below), there are constants per timestep written in the form of days, hours, minutes, seconds so that we can create the appropriate labels for time.

FLO-2D Translation Routine

This solver is fairly similar to RiverFLO-2D in concept. There is a different set of files written with different (but conceptually similar) information. Here, the solver generates several files with the following suffixes: _CADPTS.DAT, _FPLAIN.DAT, and _TIMDEP.OUT. The X & Y coordinates (nodal positions of the grid) are stored in the _CADPTS.DAT file, while the connectivity and elevation (Z coordinate) are stored in the _FPLAIN.DAT file. Using these two files, the python routine creates the .geo file for EnSight. The _TIMDEP.OUT files contain the U and V velocity values and water depth. These, along with the Z coordinate as elevation, are written to per-node variable files.

As with the above, the python routine converts this format into EnSight Case Gold format, and saves this into a <casename>_ensight_files directory. The routine only has to be run once per dataset, with any subsequent load of the dataset done directly as EnSight Case Gold files. As with the above format, time unit control is provided in the Python so that the user can get the most useful unit for time. As above, the reference values for the offset between global and local coordinates are kept track of via the Xref and Yref constants.

MIKE21_Classic Translation Routine

Details about the solver MIKE21_Classic (sometimes referred to as MIKE21C) can be found on the solver's website. This solver writes out an ASCII .asc output file. This is a structured dataset, with nodal data written to a single file for all variables and all times. Header information in the file indicates grid size, position, variable count, and names. The python routine parses through this structured information, again generating EnSight Case Gold format files for all of the information. As with the above routines, the information is placed into an "ensight_files" directory. The bulk of this python is simpler than the other translation routines, but has slightly more complex logic for strides, skipping, and the implicit information of the structured format. The conversion routine here can take 15-20 minutes depending upon the number of variables and timesteps.

One nuance with this format is that the order of writing the static data is different from that of the time-varying (dynamic) data. Therefore, a toggle has been provided in the GUI for assuming that the static data is South to North & East to West, rather than the order of the dynamic data (North to South; East to West).

As with the above datasets, time unit specification is important, as are the references between global and local coordinates. The same convention is followed here as well to provide a similar setup of information in EnSight.


MIKE21_FM Translation Routine

Another solver in the MIKE21 family is MIKE21_FM (sometimes called MIKE21FM or just MIKE21); the "FM" here stands for Flexible Mesh. In this triangular unstructured grid solver configuration, information is written out in a per-element format (different from the previous three formats). The files associated with this format are the .mesh and *.xyz files. What makes this format tricky is that during the run of the solver, it renumbers and reorganizes the element IDs, and when it writes the elemental result information out, it does so not by element number, but by location in 3D space. Therefore, there is no direct, explicit association of each variable value with the element ID in which it exists.

Conversion of the .mesh file into an EnSight .geo file is quite straightforward. However, getting the variable information from the .xyz files assigned back to the element IDs was not. I went around quite a bit with different methods before achieving one which was reasonably quick. In the end, some smart Python loop logic allowed me to do this search & find fairly quickly (about 40 minutes for a 700k element model).
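One way to make such a search & find fast, sketched here as an illustration rather than the routine's actual code, is to hash the element centroids by a rounded coordinate key, so that each result location becomes a dictionary lookup instead of a scan over every element. The rounding precision is an assumption and must comfortably exceed the solver's coordinate output precision.

```python
# Sketch: index 2D element centroids by a rounded (x, y) key.
def build_centroid_index(centroids, ndigits=6):
    index = {}
    for eid, (x, y) in enumerate(centroids):
        index[(round(x, ndigits), round(y, ndigits))] = eid
    return index

# O(1) lookup of the element whose centroid matches a result location.
def find_element(index, x, y, ndigits=6):
    return index.get((round(x, ndigits), round(y, ndigits)))
```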

As with the other formats, data gets stored into an "ensight_files" directory, and the translation only needs to happen once. Once done, users get the data loaded into EnSight via Case format, and can perform standard operations in EnSight from then on. The same "Xref" and "Yref" values for local-to-global translations are stored as constant variables, allowing users either to report true global positions or to use aerial photo alignment accurately. Time values are also extracted from the filename of the .xyz file, with the appropriate conversion to Analysis_Time information as well as a series of constants to be used for time annotation.


Global vs. Local Coordinates

Typical of this market, the model coordinates are provided in global coordinates. The model may be of a 25 square mile domain in California, but the coordinates are given relative to a US reference several thousand miles away. These raw global coordinate values have too many significant digits for EnSight to handle. So, each of the translation routines finds the minimum X and Y location of the model and subtracts it from each coordinate so that EnSight can handle the coordinates correctly (albeit as "local" coordinates). As a result, I create two model constants (Xref and Yref) which denote this difference between global and local coordinates. Users can then add these to any X or Y location to get back the global coordinates. In addition, the absolute coordinates are typically needed for aerial photo alignment (see the next tool), so having these Xref and Yref constants is required there as well.
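The shift itself is simple; here is a sketch of the local-coordinate conversion and the resulting Xref/Yref constants (function name illustrative):

```python
# Sketch: shift global coordinates to local ones so EnSight's
# precision can represent them, keeping the offsets as Xref/Yref.
def to_local(points):
    xref = min(p[0] for p in points)
    yref = min(p[1] for p in points)
    local = [(x - xref, y - yref) for x, y in points]
    return local, xref, yref
```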


Aerial Alignment Tool

Typical in this market is to utilize GIS-type information within the display of the simulation results. As these are geographical simulations, geographical imagery is typically used to provide context to the location. Although we can't utilize most GIS information, we can utilize the texture map capability with aerial photos to give some sense of context.

These photos pose a couple of initial "issues", but were overcome with only a minor amount of work. First, the aerial images are typically much higher resolution than EnSight can handle in the texture map. Aerial photos were typically 21k x 30k pixels. At most, EnSight can handle only 8k x 8k images, and depending upon your graphics card, a more typical limit is closer to 4k x 4k. So, users must scale down the image to fit within those limits.

The second issue is alignment of the image onto the geometry. Using our typical "use plane tool" to map it "by eye" is not sufficient. However, in this segment, the aerial photo image is typically accompanied by a "world coordinates" file. This world coordinates file describes how the pixels map to physical space (their width, length, orientation and position). This file is quite straightforward, and can be used to provide the appropriate S & T values in EnSight. So, I created a very small Python routine which asks the user for the image file, the world coordinates file, and the Xref & Yref EnSight offsets (see previous). With this information, we can read the world coordinates file and correctly create the S & T values for the image projection. The result is an immediate, exact projection of the aerial image onto the model.
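A sketch of the world-file mapping, assuming the common six-line world file layout (pixel sizes A and E, rotation terms D and B taken as zero, and the upper-left pixel position C and F) and a simple normalized S,T convention; the actual routine's names and conventions may differ:

```python
# Sketch: map a local model coordinate into normalized S,T texture
# coordinates using a 6-line world file. Xref/Yref restore the global
# position before mapping. Assumes an axis-aligned image (D = B = 0)
# and t = 1 at the top edge of the image.
def world_to_st(world_lines, width, height, x_local, y_local, xref, yref):
    a, d, b, e, c, f = (float(v) for v in world_lines)
    x = x_local + xref
    y = y_local + yref
    s = (x - c) / (a * width)          # 0..1 across the image
    t = 1.0 - (y - f) / (e * height)   # e is negative: f is the top edge
    return s, t
```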


Time Notation Tool

One interesting side note about this market segment is how time is best reported or annotated. In the normal CFD world, we annotate just the "Analysis_Time" field in seconds, and all is fine. Apart from the long timescales, which argue for expressing Analysis_Time in non-consistent units such as hours or days, the user also does not want just this single string used for annotation. The annotation for time should roll up through seconds until it hits 1 minute, then the seconds reset back to zero while the minutes counter is bumped up. The same process applies for minutes to hours, hours to days, days to months, and months to years. Ideally, the user wants to see a time annotation which looks something like:



So, there is a Python tool which takes the constants written out from the above-mentioned translation routines and creates the appropriate annotation string like the one above. This worked great for the end user, providing a more practical display of the current time than just "Analysis_Time".
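The roll-up logic can be sketched as below (through days only; months and years omitted for simplicity, and the label format is illustrative rather than the tool's exact string):

```python
# Sketch: roll a raw Analysis_Time in seconds up into a
# "D days H hr M min S sec" style annotation string.
def time_label(total_seconds):
    t = int(total_seconds)
    days, t = divmod(t, 86400)
    hours, t = divmod(t, 3600)
    minutes, seconds = divmod(t, 60)
    return "%d days %d hr %d min %d sec" % (days, hours, minutes, seconds)
```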


The current version of these River/Flood Analysis User Defined Tools can be downloaded below. This includes all four translation routines, time annotation tool, and aerial photo alignment. Should there be any questions, please do not hesitate to contact CEI.

Updated Toolset on 31-May 2012 for Command Record capability and Help Documentation

Updated Toolset on 14-Aug 2012 for TUFLOW translation routine & basic Triad tool (change triad labels from xyz to E,N,Elevation)

Updated Toolset periodically during Sept/Oct/Nov for robustness, handling of more variations of datasets, and explicitly exporting out the elevation variable. The latest update (04-Dec) contains updates to the MIKE21_FM routine to tolerate variable locations not exactly at the element centroid (alternative search).


Click here to Download current Flood Simulation Toolset

You can view a tutorial on using these particular tools within EnSight here:


Absolute Min or Max over time

A user recently asked: What is the minimum temperature experienced at any point, throughout the whole time domain? (the absolute temporal minimum) Good question. This conceptually boils down to finding the minimum at each time, and then traversing the temporal domain looking for a new minimum.

The Min() function works at a particular timestep, and will automatically update when the timestep is changed, but it is not easy to keep track of what the Min(Min()) is.

EnSight already has some very helpful temporal functions to return new variables that are sampled over time. One of them is "TempMinmaxField", which returns a new scalar or vector field containing the minimum or maximum experienced over time in each element or node. This works great, but is limited to a geometry which does not change connectivity. This user's geometry is changing, so this feature does not work.

However, there are some Python hooks into the variable information which will immediately return items like the minimum or maximum. Note that this function is not tied to any particular part; it is a variable attribute rather than a part-related quantity.

Here is an example of using that Python call to return both the absolute minimum and maximum over time for a particular variable:

# Subroutine to loop through all timesteps to find the min and max over time
def find_minmax(step_min, step_max, varname):
    nstep = step_max - step_min + 1
    min_val = 9e99
    max_val = -9e99
    # Find the variable ID matching the requested variable name
    a = ensight.query(ensight.VARIABLE_OBJECTS)
    var_list = a[2]
    var_id = -1
    for i in range(len(var_list)):
        if var_list[i] == varname:
            var_id = i
    # Walk through the timesteps, updating the running min and max
    for i in range(nstep):
        ensight.solution_time.current_step(step_min + i)
        ensight.solution_time.update_to_current()
        b = ensight.query(ensight.VARIABLE_INFORMATION, var_id)
        local_min_val = b[2][0]
        local_max_val = b[2][1]
        if local_min_val < min_val:
            min_val = local_min_val
        if local_max_val > max_val:
            max_val = local_max_val
    return min_val, max_val

Right click data access

An annoying problem I saw quite often when doing support or training classes was accessing the EnSight installation datasets. Usually the way to open one of the models is:

File -> Open -> Navigate to $CEI_HOME$/ensightxx/data -> select parts to load

Really exhausting if you have to do this 20 times in a single training class day. Using the 'Open recent data file' menu entry was not really a solution for me, as EnSight will always ask if you want to replace your current data. Too many clicks for me. I have written a routine which creates a background right-click menu enhancement. Please find the linked routine. If you run that routine, it will search for the hard drive location


The routine will create every directory level if the path could not be found. However, it won't overwrite existing directories, so you can run it even if you have already defined some functions in that directory. As soon as the routine has created the new file, it will exit EnSight. Restart it to use the new right-click enhancement.

The enhancement includes quick tool access and a hide function, which can be useful for the quadric tools. Furthermore, we can delete all currently loaded cases, and there is an option 'Open Data'. Here the user can access all models which are automatically installed with EnSight. If the user selects one model or clicks on 'Delete data', EnSight will immediately replace all current cases without asking. So this is a very quick way to open the file you need for demonstrating any EnSight function.



Spatial Probability Density Function (PDF) in EnSight

In analyzing the variable distribution within a solution, it is quite helpful to look at some type of probability density function for the variable. There are a few interpretations of this function depending upon whether you are looking at the probability across time, space, or different datasets. In essence, the spatial probability density function indicates to the engineer how much of the domain (% of total volume in this case) contains a particular variable range: "How much of the domain has Species 1 near 0.01?", or "How much of the domain has Temperature near 900 degrees K?"

This particular tool I wrote analyzes the distribution of a function spatially, sometimes referred to as a volumetric probability density function, or a volumetric histogram. In a very basic interpretation, it is similar to the histogram display in the Color Palette Editor, normalized by element volume.

This routine takes the parent part(s) that you have selected, along with the variable that you'd like to interrogate, and a user-specified number of bins (or bars) for the histogram. The routine automatically finds the minimum and maximum of the variable range for the part(s). Based on this range, it then divides the volume into N IsoVolumes (the number of bins) spanning the variable range. The routine then determines the fraction of the total volume contained within each of these variable-constrained ranges. The result is placed into a query register and automatically plotted on the screen.
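The binning idea can be illustrated directly in plain Python as a volume-weighted histogram, computed here from per-element values and volumes rather than via EnSight IsoVolumes (so this is a conceptual sketch, not the tool's implementation):

```python
# Sketch: volume-weighted histogram of a per-element variable.
# Returns the fraction of total volume in each bin, which sums to 1.0.
def volume_pdf(values, volumes, nbins):
    vmin, vmax = min(values), max(values)
    width = (vmax - vmin) / float(nbins) or 1.0  # guard constant fields
    bins = [0.0] * nbins
    for v, vol in zip(values, volumes):
        i = min(int((v - vmin) / width), nbins - 1)  # clamp vmax into last bin
        bins[i] += vol
    total = sum(volumes)
    return [b / total for b in bins]
```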

The Tool presents the user with the simple Window to select the variable, and number of bins (or bars) for the distribution function.


After executing, you will then get a graph of distribution of the variable within the parent part(s) selected.

The values on the graph should always sum to 1.0, as it is presented as a fraction of the domain which contains that variable.

Note: As users increase the number of bars (or bins) for the graph, the shape of the curve will increase in resolution, although the values on the Y-axis will adjust accordingly (with more bins, the values will still sum to 1.0).

This Tool can be downloaded from the link below. Please unzip the file and place both the Python Script and Icon PNG file into your UserDefinedTools area and restart EnSight. You should see a “PDF Graph” icon available in your UDT area, and you can double click to execute. Note, in EnSight 10, you can now place these UDT

Update 16-April-2012:

Based on User Requests, I’ve updated the routine to create a Stair-Stepped Graph rather than the smooth graph to represent the “binning” concept further in the graph.

Updated 22-April-2012:

Based on user requests, I've updated the routine to handle transient models. It does this by exporting each timestep as a 1D bar part that is then automatically read back into EnSight at the end. This results in the PDF function correctly sitting in the time domain, and allows the user to normally control the timestep and all other features of EnSight while the PDF graph is correctly dove-tailed into the hierarchy of EnSight. This also allows the user to further interrogate and operate with the data as though it were normal elemental data (filtering, elevated surfaces, coloring, etc.).

I also updated the routine to correctly work with 2D parent parts (using EleSize() and StatMoment() together rather than Vol()).

In addition, I also updated the routine to select the original Parent Parts for ease of repeat usage (rather than leaving the IsoVolume part selected).

Here is an example movie made from the transient PDF function:



Update 05-Nov-2013: Updated routine for bug regarding the variable used for the IsoVolume. Previously, the variable selected in the GUI might not have been the variable used in the IsoVolume. Fixed in version 2.3

Update 03-Nov-2015 : Updated the explicit setting for the IsoVolume to “Band”, rather than default (not explicitly set). This would cause problems if previous IsoVolumes were not of type == Band.

Full Change log within header of Python File


Click here to Download PDF Tool v2.3 (pdf_graph_v2.3) Dated 05-Nov-2013

Click here to Download PDF Tool v 2.6 (pdf_graph_v2.6) Dated 03-Nov-2015