Until now, visual software development using “What You See Is What You Get” (WYSIWYG) techniques in Delphi has been done mainly with the Visual Component Library (VCL). The ease of form creation and control placement has largely determined the success of Delphi as an environment for the rapid creation of even the most complex GUI applications.
The VCL is strongly tied to the Windows platform, which virtually excludes it from delivering the cross-platform applications that are so important in both current and future versions of Delphi. Now there is a new solution: FireMonkey! FMX is a library with its own set of controls. In a nutshell, forms created in Delphi with FMX are rendered inside the library itself. As a result, the visual components have become platform-independent. The possibility of creating forms that operate and look the same on Windows, Mac, and, in the future, even Linux sounds promising, but that’s not all. FireMonkey has much more to offer. Look at multimedia software, the increasingly common three-dimensional applications, or modern graphical interfaces. Delphi is not left behind. It will now have a reputation as a tool for far more than just database applications.
The new “FireMonkey” library supports the creation of applications as easily and intuitively as we have known from Delphi for years, something many other development environments lack. FMX is also ideally suited to rapid development of software with three-dimensional elements. Inserting and moving a geometric object and generating events for it, such as a mouse click or selection, has just become easy. Apart from basic control of three-dimensional space, FMX also provides methods to work with the GPU via shaders.
This paper presents some of the FireMonkey capabilities used when creating the demonstration, and it can also be regarded as a good introduction to FireMonkey development. In this article we demonstrate the process of writing an application using FireMonkey. You can learn how to combine a GUI with the rendering of three-dimensional objects and further enhance it with visual effects based on GPU shaders. What is not covered here, however, is shader development. Writing shaders is complex and requires a good knowledge of algebra and geometry, and we encourage you to learn about it using the resources listed in the “References” section at the end of this article.
The main purpose of this demonstration program is to test the possibilities and convenience of using FireMonkey for creating visual applications. The result of our work will be two graphics programs. The first is a browser for models stored in the COLLADA format, which is supported by most of the graphics programs in use today. The second is a small composition of models with a slightly modified first-person camera. The models and textures were also prepared specifically for this demonstration.
Creating an application project
Let's start by creating a new FireMonkey application.
From the main Delphi menu select “File → New → Other...”
Then from the “New Items” dialog, open "Delphi Projects", select "3D FireMonkey Application" and click OK.
3D application - Creating a 2D interface in 3D
A 3D application displays a three-dimensional scene by default, and that is what interests us most in this document. Let's open the form module. You should see the following view:
Add “TCube” and “TLight” components to the form. This gives you a simple 3D scene in which we can easily assign events to objects. Notice the contents of the “Object Inspector” with events for “Cube1” component.
Just press F9, and we have our first 3D program. You can now play with properties of visual objects to achieve a better view. It is a good idea to change the color of the form.
In a 3D application it is impossible to use components such as buttons or lists directly. However, there is a very simple way to deal with this: we need some kind of bridge between the 3D and 2D scenes. FireMonkey provides the “TLayer3D” component, which has been designed just for that. Let's add it to the form.
We can now use the “Layer3D1” component as the surface for using components known from the VCL. Make sure your “Layer3D1” is selected and try to add a button.
You will notice that the button works virtually the same way as its VCL counterpart, with similar properties and events. In this case, however, the button is located on a layer suspended in space, resembling a piece of paper.
Let’s return to the “Layer3D1” component. To achieve a normal 2D interface, we need to change this component's projection type. Use the "Projection" property in the Object Inspector and set it to "dxProjectionScreen".
The effect of this should look similar to the following:
As you can see, the “Layer3D1” component now covers the whole scene. To keep both the scene and the layer with 2D controls visible, simply set the "Align" property of the “Layer3D1” component, for example to "vaRight", and adjust its width.
We have now created our first FireMonkey program that combines 2D with 3D interface.
The most important part of this article is to present two demos: “Composition” and “Model Viewer”. Let's see how the complete FMX interface of our “Model Viewer” application will look:
In the middle of the screen there is a blue “TDummy” component, which is used to display custom 3D objects.
Using its “OnRender” event we can implement our own system for displaying models and achieve advanced effects based on shaders.
Loading COLLADA 1.5
A custom “model viewer” application requires its own capability to draw 3D models. To this end, we chose the COLLADA format (*.dae). This format is supported by the most well-known graphics software, such as 3D Studio Max, Blender, and Maya.
COLLADA format defines an XML-based schema to make it easy to move 3D assets between applications, enabling diverse 3D authoring and content processing tools to be combined in production. The intermediate language provides comprehensive encoding of visual scenes including: geometry, shaders and effects, physics, animation, kinematics, and even multiple version representations of the same asset.
The full specification of the current version and the COLLADA XML schema can be found on the Khronos Group web page (http://www.khronos.org/collada).
The easiest way to load COLLADA files into memory in Delphi is to use the “XML Data Binding” wizard.
Select: “File → New → Other... → Delphi Projects → XML → XML Data Binding”
After these easy steps in the wizard we are able to generate a unit with Delphi code to load a COLLADA file.
Data loaded from a COLLADA file with the classes automatically generated from the XML schema require additional processing before they can be transferred to the graphics card. This task is handled by the “TModel” class, located in the shared unit “MV.ColladaImport.pas”.
Here is an example of loading a model from a file using the class “TModel”:
FModel := TModel.CreateFromFile('ExampleModel.dae');
The COLLADA format is divided into libraries responsible for storing data for different parts of the scene. These parts may be animated and may contain lights, cameras, geometry, effects, materials, physics, etc.
The “TModel.CreateFromFile” method deals with some of them.
The real world, as opposed to the virtual one, has the advantage of being infinite in detail and size. This is not possible in a graphical representation of objects, because we use approximations in the form of triangles with a material applied to them. Each set of triangles forms a geometry called a mesh, and a material is assigned to it. The corresponding structure in code is “TMeshBuffer” in “MV.ColladaImport”:
TMeshBuffer = record
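As a rough sketch, a mesh buffer record of this kind might group the vertex data, the index data, and a material reference. The field names below are our illustrative assumptions, not the actual “MV.ColladaImport” declaration:

```pascal
type
  TMeshBuffer = record
    Vertices: TVertexBuffer;  // vertex attributes: position, normal, texcoord
    Indices: TIndexBuffer;    // triangle indices into the vertex buffer
    MaterialName: string;     // name used to look up the assigned material
  end;
```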
Vertices are always described by attributes. In FireMonkey, using the “TVertexBuffer” class, we have access to some basic attributes:
“TVertexBuffer” contains the vertex array, and “TIndexBuffer” an array of indices into it. Subsequent indices form the vertices of polygons; for example, every three successive indices form a triangle. These data are loaded from the COLLADA file, but you cannot use them directly. The process requires additional operations, because this application-independent file format stores the attribute indices in separate tables. It is the task of the “TModel.GetMesh” method to recognize the geometry being loaded.
- In the first step, the mesh data are loaded into independent arrays of records “TVertexSource”.
- Next, the data are distributed from the arrays into triangles. Attribute indices can be stored in a COLLADA file in several ways; to maintain compatibility between the different variants, two procedures, “LoadPolygons” and “LoadTriangles”, are used. The exact manner of their operation requires an analysis of the COLLADA file structure, which is not the main focus of this document.
- The result of these operations is a collection of triangles that could already be passed to the graphics card and displayed. In the next step, however, the “GetOptimizedVertexArrayData” procedure is executed. Its task is to optimize the mesh so that vertices with identical attributes are merged into one and the corresponding indices in the “TIndexBuffer” are updated.
- The last step of loading the mesh is to fetch material information from previously prepared lists. This is done by searching the list of available materials by name.
Processing images, effects and materials
The material system is based on relationships between three libraries:
In the demo we confine ourselves to loading the textures stored in “Library_images” and creating the materials that refer to them, without using library-specific effects at the model level. We could have simply loaded images from disk using the “TBitmap” class. However, to get the most out of the texture information stored in a COLLADA file, we wrote a “TTgaBitmap” class that adds capabilities specific to the TGA graphic format to “TBitmap”. This gives the best support for the transparency channel. Any material can contain multiple textures that are used in the model. The three most important for us are shown in the table below; “RGB” and “Alpha” are texture channels.
Sample textures used in a monkey model are shown in the figure below:
- ambient color - the color of static light influencing the object. It was used to light up the fiery fragments of the monkey.
- diffuse color - the color of an object with all effects excluded, a kind of skin.
- transparent intensity - corresponds to the strength of transparency, from 0 to 255. If the pixel value is “0”, the pixel is not displayed; “127” is half translucent, and so on.
- normal color - an encoded normal vector of the surface, used for mapping the details of a complex model onto a mesh with low detail. The effect is visible only when light is added to the model. In the case of the monkey model, the details are visible in the form of fur.
- specular intensity - a factor determining the strength of reflection. It gives a gloss; in the monkey model it was used for the eyes.
Processing visual scenes
All geometry data are stored in the local coordinates of a model. To view the model as in a graphical editor, we must apply a transformation matrix to it. For this purpose we load from the COLLADA file all the nodes of the tree holding the scene, together with the transformations of the geometry they refer to.
The “GetNode” procedure takes a node of the model and calculates its transformation matrix. The COLLADA format has several ways of storing transformation data, including, for example, rotation angles, position and scale, or, as in newer versions of the format, a ready-made transformation matrix.
For the purposes of the demo we created a camera-handling system (MV.CameraLookAt.pas) that allows observing an object from a distance by rotating around a point in 3D space. This task was achieved using basic trigonometric formulas. We wrote a “SphericalToRect” function that converts a point from spherical to Cartesian coordinates. Given the angle of rotation around the vertical axis, the inclination, and the distance, you can calculate the position of the camera in 3D space.
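A minimal sketch of such a conversion could look as follows. We assume here that the rotation parameter holds the azimuth in its X component and the inclination in Y (both in radians), and that Z is the vertical axis; the exact signature in MV.CameraLookAt.pas may differ:

```pascal
// Sketch of a spherical-to-Cartesian conversion with Z as the vertical axis.
// ARotation.X = rotation around the vertical axis, ARotation.Y = inclination.
function SphericalToRect(const ARotation: TPointF; ADistance: Single): TVector3D;
begin
  Result := Vector3D(
    ADistance * Cos(ARotation.Y) * Cos(ARotation.X),
    ADistance * Cos(ARotation.Y) * Sin(ARotation.X),
    ADistance * Sin(ARotation.Y));
end;
```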
Using the mathematical functions provided by FireMonkey, we call “MatrixLookAtRH” to determine the transformation matrix of the camera as follows:
function TCameraLookAt.GetCameraMatrix: TMatrix3D;
begin
  FEye := Vector3DAdd(FTarget, SphericalToRect(FRotation, FDistance));
  Result := MatrixLookAtRH(FEye, FTarget, Vector3D(0, 0, 1));
end;
The first parameter for the “MatrixLookAtRH” procedure is the location of a camera (“FEye”), the second is the target and the third is the vertical axis.
In this way, we created a fully functional camera, ready for use in a 3D editor. The only remaining part is handling mouse events for rotation, shifting the point of view, and changing the distance. Using FireMonkey this can be done as follows:
procedure TfMain.ViewportMouseMove(Sender: TObject; Shift: TShiftState; X,
  Y: Single);
var
  LMouseDelta: TPointF;
begin
  // Calculate the delta between the previous and current mouse position
  LMouseDelta := PointF(FMousePosition.X - X, FMousePosition.Y - Y);
  // Save the current mouse position as the previous one
  FMousePosition := PointF(X, Y);
  if ssLeft in Shift then
    FCamera.Rotate(LMouseDelta.X * FRotSpeed, LMouseDelta.Y * FRotSpeed)
  else if ssRight in Shift then
    FCamera.Move(LMouseDelta.X * FMoveSpeed, LMouseDelta.Y * FMoveSpeed, 0);
end;
procedure TfMain.ViewportMouseWheel(Sender: TObject; Shift: TShiftState;
  WheelDelta: Integer; var Handled: Boolean);
begin
  FCamera.Zoom(WheelDelta * FZoomSpeed);
end;
The camera can be controlled by using “Rotate”, “Move” and “Zoom” procedures. Parameter values are deltas that describe how much a given value needs to change.
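For example, the camera could also be driven directly from code; the numeric deltas below are arbitrary illustrative values, not taken from the demo:

```pascal
// Illustrative deltas only; Rotate, Move and Zoom are the procedures described above.
FCamera.Rotate(0.3, 0.1);    // rotate around the vertical axis and tilt slightly
FCamera.Move(1.0, 0.0, 0);   // shift the observed target point
FCamera.Zoom(-2.0);          // bring the camera closer to the target
```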
Shader Compilation in Delphi
Shaders are short computer programs, often written in a special language (a “shader language”), that describe the properties of pixels and vertices. Using them allows for much more complicated and impressive visual effects associated, for example, with lighting and texturing.
The most popular modern shader languages are:
- GLSL - OpenGL Shading Language (OpenGL fragment)
- HLSL - High Level Shading Language (DirectX shading language)
- Cg - C for graphics (developed by nVidia)
For programs targeting Windows, the current version of FireMonkey lets you use compiled HLSL shaders.
In this demo we are going to focus on supporting shaders on Windows.
To be able to recompile the shaders for later use in FireMonkey, download the latest "DirectX 9.0 SDK". This SDK contains the FXC shader compiler, located in its "Utilities" folder.
In order to be able to automatically recompile shaders used in the “Model Viewer” demo, we have placed the FXC tool in the demo “bin” folder.
The project "Model Viewer" uses the "Build Events" available in "Project Options", which automates the process of compiling HLSL source code to binary format. This event is triggered when recompiling a project.
More specifically we have used the "Pre-Build" event, which calls the script “CompileDXShaders.bat” from the “bin” directory.
for %%f in ("%SPATH%*.dxps") do (
  call "%BIN%fxc.exe" /Tps_2_0 /Fo "%SPATH%%%~nf.ps.fxo" "%%f" /Zpc
)
for %%f in ("%SPATH%*.dxvs") do (
  call "%BIN%fxc.exe" /Tvs_1_1 /Fo "%SPATH%%%~nf.vs.fxo" "%%f" /Zpc
)
for %%f in ("%SPATH%*.dxps") do (
  call "%BIN%fxc.exe" /Tps_2_0 /Fc "%SPATH%%%~nf.ps.fxc" "%%f" /Zpc
)
for %%f in ("%SPATH%*.dxvs") do (
  call "%BIN%fxc.exe" /Tvs_1_1 /Fc "%SPATH%%%~nf.vs.fxc" "%%f" /Zpc
)
The script compiles all the shaders that are located in the “..\shared\shaders” directory.
All files with the "dxps" and "dxvs" extensions found in the "..\shared\shaders\" directory will be compiled (as vs_1_1 vertex shaders and ps_2_0 pixel shaders) into two types of output files:
- /Fo (*.fxo) - the resulting binary file, loaded directly in FireMonkey.
- /Fc (*.fxc) - an assembly language listing file. This is a text file from which you can read the register indexes of the values submitted to the shader “uniform” registers.
In order to test this mechanism we can create two test shaders in text files and save them to “..\shared\shaders” folder.
Listing vertex shader (test.dxvs):
uniform float4x4 uCameraMatrix;
uniform float4x4 uTransformation;

struct VS_IN  { float4 pos : POSITION; float2 tex : TEXCOORD0; };
struct VS_OUT { float4 pos : POSITION; float2 tex : TEXCOORD0; };

VS_OUT main(VS_IN input)
{
  VS_OUT res;
  res.pos = mul(mul(input.pos, uTransformation), uCameraMatrix);
  res.tex = input.tex;
  return res;
}
Listing pixel shader (test.dxps):
sampler2D diffuseMap;

float4 main(float2 tex : TEXCOORD0) : COLOR
{
  return tex2D(diffuseMap, tex);
}
These are some of the simplest possible shaders, and they perform the following operations. The vertex shader's input attributes pass the vertex position (pos) and texture coordinates (tex). The vertex position is transformed by the model and camera matrices. The texture coordinates are sent on unchanged. The pixel shader, in turn, reads the pixel color at coordinates (tex) from the “diffuseMap” texture and returns it as the result.
Shaders generated this way are loaded as follows:
FPixelShader := Context.CreatePixelShader(
  TFile.ReadAllBytes(SHADERS_PATH + 'test.ps.fxo'));
FVertexShader := Context.CreateVertexShader(
  TFile.ReadAllBytes(SHADERS_PATH + 'test.vs.fxo'));
“TFile.ReadAllBytes” loads the file into memory and returns it as a dynamic array.
During the creation of the demonstration programs we focused on achieving the best possible results. This required programming the visual effects in common use today with shaders. The subject of writing shaders is voluminous and requires an understanding of computer graphics, physics, computational geometry, and mathematics. We implemented a Phong lighting model for a single light source, with support for bump mapping and static light maps. We will not delve into the mathematical formulas, because they are out of the scope of this article; please refer to the relevant literature on writing shaders.
The final result of our work is as follows:
FireMonkey allows for two rendering modes. The "Fixed Pipeline" is no longer supported by modern graphics libraries, so the preferred "Programmable Pipeline" is used, which relies on shaders for rendering graphics.
Vertices are processed by a vertex shader. The result is then passed to rasterization (filling) of the triangle's area, where we in turn have access to interpolated vertex data and texture pixels. The result of the calculations is the pixel color, which is written to the screen buffer or, optionally, to a texture. Data are transmitted to the shaders through the TContext class in several ways:
- Varying - in the case of data passed to a vertex shader, these are the data stored in the “TVertexBuffer” class; after processing, they are passed on to the pixel shader.
Context.DrawTrianglesList(VertexData, IndexData, 1);
- Uniforms - for example, material colors, the position of the light, or the model matrix. These are fixed settings for all shader executions within the same mesh. In FireMonkey we pass vectors and matrices using the following calls:
The index corresponds to the register number from the generated shader. Indexes can be read from the *.fxc files, as described in the section on compiling shaders.
- TextureUnits - similar to uniforms, but used for textures.
The “unit” parameter is the texture unit number. We have decided on the following numbering scheme:
0 - diffuse
1 - normal
2 - ambient
Rendering the previously prepared model is done as follows:
Context.SetVertexShaderMatrix(0, FCamera.GetCameraMatrix(Context));
In the main scene-rendering loop, model data are transmitted to the graphics card. This is done using the context belonging to the component that fires the “OnRender” event.
The “TModel.Render” method takes as a parameter the context in which the component will be drawn. In particular, it deals with several tasks:
- Setting the model's transformation matrix in the scene.
- Activating the textures of the materials used by the model.
- Transmitting vertex attributes.
- Drawing triangles on the basis of the vertex indices.
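The steps above could be sketched roughly as follows. The field names and the texture call are our assumptions; only “SetVertexShaderMatrix” and “DrawTrianglesList” appear elsewhere in this article:

```pascal
procedure TModel.Render(const Context: TContext);
begin
  // 1. Set the model's transformation matrix in the scene
  //    (register 1 is an assumed index; 0 is used for the camera matrix)
  Context.SetVertexShaderMatrix(1, FTransformation);
  // 2. Activate the material's textures (call name is an assumption)
  Context.SetTextureUnit(0, FDiffuseMap);
  // 3-4. Transmit vertex attributes and draw triangles from the index buffer
  Context.DrawTrianglesList(FVertexBuffer, FIndexBuffer, 1);
end;
```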
After drawing all the models, it is necessary to disable the shaders by passing “nil” to the context.