Friday, January 29, 2010

OpenGL Transformation


Geometric data such as vertex positions and normal vectors are transformed by the Vertex Operation and Primitive Assembly stages of the OpenGL pipeline before the rasterization process.
OpenGL vertex transformation

Object Coordinates

It is the local coordinate system of an object: the initial position and orientation of the object before any transform is applied. In order to transform objects, use glRotatef(), glTranslatef(), glScalef().

Eye Coordinates

It is yielded by multiplying the GL_MODELVIEW matrix and the object coordinates. Objects are transformed from object space to eye space using the GL_MODELVIEW matrix in OpenGL. The GL_MODELVIEW matrix is a combination of the Model and View matrices (Mview * Mmodel). The Model transform converts from object space to world space, and the View transform converts from world space to eye space.
OpenGL eye coordinates
Note that there is no separate camera (view) matrix in OpenGL. Therefore, in order to simulate transforming the camera or view, the scene (3D objects and lights) must be transformed with the inverse of the view transformation. In other words, OpenGL defines that the camera is always located at (0, 0, 0) and facing the -Z axis in eye space, and it cannot be transformed. See more details of the GL_MODELVIEW matrix in ModelView Matrix.
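For example, a minimal sketch of simulating a camera placed at (0, 0, 5) looking down -Z: instead of moving the camera, translate the whole scene by the inverse (drawScene() stands for your own drawing code):
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glTranslatef(0, 0, -5);   // inverse of placing the camera at z = +5
drawScene();              // assumed user drawing function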
Normal vectors are also transformed from object coordinates to eye coordinates for the lighting calculation. Note that normals are transformed differently from vertices: a normal vector is multiplied by the transpose of the inverse of the GL_MODELVIEW matrix. See more details in Normal Vector Transformation.
normal vector transformation

Clip Coordinates

It is yielded by applying the GL_PROJECTION matrix to the eye coordinates. Objects are clipped out of the viewing volume (frustum). The frustum is used to determine how objects are projected onto the screen (perspective or orthographic) and which objects, or portions of objects, are clipped out of the final image. See more details of the GL_PROJECTION matrix in Projection Matrix.
OpenGL clip coordinates

Normalized Device Coordinates (NDC)

It is yielded by dividing the clip coordinates by w. This is called the perspective division. It is more like window (screen) coordinates, but has not been translated and scaled to screen pixels yet. The range of values is now normalized from -1 to 1 in all 3 axes.
OpenGL Normalized Device Coordinates
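In other words, for a clip-space point (x, y, z, w), the division is applied per component;
x_ndc = x_clip / w_clip
y_ndc = y_clip / w_clip
z_ndc = z_clip / w_clip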

Window Coordinates (Screen Coordinates)

It is yielded by applying the normalized device coordinates (NDC) to the viewport transformation. The NDC are scaled and translated in order to fit into the rendering screen. The window coordinates are finally passed to the rasterization process of the OpenGL pipeline to become fragments. The glViewport() command is used to define the rectangle of the rendering area where the final image is mapped, and glDepthRange() is used to determine the z range of the window coordinates. The window coordinates are computed from the parameters of these 2 functions;
glViewport(x, y, w, h);
glDepthRange(n, f);
OpenGL Window Coordinates
The viewport transform formula is simply acquired from the linear relationship between NDC and the window coordinates;
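With the parameters of glViewport(x, y, w, h) and glDepthRange(n, f), the standard formulas work out to;
x_window = (w / 2) * x_ndc + (x + w / 2)
y_window = (h / 2) * y_ndc + (y + h / 2)
z_window = ((f - n) / 2) * z_ndc + (f + n) / 2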


OpenGL Transformation Matrix
OpenGL Transform Matrix
OpenGL uses a 4 x 4 matrix for transformations. Notice that the 16 elements of the matrix are stored as a 1D array in column-major order. You need to transpose this matrix if you want to convert it to the standard convention, row-major format.
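As a quick sketch, the mapping between the 1D array and the 4 x 4 matrix, and reading the current matrix back with glGetFloatv(), looks like this;
GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);  // read the current matrix, column-major

// column-major layout:
// | m[0]  m[4]  m[8]   m[12] |
// | m[1]  m[5]  m[9]   m[13] |
// | m[2]  m[6]  m[10]  m[14] |
// | m[3]  m[7]  m[11]  m[15] |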
OpenGL has 4 different types of matrices; GL_MODELVIEW, GL_PROJECTION, GL_TEXTURE, and GL_COLOR. You can switch the current type by using glMatrixMode() in your code. For example, in order to select the GL_MODELVIEW matrix, use glMatrixMode(GL_MODELVIEW).

Model-View Matrix (GL_MODELVIEW)

The GL_MODELVIEW matrix combines the viewing matrix and the modeling matrix into one matrix. In order to transform the view (camera), you need to move the whole scene with the inverse transformation. gluLookAt() is particularly used to set the viewing transform.
4 columns of GL_MODELVIEW matrix
The 3 matrix elements of the rightmost column (m12, m13, m14) are for the translation transformation, glTranslatef(). The element m15 is the homogeneous coordinate, used specially for projective transformation.
The 3 element sets, (m0, m1, m2), (m4, m5, m6) and (m8, m9, m10), are for Euclidean and affine transformations, such as rotation glRotatef() or scaling glScalef(). Note that these 3 sets actually represent 3 orthogonal axes;
  • (m0, m1, m2)   : +X axis, left vector, (1, 0, 0) by default
  • (m4, m5, m6)   : +Y axis, up vector, (0, 1, 0) by default
  • (m8, m9, m10) : +Z axis, forward vector, (0, 0, 1) by default
We can directly construct the GL_MODELVIEW matrix from angles or a lookat vector without using the OpenGL transform functions. Here is some useful code to build the GL_MODELVIEW matrix (a small sketch also follows this list):
  • Angles to Axes
  • Lookat to Axes
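As a rough sketch (not the linked code), a model-view matrix built from three orthogonal axis vectors and a position can be loaded directly with glLoadMatrixf(); the axis and position values below are only placeholders:
GLfloat left[3]    = {1, 0, 0};   // +X axis (m0, m1, m2)
GLfloat up[3]      = {0, 1, 0};   // +Y axis (m4, m5, m6)
GLfloat forward[3] = {0, 0, 1};   // +Z axis (m8, m9, m10)
GLfloat pos[3]     = {0, 0, -5};  // translation (m12, m13, m14)

GLfloat m[16] = {
    left[0],    left[1],    left[2],    0,   // column 0
    up[0],      up[1],      up[2],      0,   // column 1
    forward[0], forward[1], forward[2], 0,   // column 2
    pos[0],     pos[1],     pos[2],     1    // column 3
};

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(m);                 // replace the current matrix with m (column-major)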
Note that OpenGL multiplies the matrices in reverse order if multiple transforms are applied to a vertex. For example, if a vertex is transformed by MA first and by MB second, then OpenGL computes MB x MA first before multiplying it by the vertex. So, the transform specified last in your code is applied to the vertex first, and the transform specified first is applied last.
// Note that the object will be translated first then rotated
glRotatef(angle, 1, 0, 0);   // rotate object angle degree around X-axis
glTranslatef(x, y, z);       // move object to (x, y, z)
drawObject();

Projection Matrix (GL_PROJECTION)

The GL_PROJECTION matrix is used to define the frustum. This frustum determines which objects or portions of objects will be clipped out. Also, it determines how the 3D scene is projected onto the screen. (Please see more details on how to construct the projection matrix.)
OpenGL provides 2 functions for the GL_PROJECTION transformation. glFrustum() produces a perspective projection, and glOrtho() produces an orthographic (parallel) projection. Both functions require 6 parameters to specify 6 clipping planes; the left, right, bottom, top, near and far planes. The 8 vertices of the viewing frustum are shown in the following image.
OpenGL Perspective Viewing Frustum
The vertices of the far (back) plane can be simply calculated by the ratio of similar triangles; for example, the left of the far plane is left_far = left_near * (far / near).
OpenGL Orthographic Frustum
For orthographic projection, this ratio will be 1, so the left, right, bottom and top values of the far plane will be the same as on the near plane.
You may also use the gluPerspective() and gluOrtho2D() functions with fewer parameters. gluPerspective() requires only 4 parameters; the vertical field of view (FOV), the aspect ratio of width to height, and the distances to the near and far clipping planes. The equivalent conversion from gluPerspective() to glFrustum() is described in the following code.
// This creates a symmetric frustum.
// It converts to 6 params (l, r, b, t, n, f) for glFrustum()
// from given 4 params (fovy, aspect, near, far)
void makeFrustum(double fovY, double aspectRatio, double front, double back)
{
    const double DEG2RAD = 3.14159265 / 180;

    double tangent = tan(fovY/2 * DEG2RAD);   // tangent of half fovY
    double height = front * tangent;          // half height of near plane
    double width = height * aspectRatio;      // half width of near plane

    // params: left, right, bottom, top, near, far
    glFrustum(-width, width, -height, height, front, back);
}

An example of an asymmetric frustum
However, you have to use glFrustum() directly if you need to create an asymmetric (non-symmetrical) viewing volume. For example, if you want to render a wide scene onto 2 adjoining screens, you can break the frustum down into 2 asymmetric frustums (left and right), then render the scene with each frustum.
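A minimal sketch of that idea, assuming w and h are the half-width and half-height of the near plane and n, f are the near/far distances:
glMatrixMode(GL_PROJECTION);

// left screen: asymmetric frustum covering -w..0
glLoadIdentity();
glFrustum(-w, 0, -h, h, n, f);
// ... draw the scene into the left window/viewport ...

// right screen: asymmetric frustum covering 0..+w
glLoadIdentity();
glFrustum(0, w, -h, h, n, f);
// ... draw the scene into the right window/viewport ...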

Texture Matrix (GL_TEXTURE)

Texture coordinates (s, t, r, q) are multiplied by the GL_TEXTURE matrix before any texture mapping. By default it is the identity, so the texture will be mapped to objects exactly where you assigned the texture coordinates. By modifying GL_TEXTURE, you can slide, rotate, stretch, and shrink the texture.
// rotate texture around X-axis
glMatrixMode(GL_TEXTURE);
glRotatef(angle, 1, 0, 0);

Color Matrix (GL_COLOR)

The color components (r, g, b, a) are multiplied by the GL_COLOR matrix. It can be used for color space conversion and color component swapping. The GL_COLOR matrix is not commonly used and requires the GL_ARB_imaging extension.

Other Matrix Routines

  • glPushMatrix() : push the current matrix onto the current matrix stack.
  • glPopMatrix() : pop the current matrix from the current matrix stack.
  • glLoadIdentity() : set the current matrix to the identity matrix.
  • glLoadMatrix{fd}(m) : replace the current matrix with the matrix m.
  • glLoadTransposeMatrix{fd}(m) : replace the current matrix with the row-major ordered matrix m.
  • glMultMatrix{fd}(m) : multiply the current matrix by the matrix m, and store the result in the current matrix.
  • glMultTransposeMatrix{fd}(m) : multiply the current matrix by the row-major ordered matrix m, and store the result in the current matrix.
  • glGetFloatv(GL_MODELVIEW_MATRIX, m) : return the 16 values of the GL_MODELVIEW matrix in m.
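A short usage sketch of these routines (drawObject() stands for your own drawing code):
glMatrixMode(GL_MODELVIEW);
glPushMatrix();                   // save the current model-view matrix
    glTranslatef(1, 0, 0);        // apply a temporary transform
    drawObject();
glPopMatrix();                    // restore the saved matrix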

Thursday, January 28, 2010

RGB to Grayscale


In order to convert an RGB or BGR color image to a grayscale image, we frequently use the following conversion formulae:
Luminance = 0.3086 * Red + 0.6094 * Green + 0.0820 * Blue
Luminance = 0.299 * Red + 0.587 * Green + 0.114 * Blue



Notice that the luminance intensity is a sum of differently weighted color components. If we use the same weight, for example (R + G + B) / 3, then pure red, pure green and pure blue result in the same grayscale level. The other reason for using different weights is that the human eye is more sensitive to the green and red components than to the blue channel.

Fast Approximation

Implementing the conversion formula is quite simple in C/C++;
// interleaved color, RGBRGB...
for (int i = 0; i < size; i += 3)
{
    // add weighted sum then round
    out[i] = (unsigned char)(0.299*in[i] + 0.587*in[i+1] + 0.114*in[i+2] + 0.5);
}
Since multiplication in the integer domain is much faster than in float, we rewrite the formula like this for faster computation;
Luminance = (2 * Red + 5 * Green + 1 * Blue) / 8
Furthermore, we can optimize the code by replacing the multiplication and division with bit-shift operators and additions. I have found this to be about 4 times faster than the original float computation.
for (int i = 0; i < size; i += 3)
{
    // faster computation, (2*R + 5*G + B) / 8
    int tmp = in[i] << 1;               // 2 * red
    tmp += (in[i+1] << 2) + in[i+1];    // 5 * green (parentheses needed: << binds looser than +)
    tmp += in[i+2];                     // 1 * blue
    out[i] = (unsigned char)(tmp >> 3); // divide by 8
}

OpenGL Pipeline

The OpenGL pipeline has a series of processing stages in order. Two kinds of graphics data, vertex-based data and pixel-based data, are processed through the pipeline, combined together, then written into the frame buffer. Notice that OpenGL can send the processed data back to your application. (See the grey colour lines.)
OpenGL Pipeline

Display List

A display list is a group of OpenGL commands that have been stored (compiled) for later execution. All data, geometry (vertex) and pixel data, can be stored in a display list. It may improve performance since commands and data are cached in a display list. When an OpenGL program runs over a network, you can reduce data transmission by using display lists: since display lists are part of the server state and reside on the server machine, the client machine needs to send commands and data to the server's display list only once.
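A minimal sketch of compiling and calling a display list (drawObject() stands for your own drawing code):
GLuint listId = glGenLists(1);    // reserve one display-list name

glNewList(listId, GL_COMPILE);    // start recording, commands are not executed yet
    drawObject();
glEndList();                      // stop recording

// later, in the render loop
glCallList(listId);               // execute the cached commands

// when no longer needed
glDeleteLists(listId, 1);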
 

Vertex Operation

Each vertex and normal is transformed by the GL_MODELVIEW matrix (from object coordinates to eye coordinates). Also, if lighting is enabled, the lighting calculation per vertex is performed using the transformed vertex and normal data. This lighting calculation updates the color of the vertex.
 

Primitive Assembly

After the vertex operation, the primitives (points, lines, and polygons) are transformed once again by the projection matrix and then clipped by the viewing volume clipping planes; from eye coordinates to clip coordinates. After that, the perspective division by w occurs and the viewport transform is applied in order to map the 3D scene to window space coordinates. The last thing done in primitive assembly is the culling test, if culling is enabled.
 

Pixel Transfer Operation

After the pixels from the client's memory are unpacked (read), scaling, bias, mapping and clamping operations are performed on the data. These operations are called the pixel transfer operations. The transferred data are either stored in texture memory or rasterized directly to fragments.
 

Texture Memory

Texture images are loaded into texture memory to be applied onto geometric objects.
 

Rasterization

Rasterization is the conversion of both geometric and pixel data into fragments. Fragments form a rectangular array containing color, depth, line width, point size and antialiasing calculations (GL_POINT_SMOOTH, GL_LINE_SMOOTH, GL_POLYGON_SMOOTH). If the shading mode is GL_FILL, then the interior pixels (area) of the polygon will be filled at this stage. Each fragment corresponds to a pixel in the frame buffer.
 

Fragment Operation

It is the last process for converting fragments to pixels in the frame buffer. The first process in this stage is texel generation; a texture element is generated from texture memory and applied to each fragment. Then fog calculations are applied. After that, several fragment tests follow in order; Scissor Test ⇒ Alpha Test ⇒ Stencil Test ⇒ Depth Test.
Finally, blending, dithering, logical operation and masking by bitmask are performed, and the actual pixel data are stored in the frame buffer.
 

Feedback

OpenGL can return most of the current states and information through the glGet*() and glIsEnabled() commands. Furthermore, you can read a rectangular area of pixel data from the frame buffer using glReadPixels(), and get fully transformed vertex data using glRenderMode(GL_FEEDBACK). glCopyPixels() does not return pixel data to system memory, but copies them to another frame buffer, for example, from the front buffer to the back buffer.
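For example, a small sketch of reading back a block of pixels (width and height are assumed to be the size of your rendering area):
unsigned char* pixels = new unsigned char[width * height * 4];
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);  // read from the frame buffer
// ... use the pixel data ...
delete [] pixels;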

OpenGL Introduction


OpenGL is a software interface to graphics hardware. It is designed as a hardware-independent interface to be used for many different hardware platforms. OpenGL programs can also work across a network (client-server paradigm) even if the client and server are different kinds of computers. The client in OpenGL is a computer on which an OpenGL program actually executes, and the server is a computer that performs the drawings.

OpenGL uses the prefix gl for core OpenGL commands and glu for commands in the OpenGL Utility Library. Similarly, OpenGL constants begin with GL_ and use all capital letters. OpenGL also uses suffixes to specify the number of arguments and the data type passed to an OpenGL call.
glColor3f(1, 0, 0);         // set rendering color to red with 3 floating numbers
glColor4d(0, 1, 0, 0.2);    // set color to green with 20% of opacity (double)
glVertex3fv(vertex);        // set x-y-z coordinates using pointer

State Machine

OpenGL is a state machine. Modes and attributes in OpenGL remain in effect until they are changed. Most state variables can be enabled or disabled with glEnable() or glDisable(). You can also check if a state is currently enabled or disabled with glIsEnabled(). You can save or restore a collection of state variables to/from the attribute stack using glPushAttrib() and glPopAttrib(). The GL_ALL_ATTRIB_BITS parameter can be used to save/restore all states. The attribute stack depth must be at least 16 in the OpenGL standard.
(Check your maximum stack size with glinfo.)
 
glPushAttrib(GL_LIGHTING_BIT);    // elegant way to change states because
    glDisable(GL_LIGHTING);       // you can restore exact previous states
    glEnable(GL_COLOR_MATERIAL);  // after calling glPopAttrib()
glPushAttrib(GL_COLOR_BUFFER_BIT);
    glDisable(GL_DITHER);
    glEnable(GL_BLEND);

glPopAttrib();                    // restore GL_COLOR_BUFFER_BIT
glPopAttrib();                    // restore GL_LIGHTING_BIT

glBegin() and glEnd()

In order to draw geometric primitives (points, lines, triangles, etc) in OpenGL, you can specify a list of vertex data between glBegin() and glEnd(). This method is called immediate mode. (You may draw geometric primitives using other methods such as vertex array.)
glBegin(GL_TRIANGLES);
    glColor3f(1, 0, 0);     // set vertex color to red
    glVertex3fv(v1);        // draw a triangle with v1, v2, v3
    glVertex3fv(v2);
    glVertex3fv(v3);
glEnd();
There are 10 types of primitives in OpenGL; GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON.

glFlush() & glFinish()

Similar to a computer IO buffer, OpenGL commands are not executed immediately. All commands are stored in buffers first, including network buffers and buffers in the graphics accelerator itself, and await execution until the buffers are full. For example, if an application runs over the network, it is much more efficient to send a collection of commands in a single packet than to send each command over the network one at a time.
glFlush() empties all commands in these buffers and forces all pending commands to be executed immediately without waiting for the buffers to fill. Therefore glFlush() guarantees that all OpenGL commands made up to that point will complete execution in a finite amount of time after calling glFlush(). However, glFlush() does not wait until previous executions are complete and may return immediately to your program, so you are free to send more commands even though previously issued commands are not finished.
glFinish() flushes the buffers and forces commands to begin execution as glFlush() does, but glFinish() blocks other OpenGL commands and waits until all execution is complete. Consequently, glFinish() does not return to your program until all previously called commands are complete. It might be used to synchronize tasks or to measure the exact elapsed time that certain OpenGL commands take to execute.
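For example, a rough sketch of measuring how long certain commands take, using glFinish() and clock() from <ctime> (drawScene() stands for your own drawing code):
glFinish();                       // make sure previously issued work is done
clock_t start = clock();

drawScene();                      // issue the commands to be measured
glFinish();                       // block until they are fully executed

double seconds = (double)(clock() - start) / CLOCKS_PER_SEC;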

glVertexArray Explained

Instead of specifying individual vertex data in immediate mode (between glBegin() and glEnd() pairs), you can store vertex data in a set of arrays including vertex coordinates, normals, texture coordinates and color information. And you can draw geometric primitives by dereferencing the array elements with array indices.
Take a look at the following code to draw a cube in immediate mode.
Each face needs 4 glVertex*() calls to make a quad; for example, the quad at the front is v0-v1-v2-v3. A cube has 6 faces, so the total number of glVertex*() calls is 24. If you also specify normals and colors for the corresponding vertices, the number of function calls triples: 24 glColor*() and 24 glNormal*() calls on top of the 24 glVertex*() calls.
The other thing that you should notice is that the vertex "v0" is shared with 3 adjacent polygons: the front, right and up faces. In immediate mode, you have to provide the shared vertex 3 times, once for each face, as shown in the code.
glBegin(GL_QUADS);      // draw a cube with 6 quads
    glVertex3fv(v0);    // front face
    glVertex3fv(v1);
    glVertex3fv(v2);
    glVertex3fv(v3);

    glVertex3fv(v0);    // right face
    glVertex3fv(v3);
    glVertex3fv(v4);
    glVertex3fv(v5);

    glVertex3fv(v0);    // up face
    glVertex3fv(v5);
    glVertex3fv(v6);
    glVertex3fv(v1);

    ...                 // draw other 3 faces

glEnd();
Using vertex arrays reduces the number of function calls and the redundant usage of shared vertices; therefore, it may increase rendering performance. Here, 3 different OpenGL functions for using vertex arrays are explained: glDrawArrays(), glDrawElements() and glDrawRangeElements(). (An even better approach is to use vertex buffer objects (VBO) or display lists.)

Initialization

OpenGL provides the glEnableClientState() and glDisableClientState() functions to activate and deactivate 6 different types of arrays. Plus, there are 6 functions to specify the exact positions (addresses) of the arrays, so OpenGL can access the arrays in your application.
  • glVertexPointer():  specify pointer to vertex coords array
  • glNormalPointer():  specify pointer to normal array
  • glColorPointer():  specify pointer to RGB color array
  • glIndexPointer():  specify pointer to indexed color array
  • glTexCoordPointer():  specify pointer to texture cords array
  • glEdgeFlagPointer():  specify pointer to edge flag array
Each specifying function requires different parameters; please look at the OpenGL function manuals. Edge flags are used to mark whether a vertex is on a boundary edge or not. Hence, only the edges where the edge flags are on will be visible if glPolygonMode() is set to GL_LINE.
Notice that vertex arrays are located in your application (system memory), which is on the client side, and OpenGL on the server side gets access to them. That is why there are distinct commands for vertex arrays, glEnableClientState() and glDisableClientState(), instead of glEnable() and glDisable().

glDrawArrays()

glDrawArrays() reads vertex data from the enabled arrays by marching straight through the array without skipping or hopping. Because glDrawArrays() does not allow hopping around the vertex arrays, you still have to repeat the shared vertices once per face.
glDrawArrays() takes 3 arguments. The first is the primitive type. The second parameter is the starting offset of the array. The last parameter is the number of vertices to pass to the rendering pipeline of OpenGL.
For the above example of drawing a cube, the first parameter is GL_QUADS, the second is 0, which means starting from the beginning of the array, and the last parameter is 24: a cube requires 6 faces and each face needs 4 vertices to build a quad, 6 × 4 = 24.

GLfloat vertices[] = {...}; // 24 of vertex coords
...
// activate and specify pointer to vertex array
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

// draw a cube
glDrawArrays(GL_QUADS, 0, 24);

// deactivate vertex arrays after drawing
glDisableClientState(GL_VERTEX_ARRAY);
As a result of using glDrawArrays(), you can replace 24 glVertex*() calls with a single glDrawArrays() call. However, we still need to duplicate the shared vertices, so the number of vertices defined in the array is still 24 instead of 8. glDrawElements() is the solution to reduce the number of vertices in the array, allowing less data to be transferred to OpenGL.
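
glDrawElements()

glDrawElements() draws primitives by hopping around the vertex array with a separate index array. Here is a sketch of drawing the same cube, assuming the 8-entry vertex array and the 24-entry index array that the glDrawRangeElements() example below also uses;
GLfloat vertices[] = {...};     // 8 of vertex coords
GLubyte indices[] = {0,1,2,3,   // 24 of indices
                     0,3,4,5,
                     0,5,6,1,
                     1,6,7,2,
                     7,4,3,2,
                     4,7,6,5};
...
// activate and specify pointer to vertex array
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

// draw a cube: 24 indices referencing only 8 unique vertices
glDrawElements(GL_QUADS, 24, GL_UNSIGNED_BYTE, indices);

// deactivate vertex arrays after drawing
glDisableClientState(GL_VERTEX_ARRAY);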
The size of the vertex coordinates array is now 8, which is exactly the number of vertices in the cube without any redundant entries.
Note that the data type of the index array is GLubyte instead of GLuint or GLushort. It should be the smallest data type that can fit the maximum index number in order to reduce the size of the index array; otherwise, the larger index array may cause a performance drop. Since the vertex array contains 8 vertices, GLubyte is enough to store all indices.

Different normals at shared vertex
Another thing you should consider is the normal vectors at the shared vertices. If the normals of the adjacent polygons at a shared vertex are all different, then as many normal vectors as the number of faces should be specified, one for each face.

For example, the vertex v0 is shared by the front, right and up faces, but the normals cannot be shared at v0: the normal of the front face is n0, the right face normal is n1 and the up face normal is n2. In this situation, where the normal is not the same at a shared vertex, the vertex cannot be defined only once in the vertex array any more. It must be defined multiple times in the vertex coordinates array in order to match the number of elements in the normal array.

glDrawRangeElements()

Like glDrawElements(), glDrawRangeElements() is also good for hopping around the vertex array. However, glDrawRangeElements() has two more parameters (start and end index) to specify a range of vertices to be prefetched. By adding this restriction on the range, OpenGL may be able to obtain only a limited amount of vertex array data prior to rendering, which may increase performance.
The additional parameters in glDrawRangeElements() are the start and end index; OpenGL then prefetches a limited number of vertices from these values: end - start + 1. The values in the index array must lie between the start and end index. Note that not all vertices in the range (start, end) must be referenced. But, if you specify a sparsely used range, it causes unnecessary processing for many unused vertices in that range.
GLfloat vertices[] = {...};     // 8 of vertex coords
GLubyte indices[] = {0,1,2,3,   // 24 of indices
                     0,3,4,5,
                     0,5,6,1,
                     1,6,7,2,
                     7,4,3,2,
                     4,7,6,5};
...
// activate and specify pointer to vertex array
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);

// draw first half, range is 6 - 0 + 1 = 7 vertices
glDrawRangeElements(GL_QUADS, 0, 6, 12, GL_UNSIGNED_BYTE, indices);

// draw second half, range is 7 - 1 + 1 = 7 vertices
glDrawRangeElements(GL_QUADS, 1, 7, 12, GL_UNSIGNED_BYTE, indices+12);

// deactivate vertex arrays after drawing
glDisableClientState(GL_VERTEX_ARRAY);
You can find out the maximum number of vertices to be prefetched and the maximum number of indices to be referenced by using glGetIntegerv() with GL_MAX_ELEMENTS_VERTICES and GL_MAX_ELEMENTS_INDICES.
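For example;
GLint maxVertices, maxIndices;
glGetIntegerv(GL_MAX_ELEMENTS_VERTICES, &maxVertices);   // max number of vertices to prefetch
glGetIntegerv(GL_MAX_ELEMENTS_INDICES, &maxIndices);     // max number of indices to reference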
Note that glDrawRangeElements() is available in OpenGL version 1.2 or greater.

Wednesday, January 27, 2010

Winmain Explained

The WinMain function is the conventional name for the user-provided entry point for a Windows-based application. The name WinMain is used by convention by many programming frameworks. Depending on the programming framework, the call to the WinMain function can be preceded and followed by additional activities specific to that framework.
Your WinMain should initialize the application, display its main window, and enter a message retrieval-and-dispatch loop that is the top-level control structure for the remainder of the application's execution. Terminate the message loop when it receives a WM_QUIT message. At that point, your WinMain should exit the application, returning the value passed in the WM_QUIT message's wParam parameter. If WM_QUIT was received as a result of calling PostQuitMessage, the value of wParam is the value of the PostQuitMessage function's nExitCode parameter. For more information, see Creating a Message Loop.

Note:
ANSI applications can use the lpCmdLine parameter of the WinMain function to access the command-line string, excluding the program name. Note that lpCmdLine uses the LPSTR data type instead of the LPTSTR data type. This means that WinMain cannot be used by Unicode programs. The GetCommandLineW function can be used to obtain the command line as a Unicode string. Some programming frameworks might provide an alternative entry point that provides a Unicode command line. For example, the Microsoft Visual Studio C++ compiler uses the name wWinMain for the Unicode entry point.


Syntax
int WINAPI WinMain(      
    HINSTANCE hInstance,
    HINSTANCE hPrevInstance,
    LPSTR lpCmdLine,
    int nCmdShow
);
Parameters

hInstance
[in] Handle to the current instance of the application.
hPrevInstance
[in] Handle to the previous instance of the application. This parameter is always NULL. If you need to detect whether another instance already exists, create a uniquely named mutex using the CreateMutex function. CreateMutex will succeed even if the mutex already exists, but the function will return ERROR_ALREADY_EXISTS. This indicates that another instance of your application exists, because it created the mutex first. However, a malicious user can create this mutex before you do and prevent your application from starting. To prevent this situation, create a randomly named mutex and store the name so that it can only be obtained by an authorized user. Alternatively, you can use a file for this purpose. To limit your application to one instance per user, create a locked file in the user's profile directory.
lpCmdLine
[in] Pointer to a null-terminated string specifying the command line for the application, excluding the program name. To retrieve the entire command line, use the GetCommandLine function.
nCmdShow
[in] Specifies how the window is to be shown. This parameter can be one of the following values.
SW_HIDE
Hides the window and activates another window.
SW_MAXIMIZE
Maximizes the specified window.
SW_MINIMIZE
Minimizes the specified window and activates the next top-level window in the Z order.
SW_RESTORE
Activates and displays the window. If the window is minimized or maximized, the system restores it to its original size and position. An application should specify this flag when restoring a minimized window.
SW_SHOW
Activates the window and displays it in its current size and position.
SW_SHOWMAXIMIZED
Activates the window and displays it as a maximized window.
SW_SHOWMINIMIZED
Activates the window and displays it as a minimized window.
SW_SHOWMINNOACTIVE
Displays the window as a minimized window. This value is similar to SW_SHOWMINIMIZED, except the window is not activated.
SW_SHOWNA
Displays the window in its current size and position. This value is similar to SW_SHOW, except the window is not activated.
SW_SHOWNOACTIVATE
Displays a window in its most recent size and position. This value is similar to SW_SHOWNORMAL, except the window is not activated.
SW_SHOWNORMAL
Activates and displays a window. If the window is minimized or maximized, the system restores it to its original size and position. An application should specify this flag when displaying the window for the first time.
Return Value
If the function succeeds, terminating when it receives a WM_QUIT message, it should return the exit value contained in that message's wParam parameter. If the function terminates before entering the message loop, it should return zero.



Explanation of a Win32 Project under Visual Studio

To create a new Win32 project

  1. On the File menu, click New, and then click Project....
  2. In the Project Types pane, select Win32 in the Visual C++ node, and then select Win32 Project in the Templates pane.
    Type a name for the project, such as win32app. You can accept the default location, type a location, or browse to a directory where you want to save the project.
  3. On the Win32 Application Wizard, select Next.
  4. On the Win32 Application Wizard, under Application type select Windows application. Under Additional options select Empty project. Leave the remaining options as they are. Click Finish to create the project.
  5. Add a C++ file to the project by selecting Add New Item... from the Project menu. In the Add New Item dialog box, select C++ File (.cpp). Type a name for the file, such as GT_HelloWorldWin32.cpp, and click Add.

To start a Win32 application

  1. As you know, every C and C++ application must have a main function. This function is the starting point for the application. Similarly, in a Win32 application, every application must have a WinMain function. The syntax for WinMain is as follows:



    int WINAPI WinMain(HINSTANCE hInstance,
                       HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine,
                       int nCmdShow);
    
    For an explanation of the parameters and return value of this function, see WinMain Function.


  2. In addition to WinMain, each Win32 application must also have a second function which is usually called WndProc, which stands for window procedure. The syntax for WndProc is as follows:

    LRESULT CALLBACK WndProc(HWND, UINT, WPARAM, LPARAM);
    
    The purpose of this function is to handle any messages that your application receives from the operating system. When does your application receive messages from the operating system? All the time! For example, imagine that we have created a dialog box that has an OK button. When the user clicks that button, the operating system sends our application a message, which lets us know that a user pressed this button. The WndProc function is responsible for responding to that event. In our example, the appropriate response might be to close the dialog box.
     
    EXTRA READ
      More Information on  Window Procedures

    • Window Procedures: Every window has an associated window procedure, a function that processes all messages sent or posted to all windows of the class. All aspects of a window's appearance and behavior depend on the window procedure's response to these messages.
    • Each window is a member of a particular window class. The window class determines the default window procedure that an individual window uses to process its messages.
       

To add functionality to WinMain

  1. First, create inside the WinMain function a window class structure of type WNDCLASSEX. This structure contains information about your window, such as the application's icon, the background color of the window, the name to display in the title bar, the name of the window procedure function, and so on. A typical WNDCLASSEX structure follows:



    WNDCLASSEX wcex;
    
        wcex.cbSize = sizeof(WNDCLASSEX);
        wcex.style          = CS_HREDRAW | CS_VREDRAW;
        wcex.lpfnWndProc    = WndProc;
        wcex.cbClsExtra     = 0;
        wcex.cbWndExtra     = 0;
        wcex.hInstance      = hInstance;
        wcex.hIcon          = LoadIcon(hInstance, MAKEINTRESOURCE(IDI_APPLICATION));
        wcex.hCursor        = LoadCursor(NULL, IDC_ARROW);
        wcex.hbrBackground  = (HBRUSH)(COLOR_WINDOW+1);
        wcex.lpszMenuName   = NULL;
        wcex.lpszClassName  = szWindowClass;
        wcex.hIconSm        = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_APPLICATION));
    
    For an explanation of the fields of this structure, see WNDCLASSEX.



  2. Now that you have created your window class, you must register it. Use the RegisterClassEx function, and pass the window class structure as an argument:



    if (!RegisterClassEx(&wcex))
        {
            MessageBox(NULL,
                _T("Call to RegisterClassEx failed!"),
                _T("Win32 Guided Tour"),
                NULL);
    
            return 1;
        }
    



  3. Now that you have registered your class, it is time to create a window. Use the CreateWindow function as follows:



    static TCHAR szWindowClass[] = _T("win32app");
    static TCHAR szTitle[] = _T("Win32 Guided Tour Application");
    // The parameters to CreateWindow explained:
    // szWindowClass: the name of the application
    // szTitle: the text that appears in the title bar
    // WS_OVERLAPPEDWINDOW: the type of window to create
    // CW_USEDEFAULT, CW_USEDEFAULT: initial position (x, y)
    // 500, 100: initial size (width, height)
    // NULL: the parent of this window
    // NULL: this application does not have a menu bar
    // hInstance: the first parameter from WinMain
    // NULL: not used in this application
    HWND hWnd = CreateWindow(
        szWindowClass,
        szTitle,
        WS_OVERLAPPEDWINDOW,
        CW_USEDEFAULT, CW_USEDEFAULT,
        500, 100,
        NULL,
        NULL,
        hInstance,
        NULL
    );
    if (!hWnd)
    {
        MessageBox(NULL,
            _T("Call to CreateWindow failed!"),
            _T("Win32 Guided Tour"),
            NULL);
    
        return 1;
    }
    
    This function returns an HWND, which is a handle to a window. For more information, see Windows Data Types.



  4. Now that we have created the window, we can display it to the screen using the following code:



    // The parameters to ShowWindow explained:
    // hWnd: the value returned from CreateWindow
    // nCmdShow: the fourth parameter from WinMain
    ShowWindow(hWnd,
        nCmdShow);
    UpdateWindow(hWnd);
    
    So far, this window will not display much, because we have not yet implemented the WndProc function.



  5. The final step of WinMain is the message loop. The purpose of this loop is to listen for messages that the operating system sends. When the application receives a message, the message is dispatched to the WndProc function to handle it. The message loop resembles this:



    MSG msg;
        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    
        return (int) msg.wParam;
    
    For more information about the structures and functions that are used in the message loop, see MSG, GetMessage, TranslateMessage, and DispatchMessage.
    The steps that you have just completed are common to most Win32 applications. For include directives and global variable declarations required in this application,
    At this point, your WinMain function should resemble this:

    int WINAPI WinMain(HINSTANCE hInstance,
                       HINSTANCE hPrevInstance,
                       LPSTR lpCmdLine,
                       int nCmdShow)
    {
        WNDCLASSEX wcex;
    
        wcex.cbSize = sizeof(WNDCLASSEX);
        wcex.style          = CS_HREDRAW | CS_VREDRAW;
        wcex.lpfnWndProc    = WndProc;
        wcex.cbClsExtra     = 0;
        wcex.cbWndExtra     = 0;
        wcex.hInstance      = hInstance;
        wcex.hIcon          = LoadIcon(hInstance, MAKEINTRESOURCE(IDI_APPLICATION));
        wcex.hCursor        = LoadCursor(NULL, IDC_ARROW);
        wcex.hbrBackground  = (HBRUSH)(COLOR_WINDOW+1);
        wcex.lpszMenuName   = NULL;
        wcex.lpszClassName  = szWindowClass;
        wcex.hIconSm        = LoadIcon(wcex.hInstance, MAKEINTRESOURCE(IDI_APPLICATION));
    
        if (!RegisterClassEx(&wcex))
        {
            MessageBox(NULL,
                _T("Call to RegisterClassEx failed!"),
                _T("Win32 Guided Tour"),
                NULL);
    
            return 1;
        }
    
        hInst = hInstance; // Store instance handle in our global variable
    
        // The parameters to CreateWindow explained:
        // szWindowClass: the name of the application
        // szTitle: the text that appears in the title bar
        // WS_OVERLAPPEDWINDOW: the type of window to create
        // CW_USEDEFAULT, CW_USEDEFAULT: initial position (x, y)
        // 500, 100: initial size (width, height)
        // NULL: the parent of this window
        // NULL: this application does not have a menu bar
        // hInstance: the first parameter from WinMain
        // NULL: not used in this application
        HWND hWnd = CreateWindow(
            szWindowClass,
            szTitle,
            WS_OVERLAPPEDWINDOW,
            CW_USEDEFAULT, CW_USEDEFAULT,
            500, 100,
            NULL,
            NULL,
            hInstance,
            NULL
        );
    
        if (!hWnd)
        {
            MessageBox(NULL,
                _T("Call to CreateWindow failed!"),
                _T("Win32 Guided Tour"),
                NULL);
    
            return 1;
        }
    
        // The parameters to ShowWindow explained:
        // hWnd: the value returned from CreateWindow
        // nCmdShow: the fourth parameter from WinMain
        ShowWindow(hWnd,
            nCmdShow);
        UpdateWindow(hWnd);
    
        // Main message loop:
        MSG msg;
        while (GetMessage(&msg, NULL, 0, 0))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    
        return (int) msg.wParam;
    }
    


To add functionality to WndProc

  1. The purpose of the WndProc function is to handle messages that your application receives. You usually implement this by using a switch statement.
    The first message we will handle is the WM_PAINT message. Your application receives this message when a portion of your application's window must be updated. When a window is first created, the whole window must be updated, and this message is passed to indicate this.
    The first thing that you should do when you handle a WM_PAINT message is call BeginPaint, and the last thing that you should do is call EndPaint. In between these two function calls you handle all the logic to lay out the text, buttons, and other controls for your window. For this application, we display the string "Hello, World!" inside the window. To display text, use the TextOut function, as shown here:

    PAINTSTRUCT ps;
    HDC hdc;
    TCHAR greeting[] = _T("Hello, World!");
    
    switch (message)
    {
    case WM_PAINT:
        hdc = BeginPaint(hWnd, &ps);
    
        // Here your application is laid out.
        // For this introduction, we just print out "Hello, World!"
        // in the top left corner.
        TextOut(hdc,
            5, 5,
            greeting, _tcslen(greeting));
        // End application-specific layout section.
    
        EndPaint(hWnd, &ps);
        break;
    }
    



  2. Your application will typically handle many other messages, such as WM_CREATE and WM_DESTROY. A simple but complete WndProc function follows:

    LRESULT CALLBACK WndProc(HWND hWnd, UINT message, WPARAM wParam, LPARAM lParam)
    {
        PAINTSTRUCT ps;
        HDC hdc;
        TCHAR greeting[] = _T("Hello, World!");
    
        switch (message)
        {
        case WM_PAINT:
            hdc = BeginPaint(hWnd, &ps);
    
            // Here your application is laid out.
            // For this introduction, we just print out "Hello, World!"
            // in the top left corner.
            TextOut(hdc,
                5, 5,
                greeting, _tcslen(greeting));
            // End application specific layout section.
    
            EndPaint(hWnd, &ps);
            break;
        case WM_DESTROY:
            PostQuitMessage(0);
            break;
        default:
            return DefWindowProc(hWnd, message, wParam, lParam);
            break;
        }
    
        return 0;
    }