Wednesday, July 4, 2012

Row Major Order vs Column Major Order in Computer Science

9 comments

I was just reading a great Wikipedia article about Row Major Order and Column Major Order and how they relate to the storage of two-dimensional arrays in Computer Science. It was really enlightening, so unless you already know all about it, I suggest giving it a read if you find anything here interesting. I have been doing some work with GLSL and reading about it, and I was surprised to learn that it uses column major order when storing matrices (which are basically just 2D arrays).

When I took Linear Algebra at university a while back, we talked a bit about the differences between working with rows versus columns, and about how you could do things either way. I did not quite understand the point at the time: I knew you could accomplish the same things using either rows or columns. But now I know that, at least when it comes to computers, there are applications where one can be better than the other.

Row-Major Order

I am going to talk for a bit about Row-Major Order. Anyone familiar with the C/C++ programming languages will know that two-dimensional arrays are stored contiguously in physical memory, like so:

int A[3][4] =  {{19,22,31,42},
                 {50,61,32,83}, 
                 {93,47,15,66}};

This array would be stored in memory in Row-Major Order: one row after another, with the cells laid out contiguously as shown below:

19
22
31
42
50
61
32
83
93
47
15
66

If you know that your array is in Row-Major Order and initialized with data of a certain type and size, then you can calculate the linear offset in memory of any element from the beginning of the array, as sketched below.
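
A quick sketch of the calculation in C (zero-based indices; the helper variables are mine):

int A[3][4] = {{19,22,31,42},
               {50,61,32,83},
               {93,47,15,66}};

int row = 1, col = 2;              /* the element containing 32 */
int numCols = 4;

/* Row-major: skip 'row' complete rows of numCols elements each,
   then 'col' more elements into that row. */
int offset = row * numCols + col;  /* 1*4 + 2 = 6 */
int value = *(&A[0][0] + offset);  /* value == A[1][2] == 32 */
/* Multiply the offset by sizeof(int) to get the distance in bytes. */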

'Two-Dimensional' Arrays in Java vs C++

This is also how two-dimensional arrays are usually stored in assembly, although at that level you have enough control to store them however you want. If you are using Java, it is important to know that, despite being similar to C++ in many ways, it does not actually support true two-dimensional arrays. Probably the defining characteristic of an array, whether one-dimensional or two-dimensional, is that all of it is stored together in contiguous memory. What Java has are arrays of arrays. It might sound nit-picky, but that is not quite the same thing as the two-dimensional arrays of C/C++.

Java does something similar to Row-Major Order. Take the example above again: declaring an integer array in Java as int[][] A = new int[3][4] initializes an array that we can reference simply as A, where A.length returns 3. A can be thought of as an array of references, or 'pointers', to 3 different integer arrays. Because these are references, Java's arrays are inherently jagged: each row can have a different number of elements, and a row can even be null. We could set A[2] = null; and it would delete the reference, putting that integer array and its 4 ints up for garbage collection.

In the Java array above, each of our three rows has its elements stored in contiguous memory, but the rows themselves are not stored at adjacent memory addresses. You cannot count on them being anywhere near each other in memory:

19
22
31
42
50
61
32
83
93
47
15
66
Here in Java, the element containing 42 would not necessarily be stored next to the element containing 50, nor 83 next to 93.
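
A rough C analogue of Java's arrays-of-arrays, just as a sketch (an array of row pointers, with each row allocated separately, so the rows need not sit anywhere near each other):

#include <stdlib.h>

int **A = malloc(3 * sizeof *A);    /* 3 row pointers            */
for (int r = 0; r < 3; r++)
    A[r] = malloc(4 * sizeof **A);  /* each row is its own block */

free(A[2]);                         /* like Java's A[2] = null,    */
A[2] = NULL;                        /* minus the garbage collector */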

Column-Major Order

There are several programming languages that store two-dimensional arrays in Column-Major Order rather than by row. This ordering is popular for matrix operations, so it can be found in the MATLAB computing language and in Fortran, the scientific computing language; and because graphics is heavy on matrix math, shading languages such as GLSL and HLSL use Column-Major Ordering as well.

Using Column-Major Order, columns instead of rows are stored in sequence. If we take our original array A,

int A[3][4] =  {{19,22,31,42},
                 {50,61,32,83}, 
                 {93,47,15,66}};

and store it in Column-Major Order, then it is laid out contiguously in adjacent cells of memory, one column after the other, as shown below:

19
50
93
22
61
47
31
32
15
42
83
66

If the array is stored like this, then the linear offset can be calculated much as before, but with the roles of rows and columns swapped.
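
A sketch mirroring the row-major case (zero-based indices again):

int row = 1, col = 2;              /* the element containing 32 */
int numRows = 3;

/* Column-major: skip 'col' complete columns of numRows elements each,
   then 'row' more elements into that column. */
int offset = col * numRows + row;  /* 2*3 + 1 = 7, matching the layout above */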

Graphics shader languages and the like use Column-Major Ordering because certain matrix operations require us to treat a matrix as a set of column vectors, and pulling a column out of a Row-Major Ordered matrix means striding through memory rather than reading consecutive elements. In fact, the extra work needed to treat a Row-Major Ordered array as a Column-Major Ordered one is exactly that of transposing the matrix, which is relatively expensive and hard to do in place for matrices with an unequal number of rows and columns.
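
Just to illustrate, an out-of-place transpose is simple enough; it is the in-place version for a non-square matrix that is the genuinely hard part. A sketch in C99 (using variable-length array parameters):

void transpose(int rows, int cols, const float A[rows][cols], float B[cols][rows])
{
    /* B must be a separate cols-by-rows buffer */
    for (int r = 0; r < rows; r++)
        for (int c = 0; c < cols; c++)
            B[c][r] = A[r][c];
}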

Column-Major Ordering in OpenGL Shading Language - GLSL

In OpenGL, the shading language GLSL has one-dimensional arrays in the form of vectors and two-dimensional arrays in the form of square matrices. There are mat2, mat3, and mat4 data types, which store 2x2, 3x3, and 4x4 matrices respectively (of floats by default). Initialization in GLSL is mostly handled through constructors, and it is pretty flexible about this. You can always assume, though, that initialization happens in Column-Major order. The following illustrates the point:

vec2 a = vec2(3.0, 4.0); // stored column vec [3.0,4.0]
vec2 c = vec2(5.0, 6.0);

mat2 ac = mat2(a,c); // stored in memory as [3.0,4.0,5.0,6.0]
mat2 d = mat2(3.0,4.0,5.0,6.0); // filled column by column: same as ac

Tuesday, July 3, 2012

UV Texture Coordinates and Texture Mapping - OpenGL / DirectX

8 comments

Because I am not in school right now, I have been getting pretty heavy into WebGL graphics, trying to reinforce old 3D and graphics concepts and learn new ones. At the moment I am trying to get a solid foundation in texture mapping before moving on to bump mapping, shadow mapping, and the like, so I decided to write up a simple tutorial.

When I am trying to learn something new, I think a great way to start is to at least skim the associated Wikipedia page, so why not pop over there, either before or after reading this, and check out the page on Texture Mapping. Did you know that texture mapping was first laid out in the Ph.D. thesis of Dr. Edwin Catmull all the way back in 1974? No wonder the Atari and the like debuted just a few years later.

Texture coordinates are used to map a static image onto 2D or 3D geometry, producing a textured mesh. There are a few different ways that texture coordinates can be defined, but the standard is to specify them as an ordered pair (u, v), usually with floating point values ranging from 0 to 1.

The basic idea behind any kind of texture coordinates is that we map certain coordinates on the texture to particular vertices of the geometry, and the rest of the pixels are interpolated between the specified vertices. All kinds of 3D objects can be mapped using a 2D texture; however, the mapping will be different for differently shaped objects, and obviously some objects will be harder to map than others.

I think the typical workflow at most studios has a modeler first create the geometry, then use tools to unfold it and draw a texture; with these tools they can match texture coordinates to the correct associated vertices of the object. The mapping data would then be handed over to the programmer. I am not very experienced with the modeling side of things, so I would love to hear from anyone who knows more about how this works. A lot of graphics libraries will handle this for you in some default way for simple geometries, while having some mechanism for the programmer to specify a custom mapping of texture coordinates to the object's local coordinates.

With that being said, it makes sense that regardless of how the coordinate system is set up, the coordinates are always specified as floating point values between 0 and 1. Basically, this means we are specifying positions on our texture as percentages, which makes interpolating between vertices a lot easier.

Cartesian coordinates are the ones we all learn in school and are probably the simplest, but they are not the coordinates usually used with textures. Instead, textures are mapped according to what we call UV Texture Coordinates. Basically, UV coordinates are just a way of making the range our coordinates can take the same regardless of the width or height of the image.

Even though Cartesian coordinates are not used for mapping a texture to geometry, they are still relevant, because a digital image is stored as a Cartesian grid where each single pixel is a square of the grid. An image is made up of individual, quantifiable pixels starting at the first pixel 1x1 and going to the last pixel at (img_width)x(img_height).

Now if we have a range of possible values and we want to map all of these possible values to a floating point range between 0 and 1, how can we do this? Well basically all we have to do is divide each value by the maximum of its possible range. I could talk about math all day, but let me just lay it out more formally:


If we define the pixels of an image as a set of ordered pairs (x,y) on a Cartesian grid, then we can define its texture coordinates as the range of ordered pairs (U,V) such that U = x / (img_width) and V = y / (img_height).

So for an example, let's say that we have a texture with a width of 700 pixels and a height of 512 pixels. Pixels are perfect squares, all the same size, so we've got 700x512 different pixels. And let's say we need to know the (U,V) texture coordinate at the pixel 310x400. We come up with the texture coordinate (0.44286, 0.78125):

U = 310/700 = 0.44286, V = 400/512 = 0.78125.
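
The same conversion as a small C helper, just as a sketch (the function and type names are mine):

typedef struct { float u, v; } UV;

/* Map a pixel position to (U,V) as fractions of the image size. */
UV pixel_to_uv(float x, float y, float img_width, float img_height)
{
    UV t = { x / img_width, y / img_height };
    return t;
}

/* pixel_to_uv(310, 400, 700, 512) gives (0.44286, 0.78125) */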

I am going to attempt to make this tutorial equally valid for both OpenGL and DirectX, and so far everything above applies to both; it is really only where the origin sits that they disagree. I think it is important to realize there is not always a "best" way to do things, and not everyone is going to agree: in many cases you are going to have to learn more than one system of doing things. In the case of texture coordinates, the big difference between OpenGL and DirectX is:

In OpenGL the origin of our texture coordinates is in the bottom left corner of the texture, while in DirectX it is in the upper left. This means that our first coordinate will always be the same in both systems; to convert between the two, we have to flip the second coordinate by subtracting it from one. The examples below demonstrate the difference.

DirectX to OpenGL --- (U,V) --> (U, 1-V)
OpenGL to DirectX --- (U,V) --> (U, 1-V)
DirectX (U,V)=(0.4, 0.75) --- 40% from left, 75% from top
OpenGL (U,V)=(0.4, 0.75) --- 40% from left, 75% from bottom

DirectX (U,V)=(0.4, 0.25) --- 40% from the left, 25% from top / 75% from bottom
OpenGL (U,V)=(0.4, 0.25) --- 40% from the left, 25% from bottom / 75% from top
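
In code the conversion is one line either way; a small C sketch (the helper name is mine):

/* Flip the V coordinate to convert between DirectX and OpenGL
   texture coordinates; the same flip works in both directions. */
float flip_v(float v) { return 1.0f - v; }

/* flip_v(0.75f) == 0.25f */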

Thursday, June 7, 2012

Developing in style with Visual Studio 2012

0 comments

So I am really excited about Windows 8 coming out. High-profile applications such as Visual Studio and Photoshop are giving us a look at the new, improved design language of Windows 8. Just a couple of days ago the new Windows 8 preview came out, and along with it they released the Visual Studio 2012 RC (Release Candidate). Normally I go to DreamSpark, Microsoft's educational software channel, for all my Microsoft dev downloads; check it out if you are a student. For this download, though, that is not needed: anyone can download Visual Studio 2012 RC from here. Microsoft allows developers to use even the Professional and Ultimate versions of the software in order to get feedback before the official release, making it less buggy when it finally ships for a hefty price. After it is released, anyone with some kind of student eligibility can head over to DreamSpark to download it.

One thing that is immediately obvious between VS2010 and VS2012 is the massive design difference. The new aesthetic shows up as early as the application's splash screen, which is metro-esque and dark. Dark is how I like my editors, and there is great news for anyone who thinks like I do, because they have finally allowed me to change the color scheme of my Visual Studio installation. Well, sort of: they currently let you choose between Light and Dark. Here are some screenshots of the dark color scheme interface:

You can head over to this MSDN article to learn How to: Change the Fonts and Colors Used in the IDE

The dark scheme I am talking about only applies to the menus and such; the text editor colors are controlled from the Preferences (see the MSDN link above). What is cool is that there is a website called studiostyl.es that hosts exported Visual Studio settings files containing only text editor font and color settings. You can choose from many, many different themes, such as my personal favorite, Visual Studio 11 Dark Theme. You then download your favorite one, import it into Visual Studio, and suddenly your IDE has almost as much style as The Dev-(b)log.

Now I love Linux but it can't replace Windows

0 comments

I must say, after installing and using Ubuntu 12.04 on my Dell Studio XPS 1647, that I am in love with the operating system. However, after I began trying to get into web development again and started using javascript libraries like three.js, I realized I have been spoiled by developing in Visual Studio on Windows. Maybe I am the only one who feels like they are writing code in the dark ages when editing javascript in notepad. I downloaded Eclipse and set up its Javascript IDE so I could build three.js projects with code completion and IntelliSense-like capabilities; I want code validation and easy previewing as I am coding. The thing about three.js is that it will not load files from the hard drive directly; they must be located on a webserver. This means that in addition to some kind of IDE, I had to install and set up apache2 on Ubuntu and point the public folder at a directory of my choosing. This way I could view my projects by going to addresses in the browser such as localhost/threejsDemo.

I felt like this was a great setup for a long time, and I am still a fanatic about Linux. But my laptop does not get nearly as good performance under it: I know for a fact that the ATI proprietary driver for Linux is not nearly as good as the latest ones on Windows, and the CPU does not feel nearly as quick (probably no hyperthreading/turbo boost). As I added more and more javascript libraries and more functionality, Eclipse seemed to slow to a crawl, and I would get errors every so often; it always seemed to run out of RAM. It attempted code validation with each edit, and something was just not working well. I feel that if I can get hardware with well-supported Linux drivers, then that will be the way to go. For now, though, I have switched back to Windows 7 for the past couple of days, and the performance difference is drastic. And just as I was getting comfortable yesterday, I saw that I could download a brand new shiny IDE in Windows land...

Friday, June 1, 2012

A bit of background...

1 comments

Ok, so I want to reiterate that I am first and foremost a programmer of native languages such as C/C++, Java, etc. that execute outside the browser. But I have recently gotten more interested in web development, especially after going through the basics of network programming with sockets (BSD and Winsock), and with the new developments around HTML5 application development and cross-platform and mobile apps. However, I had not touched any markup (XML or HTML) or any javascript in years, so I am more than a little rusty, even though I am familiar with the language concepts involved in javascript: interpreted, dynamically typed, and so on. It has taken me some time to adjust to writing markup and script and then simply opening it from a webserver in a browser, rather than compiling my code and executing it natively on the machine. I cannot believe the amount of time you can save by not having to compile between each small change and test. However, I have also found out the hard way that, with the lax structure of HTML and no compiler for javascript to catch simple mistakes, you can spend a lot of time running around in circles not knowing what is wrong.

Another thing I am not used to is dealing with a rather small, mostly undocumented, open source code base. From what I have used so far, I feel that Three.JS has the potential to be a great 3D tool for app developers, but there are almost no docs, only many, many examples. You must learn by poring through the more than one hundred complete open source example apps included at the three.js github page. Another problem is that I have not settled on a very good development environment for HTML5 development (game, 3D, or otherwise). I am currently using Eclipse with as many web-related plugins as I can get. I have looked at one called Aptana and I think I am going to download it; I believe it replaces the HTML editor and provides some IntelliSense-like features. Currently, using the javascript IDE build of Eclipse with all the Indigo web plugins plus the web toolkit plugins, I get decent autocomplete for the three.js library. It is slow when filling out the list of functions in the THREE namespace, it seems to eat up RAM, and Eclipse does not like the lack of semicolons that I recently found out is common in javascript (Eclipse has me wishing the spec wasn't so lax). I am now running Ubuntu 12.04 Linux on my Dell Studio XPS 1647 i5 laptop, and it runs pretty well. ATI's proprietary driver for Linux is not as good as it is for Windows (which is to be expected), and my CPU fan seems to kick on at the slightest load, but my speeds and everything seem decent. After using Linux, I am amazed as a developer at the level of control you can have with an open source OS. It is the same feeling I got when I rooted my phone (XD Android).

Anyway, I figured out a couple of things that were holding me back from popping out a demo better than my last. First, some of the time my textures did not seem to want to load right. It was driving me crazy, because the same code seemed to do different things at different times for no apparent reason. I then read somewhere that the texture loader used by three.js apparently can only load textures from a webserver, not from the hard disk. This was surprising to me, but I suppose it leverages something like ajax in order to load textures better. Anyway, after firing up apache2 on my Linux box, I had all the textures working in no time. My next step was to add lighting to my scene to give it more depth (things look kind of flat when still if there are no lighting effects). That seemed to give me trouble as well: only Lambert and Phong materials work with lighting, I came to discover.

Anyway, I am getting ready to post the demo as well as a walkthrough of the source I have so far. I am using a three.js boilerplate and extension library aptly named THREEx (github). It seems to be pretty good. I eventually had trouble with lighting, however, and had to redownload the latest minified build from the original three.js repository in order to get my ambient lights going. So I am not sure whether I accidentally modified my copy or whether an update broke something somewhere. Anyway, I will include a zip of the entire project.

The boilerplate starts you off with an FPS readout onscreen, f to go fullscreen, and p for a screenshot. I add a flag for WebGL detection and show you how to customize your scene to look best on the multiple possible renderers. I also add to the onscreen HUD which renderer (WebGL or canvas) the demo is currently running on. (You might not think you need this, but it has surprised me a couple of times and actually been useful.)

Saturday, May 26, 2012

Three.JS Setup Tutorial

0 comments

Ok guys, so I have been looking at getting back into web development by starting this blog, but mostly I am interested in creating HTML5 web applications. I have been looking at the open source Three.JS library, which can render using either the HTML5 canvas element or WebGL. I am amazed at how quickly you can get up and running with full access to the GPU in such a short amount of code. What follows is basically the sample found at the Three.JS Github Page, but separated out and much more thoroughly commented for anyone wishing to learn from it. ;)

Here is the HTML page that we will use to run our javascript to view our 3D Scene:

<!DOCTYPE html>
<html lang="en">
 <head>
  <title>Three.js WebGL Test</title>
  <meta charset="utf-8">
  <style>
   body 
   {
    margin: 0px;
    background-color: #000000;
    overflow: hidden;
   }
  </style>
 </head>
 <body>
  <noscript>
   <p style="color: #ff0000">
    Sorry, you need Javascript in order to view this web app.
    Please enable it in your browser settings.
   </p>
  </noscript>
  <script src="Three.js"></script>
  <script src="WebGLTest.js"></script>
 </body>
</html>

Notice that all that is really needed is to include the minified Three.js library as a script source; after that, we're ready to get our own scripting going:

/** 
 * WebGLTest.js
 * 
 *  Author: Cory Gross, May 26, 2012
 *  Description: Used to illustrate setting up a WebGL renderer and camera
 *      using Three.js to be displayed using HTML5 technologies. Thoroughly
 *      commented to illustrate the core components of a typical 3D set up.
 **/
var camera, scene, renderer;  //Core Three.JS components
var geometry, material, mesh; //Other globals

/** Simply call the functions created below */
init();
animate();

/** This function is responsible for creating all of the Three.js objects
 *  that will be part of our scene. The core components and all other global
 *  variables should be initialized here. */
function init()
{
    scene = new THREE.Scene(); //Scene object holds set of all 3D objects
    
    /** We need to define a view frustum for our camera. This is a 3D region
     *  of space which contains all the points visible to the camera. The field
     *  of view is the angle measured from the y-axis. Objects closer than the
     *  near plane or beyond the far plane distance will not be rendered. */
    var FOV = 75;
    var aspectRatio = window.innerWidth / window.innerHeight;
    var near = 1;
    var far = 1000;

    /** Initialize our camera with frustum view data and set position */
    camera = new THREE.PerspectiveCamera(FOV, aspectRatio, near, far);
    camera.position.z = 400; // Positive z axis comes out of screen

    /** Three.JS provides global namespace functions for creating geometry
     *  and core objects such as meshes and materials. */
    geometry = new THREE.CubeGeometry(200, 200, 200);
    material = new THREE.MeshBasicMaterial({color: 0xff0000, wireframe: true});

    /** Apply the material to the geometry to create a 3D mesh object
     *  finally add that to our scene at default position at origin */
    mesh = new THREE.Mesh(geometry, material);
    scene.add(mesh);

    /** Three.JS provides several renderers, WebGL is used here so Chrome is
     *  recommended but HTML5 Canvas is also supported along with others. */
    renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild( renderer.domElement );
}

/** This function updates the animation and then redraws the scene */
function animate()
{
    /** Three.JS includes a requestAnimationFrame shim. Modern browsers
        implement requestAnimationFrame to optimize animations. */
    requestAnimationFrame(animate);
    /** Rotate our mesh, slower around x axis, faster around y */
    mesh.rotation.x += 0.005;
    mesh.rotation.y += 0.01;
    /** Redraw the scene based on our camera's projection */
    renderer.render(scene, camera);
}

After this, all that is left to do is put these two files, along with the minified Three.js file, into a directory of your choice, or upload them to a webserver, and view the page in your favorite WebGL-supporting web browser. I will be posting more soon.

Here is my project uploaded to a free web host: check it out

Friday, May 18, 2012

Welcome to the machine...

1 comments
So this is my first post on my new blog. I have had a couple of these in the past, where I posted what I was working on, always hosted privately with funding. This time I have decided to go with Google's Blogger service. I spent just a little time customizing a template to my liking. Everything has to be dark as sunglasses at night.

Anyway, since summer began a week or so ago, I have been working intensely on building up my knowledge of network programming (before I started I had practically none). I have learned all about the network stack that most servers have and read up on networking basics and communication protocols. I found some fantastic websites and guides. For anyone interested in this type of thing, I highly recommend checking these out when you get time:

Beej's Guide to Network Programming (Sockets)
HTTP Made Really Easy

Both of these are great written guides, and you are sure to learn a ton if you haven't had the joy of reading them yet. Anyways, I read both of those back to front, err... front to... kind of both ways, actually. Earlier this past semester I got to wondering about how much work goes into a modern browser, and I read another article which is very informative as well:

How Modern Browsers Work

After I learned a good bit about sockets and got comfortable with the Winsock2 API, I created a nice program that takes a hostname and uses the API to tap into DNS to resolve the hostname to an IP address. At that point the program connects to the server on port 80 (the default for the HTTP protocol) and sends a request.

HTTP requests come in a few forms. You can send a HEAD request, which returns a response that basically tells you what kind of content you would receive if you went to that address. My program uses a GET request in order to pull the HTML (or other data) from the default page at the given hostname's address. I'm not sure if you can see where this is going, but this program is already a good part of what a browser does: really, all that is left is to render the HTML.
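
For reference, a minimal GET request looks something like the following (the hostname here is just a placeholder, and the blank line at the end is what terminates the headers):

GET / HTTP/1.1
Host: www.example.com
Connection: close
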

I am now working with a cross-platform GUI library that is actually capable of rendering a DOM tree (the structure that HTML is parsed into), and I am building a basic text-based browser with it. I plan on making a tutorial or guide when I am finished. The library is called wxWidgets, and I chose it for being open source (my favorite), cross-platform, and native C++, and mostly for its ability to render the HTML DOM tree.

Here is a screenshot of the GUI I threw together with an add-on tool called wxFormBuilder:


And here is a screenshot of the console program using sockets and http to pull the html data from a simple test site I set up: