Exams

18 07 2008

I've been studying all week for an exam tonight (around 00:00 ~ 01:00 UTC). Coding will continue after that.





Week #5

14 07 2008

Last week the code was simplified, with the intention of making it work as a simple ray tracer. The view_pixel() function was pretty simple, something like: if the ray hits, paint the pixel black; otherwise, paint it white. The frame buffer was set up, and the first images consisted of three equally spaced lines, so something was clearly wrong.

Further hacking in view.c revealed that the pixel width was the problem. Once it was set in view_2init(), the first images came out shrunk horizontally, but that was a bug: ap->a_x was not being multiplied by pwidth. This was the result:

First Rendering. This is a cube 😀
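For illustration, the pwidth issue amounts to scaling the horizontal pixel index by the pixel byte width. A minimal stand-alone sketch (the buffer layout and all names here are my assumptions, not the actual view.c code):

```c
#include <string.h>

#define PWIDTH 3  /* bytes per pixel (RGB) -- an assumption for this sketch */

/* Write one pixel into a packed RGB scanline buffer.
 * The bug described above amounts to forgetting the "* PWIDTH"
 * term: without it, pixels land at a third of their intended
 * horizontal offset and the image is squeezed sideways. */
static void paint_pixel(unsigned char *scanline, int a_x, int hit)
{
    unsigned char value = hit ? 0 : 255; /* black on hit, white on miss */
    memset(scanline + a_x * PWIDTH, value, PWIDTH);
}
```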

Next step – flat shaded images. Back to hacking view.c.

At first I thought it had something to do with kut_planes (because RT_HIT_NORMAL was there), so I tried to use it, but I only got segmentation faults. After realizing that handling normals shouldn't be optional (there was a big if (do_kut_plane) block), I found out how to use RT_HIT_NORMAL (a for loop was needed to find the valid partitions). With that code in place, this happened (also, color identification was moved back to rayhit(), and view_pixel() now just handles ap->a_color):


Isn't that a beautiful cube?

That was a big motivation for me.
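The flat shading itself boils down to a cosine term between the surface normal (the output of RT_HIT_NORMAL) and the viewing direction. A minimal stand-alone sketch, with names of my own choosing rather than the real view.c code:

```c
/* Flat-shading sketch: the gray level is the cosine between the
 * unit surface normal and the direction back toward the viewer.
 * The ray direction points toward the surface, hence the sign flip. */
static double flat_shade(const double normal[3], const double raydir[3])
{
    double cosine = -(normal[0]*raydir[0] +
                      normal[1]*raydir[1] +
                      normal[2]*raydir[2]);
    if (cosine < 0.0) cosine = 0.0;   /* surface facing away: black */
    if (cosine > 1.0) cosine = 1.0;
    return cosine;                    /* 0.0 = black, 1.0 = white */
}
```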

rayhit() wasn't saving the hit points properly: when it reached the memory de-allocation in view_end(), rtmlt crashed. So, in order to create those images, that part of the code was commented out, and I only gave it proper attention after the flat-shaded images were working. It was leaking big chunks of memory per rendering, but that is now fixed and the paths are being recorded. Not correctly yet, though: currently, it saves every point into a single path. (The point recording is important for path tracing; this will be changed in the coming weeks.)
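A guess at what the per-path bookkeeping might look like, including the teardown that view_end() needs in order not to crash or leak (names and layout here are my assumptions, not the actual rtmlt structures):

```c
#include <stdlib.h>

/* Each rayhit() pushes one hit point onto a singly linked list;
 * view_end() walks the list exactly once to free it. */
struct point_node {
    double pt[3];
    struct point_node *next;
};

static struct point_node *push_point(struct point_node *head,
                                     const double pt[3])
{
    struct point_node *n = malloc(sizeof *n);
    if (!n) return head;
    n->pt[0] = pt[0]; n->pt[1] = pt[1]; n->pt[2] = pt[2];
    n->next = head;
    return n;
}

/* Free the whole path; returns how many points it held. */
static int free_path(struct point_node *head)
{
    int count = 0;
    while (head) {
        struct point_node *next = head->next;
        free(head);
        head = next;
        count++;
    }
    return count;
}
```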

Also, allocating memory in each rayhit() call is a problem: performance will be much better if the allocation is done somewhere else, as Sean mentioned. Another pending item is the file writing, which was cut in order to create the images (back in the three-lines days, it created a 128 MB file).
Moving on to path tracing now.
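Sean's preallocation suggestion could be sketched like this: one block of points grabbed up front (say, in view_2init()), with rayhit() taking slots from it instead of calling malloc() per hit. All names and sizes below are assumptions:

```c
#include <stddef.h>

#define POOL_CAPACITY 1024  /* assumed; would be sized per rendering */

/* A fixed pool of 3D points, allocated once and handed out slot
 * by slot, so rayhit() never touches the allocator. */
struct point_pool {
    double pts[POOL_CAPACITY][3];
    size_t used;
};

/* Returns the next free slot to fill in, or NULL when full. */
static double *pool_next(struct point_pool *pool)
{
    if (pool->used >= POOL_CAPACITY)
        return NULL;
    return pool->pts[pool->used++];
}
```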





Flat shaded images!

10 07 2008
Image generated using the rtmlt application






First image

9 07 2008

Commented out a lot of code and simplified view_pixel() and rayhit(). Managed to output the first image (though only to the framebuffer).

http://img99.imageshack.us/img99/1521/silhouettezu3.jpg

And this is the RT output:

http://img164.imageshack.us/img164/1715/rttn3.jpg





Week #4

7 07 2008

School demanded more of my time this week, and catching up on things from the week I was absent forced me to step back a little from the project. That wasn't something I was comfortable doing, having already been away a whole week.

On to more pressing matters. Functions similar to RT's are ready, and coding rayhit() is my next step. Some considerations on that subject:

  • RT uses ap->a_uptr to store visited regions. Since ap->a_uptr, in RTMLT, points to the MLT structure, I’ve decided to include a generic pointer inside MLT_APP, to hold that information.
  • Where can I find a function to handle reflections? And refractions? The idea is to recursively call rt_shootray() and, at the end of rayhit(), set the new direction and origin point from which to shoot the next ray. Which functions must be used to achieve that?
  • Should the coloring be done in rayhit()? Since the MLT path points affect every other point of the path, should the lighting be calculated as we build the path (in rayhit())? Either way, some information about the color and other properties of the hit material must be collected for further calculations, if they are not done in rayhit().
  • Also, how exactly is light transmitted? Where is the information about the light stored, and how will it affect coloring? How exactly are the calculations done? One idea is to add, to the point_list structure, a pointer or a value holding the light emitted from that point. But since every emitted light depends on every other emitted light, how should this be done? Perhaps there is a simpler way.
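For the reflection question above, the mirror direction itself is simple geometry; a stand-alone sketch with plain C arrays, independent of liboptical (a BRDF like Phong would replace this for glossy surfaces):

```c
/* Perfect mirror reflection: R = D - 2 (D . N) N, where D is the
 * incoming ray direction and N is the unit surface normal.  This
 * is only the geometric half of the question; sampling a BRDF
 * would perturb or replace this direction. */
static void reflect(const double d[3], const double n[3], double r[3])
{
    double dn = d[0]*n[0] + d[1]*n[1] + d[2]*n[2];
    r[0] = d[0] - 2.0 * dn * n[0];
    r[1] = d[1] - 2.0 * dn * n[1];
    r[2] = d[2] - 2.0 * dn * n[2];
}
```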

Another thing bothering me is my time constraints: I don't have as much time as I'd like to dedicate to the project. Other GSoCers have solid results to show and, with the midterm evaluations approaching, perhaps some adjustments need to be made. Feedback is appreciated in this regard.





I’m back

30 06 2008

And I’m back. Last week was not the best one for coding.





Absence

19 06 2008

I’ll need to be out of town from 19th June to 29th June. I’ll be back by 30th June.





Week #3

17 06 2008

I've been working on view_pixel() since last week, trying to understand each step so that it can later be modified to support MLT. One idea is to make view_pixel() interact with the mlt_app path lists and point lists (expanding the point_list structure to carry color information for each point), iterating through the point lists and determining the pixel color from that.

The path building will be the responsibility of rayhit(). I'm thinking about setting a fixed number of points built from the camera ray and from the light-source ray, and then improving on it.
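The view_pixel() idea above could be sketched like this (the per-point gray field is an assumption about how point_list might be extended; the real code would carry full RGB and weights):

```c
/* Walk one path's point list and average the per-point
 * contributions into a single gray value for the pixel. */
struct color_point {
    double pt[3];
    double gray;                  /* assumed per-point color info */
    struct color_point *next;
};

static double average_gray(const struct color_point *head)
{
    double sum = 0.0;
    int count = 0;
    const struct color_point *p;
    for (p = head; p; p = p->next) {
        sum += p->gray;
        count++;
    }
    return count ? sum / count : 0.0;
}
```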

Commit logs from the week:

  • ap->a_uptr points to a mlt_app structure, successfully linking the MLT algorithm with the rt
  • Added memory allocation and freeing of the mlt structures.
  • Added hit point storage into the mlt structures.
  • Fixed a bug in viewarea.c about uninitialized point list heads. The area_center() function now starts iterating from the second element.
  • Updated MSVC 9 build and added rtmlt back.
  • Added scanline.h and scanline.c into the build, and moved scanline specific functions from view.c to scanline.c. Also created alloc_scanlines().
  • Added remrt to the MSVC 9 build.
  • Fixed a bug that crashed RT; it had to do with scanlines. Modified alloc_scanlines() to return a pointer to the allocated memory block.
  • Added some headers to the viewmlt.c file. I didn't initially include "ext.h", so I had to extern every variable. Fixed now.
  • Finished view_pixel() prototype.




Week #2

8 06 2008

Successfully "linked" viewmlt.c with the other RTUIF files (worker.c, do.c, main.c and opt.c). All the callback functions were added, and rtmlt builds successfully, at least with MSVC 9.

Near the end of the week I finally approached viewarea.c, to change it to use bu_lists and vmath.h macros. In early April, bu_list was an incredibly cryptic concept to me, and even though I tried to use it, the segmentation faults (and the deadline coming closer and closer) pushed me to switch to standard lists. When I sat down to understand it, I decided to write a separate piece of code with a structure similar to bu_list, to test it and learn how it works. Once I felt comfortable, I moved on to viewarea.c, and it is now BUtilized and working.

But it has an issue that is bugging me: how do I check whether the data in the head of the point list can be overwritten, or whether another point should be pushed back into the list? In order to preserve precision, the area_center() function simply discards the head element and starts counting from the second one. Comments were written to make that function and the issue clear.
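For context, bu_list-style lists use a circular doubly linked layout with a sentinel head that carries no data, which is why skipping the head element is safe: the head was never a real point. A stand-alone imitation of that pattern (names are mine, not libbu's):

```c
/* Circular doubly linked list with a data-less sentinel head.
 * Iteration starts at head->forw and stops when it wraps back
 * around to the head, so the sentinel is always skipped. */
struct list_node {
    struct list_node *forw, *back;
    double value;                 /* unused in the sentinel head */
};

static void list_init(struct list_node *head)
{
    head->forw = head->back = head;
}

static void list_append(struct list_node *head, struct list_node *n)
{
    n->back = head->back;
    n->forw = head;
    head->back->forw = n;
    head->back = n;
}

static double list_sum(const struct list_node *head)
{
    double sum = 0.0;
    const struct list_node *p;
    for (p = head->forw; p != head; p = p->forw)
        sum += p->value;          /* never touches the sentinel */
    return sum;
}
```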

On a related note, Sean and I talked about an old structure called rt_pt_node, a list of points, used by a few primitives. It does not use bu_lists, so adding it and then modifying the associated code sections is an idea.

Work on viewmlt.c will continue this week. I'll have time to work on it Tuesday afternoon and probably at night too. I expect some trouble passing parameters that are exclusive to the MLT and not present in the callback functions.

10/06/2008 Update: Started studying how to implement viewmlt’s rayhit(). First thoughts:

  1. The hit points will be saved in a struct point_list (but how to do so without global variables? The function will be called multiple times, and neither mlt_app nor point_list is included in the parameters). Solved: the application structure has a generic pointer that can be used to link to a mlt_app structure holding MLT-specific information.
  2. ap->a_ray.r_pt (origin of the ray to be fired) will be set to the hit point. Done: used the VJOIN1() macro to find the hit point, added the point to the point list and assigned it to ap->a_ray.r_pt.
  3. ap->a_ray.r_dir (direction of the ray to be fired) will be set using a BRDF function, probably the Phong model (this needs some thinking and study of the model implementation in liboptical);
  4. and rt_shootray() will be called again.
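Step 2 is just the ray equation P = O + t*D, and the VJOIN1() join from vmath.h computes exactly that, if I recall the macro correctly. A stand-alone equivalent for clarity:

```c
/* Hit point from the ray equation: o = a + t*d.
 * This mirrors what BRL-CAD's VJOIN1() macro does (my own
 * re-implementation here, named to avoid clashing with vmath.h). */
#define VJOIN1_SKETCH(o, a, t, d) do { \
    (o)[0] = (a)[0] + (t) * (d)[0];    \
    (o)[1] = (a)[1] + (t) * (d)[1];    \
    (o)[2] = (a)[2] + (t) * (d)[2];    \
} while (0)
```

In rayhit(), t would be the hit distance from the partition, a the ray origin and d the ray direction.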

Also, there are some concepts I need to understand fully: what exactly are segments, partitions and regions? Since the algorithm will take only one point into consideration, this is not strictly essential, but it would be useful.

My idea of implementation is something like this:

  1. Set origin to the viewer
  2. Shoot a ray in a random direction.
  3. Build a path and store it in mlt_app.
  4. Set origin to the light source
  5. Shoot a ray in a random direction.
  6. Record those points in the path built in step 3.
  7. Verify if it is a valid path. If it is not, go back to step 1.
  8. Mutate the path a few times and record their contributions in the final image (iterate through the pixels for each mutation).
  9. Go back to step 1 ?
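The steps above can be sketched as a control-flow skeleton. Everything below is a stub of my own: the real subpath builders would call rt_shootray() with random directions and let rayhit() append points, and the validity test would check visibility between the two subpath endpoints:

```c
#define MAX_PATH 16

struct path {
    double pts[MAX_PATH][3];  /* hit points (unused by the stubs) */
    int eye_len;              /* points contributed by the eye subpath */
    int len;                  /* total points in the path */
};

/* Steps 1-3: stub eye subpath (real code: rt_shootray() from viewer). */
static void build_eye_subpath(struct path *p)   { p->len = p->eye_len = 2; }

/* Steps 4-6: stub light subpath (real code: rt_shootray() from light). */
static void build_light_subpath(struct path *p) { p->len += 2; }

/* Step 7: placeholder validity test. */
static int path_is_valid(const struct path *p)
{
    return p->eye_len > 0 && p->len > p->eye_len;
}

/* Steps 1-7 for one sample; on success, the caller proceeds to the
 * mutation/accumulation steps 8-9, otherwise it retries (step 1). */
static int build_path(struct path *p)
{
    build_eye_subpath(p);
    build_light_subpath(p);
    return path_is_valid(p);
}
```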

Commit logs from the week:

  • Successfully added do.c, worker.c, main.c and opt.c to the rtmlt project and added the relevant functions to make it rt compatible.
  • Removed mlt_defs.h from the project and moved the definitions to the .c file.
  • Modified mlt_hit() and mlt_miss() to rayhit() and raymiss().
  • Modified the area center patch to use bu_lists and vector macros.
  • Updated viewmlt.c (fixed rayhit and raymiss prototypes, added some stub code and linked application with mlt_app). 11/06/2008
  • Removed structure path_list. Now mlt_app has direct access to the point_list. 12/06/2008
  • Added structure initialization code. Used VJOIN1() macro to detect the hit point. The hit point is stored in mlt_app->path and in ap->a_ray.r_pt. 12/06/2008.




Week #1

2 06 2008

The "summer" started slow, with the university demanding my time. But I managed to fix something that had been annoying me since well before the program began, back when I was writing the proposal patch: the lack of compilation support for MSVC 9.

I’ve also started outlining the first requirements for the initial development phase of the algorithm:

  1. Study how the .g file is loaded in the application and how the objects will be handled.
  2. Study how exactly the viewmlt.c file will interact with main.c, worker.c, do.c and/or view.c; for example, where the command line will be handled.
  3. Think about how the application structure will behave with regard to the MLT application structure. Will the application be nested inside mlt_app? And how will certain parameters relevant to the MLT algorithm (like the number of bounces) be passed on to the relevant functions (like rt_shootray() and a_hit())?

And regarding mlt_defs.h, I think it is pretty useless; I've moved the definitions into viewmlt.c.

03/06/2008 UPDATE: Partially solved the second step: viewmlt.c now interacts with the other files (and rtmlt.vcproj compiles, which is awesome, as the project had been temporarily removed from MSVC 8's solution for not compiling). This was done by adding the correct functions to the file (following viewdummy.c), and I guess parsing the command line and opening the .g file will not be done in viewmlt.c.

Commit Logs from the week:

  • Added compilation support for MSVC 9. To use this feature, simply load /misc/win32-msvc9/brlcad/brlcad.sln
  • Added the MLT-related files (mlt_defs.h and viewmlt.c) to the src/rt/ directory and edited the makefiles.
  • 03/06/2008 – Successfully added do.c, worker.c, main.c and opt.c to the rtmlt project and added the relevant functions to make it rt compatible. Removed mlt_defs.h from the project and moved the definitions to the .c file.