
MA Khatri

Hello! I'm MA. I recently graduated from Northwestern University with a Master's in Computer Graphics and Vision, and I also have a Bachelor's in Physics (with a concentration in Astronomy) and Computer Science. My main interests are in computer graphics, ranging from real-time rendering to offline path-traced rendering.

OpenGL Renderer and CPU Path Tracer

Inspired by my earlier WebGL work on a JavaScript-based ray tracer, I wanted to re-create the project in OpenGL and use it as a platform to take a deeper dive into rasterized graphics and ray tracing. I followed The Cherno’s YouTube series on creating an OpenGL-based rasterized renderer, which guided me through a basic setup for an extendable OpenGL renderer with ImGui and support for multiple types of shaders and textures. I then extended the rasterized renderer to complete the course assignments for Cem Yuksel’s Interactive Graphics course, adding environment maps, reflections, shadow maps, and bump mapping with tessellation and geometry shaders.

Bubble Localization and Rendering for the SBC

My master’s thesis was on developing methods for triangulating the positions of bubbles in the superheated liquid argon bubble chambers of the Scintillating Bubble Chamber (SBC) collaboration. On the surface, localization for these chambers seems to be a matter of simple three-view triangulation. However, triangulation is complicated by the fact that the cameras image the chamber through several distortion-inducing refractive surfaces which need to be accounted for. In my report, I go over the challenges in bubble localization in more detail, propose methods for overcoming them, describe how I tested the methods by creating photo-realistic renders of the chamber using Mitsuba 3, and provide metrics on which factors most influence the resulting triangulation errors.
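As a rough illustration of the baseline this work starts from, here is a minimal sketch of standard multi-view linear (DLT) triangulation, which ignores the refractive surfaces entirely; the camera projection matrices and pixel observations below are placeholder assumptions for illustration, not values from the actual chambers.

```python
import numpy as np

def triangulate_dlt(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3D point from N pinhole views.

    proj_mats: list of 3x4 camera projection matrices P
    pixels:    list of (u, v) observations of the same bubble in each view

    Each view contributes the rows u*P[2] - P[0] and v*P[2] - P[1]
    to a homogeneous system A @ X = 0, solved via SVD.
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Hypothetical example: three cameras observing a point near the optical axis.
P0 = np.hstack([np.eye(3), np.zeros((3, 1))])
P1 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])
P2 = np.hstack([np.eye(3), np.array([[0.0], [-0.1], [0.0]])])
X_true = np.array([0.02, 0.03, 1.0, 1.0])
pix = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P0, P1, P2)]
print(triangulate_dlt([P0, P1, P2], pix))  # ~ [0.02, 0.03, 1.0]
```

In the actual chambers, the straight-ray projection assumed by this model breaks down because each camera images the bubbles through several refractive surfaces, which is what the proposed methods account for.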

Literature Reviews on Structure from Motion and Deep Learning Generative Models

The last two courses in the Computer Vision sequence, Advanced Computer Vision and Statistical Pattern Recognition, required literature reviews on a topic of our choosing. I chose to write surveys on large-scale structure from motion using bundle adjustment and on deep learning generative models. You can view the surveys below:

    Large-Scale Structure from Motion / [pdf]
    Deep Learning Generative Models / [pdf]

Path of Steel

An isometric action RPG made in Unity in collaboration with Nuremir Babanov, Mauro Herrera, and Ege Yilmaz as part of the Game Design Studio course.

While all members of the team worked on several aspects of the game, the general tasks were split up as follows:

  • Enemy design and AI
    • Nuremir
    • MA
  • Character controls, abilities, UI
    • Ege
  • Level layout and design
    • Mauro

The code repository for the game can be accessed here.

Generative Adversarial Networks

The final project for my Deep Learning course, completed in collaboration with Ben Fisk.

This project revolved around implementing Generative Adversarial Networks (GANs) to produce synthetic images that look visually similar to the training data. GANs are a type of deep generative model which aims to replicate training data by simultaneously training a generator and a discriminator. The two components of the GAN play a two-player minimax game: the generator aims to trick the discriminator into believing its generated examples are part of the real dataset, while the discriminator tries to maximize its ability to correctly distinguish between real and fake data. For more information on how GANs work, I recommend reading the original paper by Goodfellow et al. (2014).
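To make the minimax setup concrete, here is a minimal sketch of one GAN training step in PyTorch, assuming simple fully connected generator and discriminator networks and a generic batch of flattened images; the layer sizes and hyperparameters are illustrative placeholders, not the ones used in the project.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28  # placeholder dimensions

# Generator: noise z -> fake image; Discriminator: image -> P(real)
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_batch):
    n = real_batch.size(0)
    real_labels = torch.ones(n, 1)
    fake_labels = torch.zeros(n, 1)

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    z = torch.randn(n, latent_dim)
    fake = G(z).detach()  # don't backprop into G on this step
    d_loss = bce(D(real_batch), real_labels) + bce(D(fake), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: fool D, i.e. push D(G(z)) toward 1.
    z = torch.randn(n, latent_dim)
    g_loss = bce(D(G(z)), real_labels)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Usage with a random stand-in batch of 16 "images":
print(train_step(torch.randn(16, img_dim)))
```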

Panoramic Image Stitching

Image stitching is the process of combining several images of a scene taken from approximately the same viewpoint but at different angles to create a larger resulting image. One of the underlying assumptions of this process is that the fields of view of the images to be stitched slightly overlap. This assumption allows for the detection of common features within the images. The relative positions of the features allow for the calculation of a matrix transformation (a homography) which describes how to warp one image to fit the coordinate system of another. Once the coordinate transformation is determined, the images can be warped, combined, blended together, and then cropped to produce a larger, panoramic image of the scene.
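As a rough sketch of the pipeline described above, here is a two-image version using OpenCV; the filenames are placeholders, and the choice of feature detector (ORB) and the simple paste instead of proper blending and cropping are my own simplifying assumptions.

```python
import cv2
import numpy as np

img1 = cv2.imread("left.jpg")   # placeholder filenames
img2 = cv2.imread("right.jpg")

# 1. Detect and describe features in both images.
orb = cv2.ORB_create(4000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# 2. Match features across the overlapping regions.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

# 3. Estimate the homography that warps img2 into img1's coordinate system.
src = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# 4. Warp img2 onto a larger canvas and paste img1 over the left side.
#    (Blending and cropping are omitted in this sketch.)
h1, w1 = img1.shape[:2]
pano = cv2.warpPerspective(img2, H, (w1 + img2.shape[1], h1))
pano[:h1, :w1] = img1
cv2.imwrite("panorama.jpg", pano)
```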