Hi, I’m Haralambos Kokkinakos, a passionate game developer and computer graphics enthusiast actively seeking opportunities in gameplay and/or graphics engineering starting in Winter 2026.
I'm currently pursuing my master's degree in Computer Graphics and Game Technology at the University of Pennsylvania after completing my undergraduate degree in Computer Engineering at Villanova University.
I thrive on learning cutting-edge graphics techniques and applying them to create immersive, memorable experiences for players and viewers, as shown in my work below.
Throughout my education and professional experiences, I have developed many skills, including machine learning, software engineering, and computer graphics, which I continue to apply in my work and personal projects.
Please feel free to explore my previous work by clicking on the projects below.
During the Fall 2025 semester, I took GPU Programming and Architecture (CIS 5650) at the University of Pennsylvania, a course widely known for its difficulty, its attention to optimization detail, and its coverage of complex topics in GPU programming and graphics. Thus far, I have implemented a boids simulation, stream compaction, and path tracing in CUDA, as well as Forward+ and clustered deferred rendering in WebGPU. These projects are open source, and their repositories can be found on my GitHub profile, accessible from the linked title above. Each README includes a detailed description of the implementation, results, and a performance analysis.
Unity Space Invaders Remake
As part of my Game Design Practicum course, I implemented a remake of the classic game Space Invaders. The remake was built in Unity using 3D assets and realistic physics. Beyond the original game's features, I added mechanics that introduce more resource management.
The player starts with limited ammo, which is consumed when firing bullets at the aliens. Each shot eventually falls back down onto the platform at the bottom of the screen. To reload, the player must push spent ammo off the sides of the platform or wait for it to regenerate.
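As a rough sketch of the resource-management side of this mechanic, the following hypothetical ammo pool consumes a round per shot and regenerates on a timer; the struct, fields, and interval are illustrative, not the actual Unity implementation.

```cpp
#include <cassert>

// Hypothetical ammo pool: firing consumes a round, and a timer
// slowly regenerates rounds up to a cap while the player waits.
struct AmmoPool {
    int rounds = 5;
    int cap = 5;
    float regenTimer = 0.0f;
    float regenInterval = 3.0f;  // seconds per regenerated round (assumed)

    // Returns false if the player is out of ammo.
    bool tryFire() {
        if (rounds <= 0) return false;
        --rounds;
        return true;
    }

    // Called once per frame with the elapsed time.
    void update(float dt) {
        if (rounds >= cap) { regenTimer = 0.0f; return; }
        regenTimer += dt;
        if (regenTimer >= regenInterval) {
            regenTimer -= regenInterval;
            ++rounds;
        }
    }
};
```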
Facial Animation Machine Learning
During the summer of 2025, I worked under Dr. Stephen Lane on an independent study aimed at developing a pipeline for character animation using artificial intelligence. The goal of the study was to recover character animation from an AI-generated video of a character. To accomplish this, I generated AI videos of a character and used differentiable rendering to calculate the error between frames of the AI-generated video and renders of the character model. Then, I trained a neural network to predict blendshape weights that reduce this error.
The network was trained on thousands of renders of the character's face with randomized blendshapes. The result is the video above, which showcases how accurately the network predicts blendshapes even when the character in the AI-generated video has neck movement. The network also outputs a CSV file containing the blendshape values at each frame, so the character can be animated in game engines like Unreal Engine and Unity.
GLSL Monte Carlo Path Tracer
The video above demonstrates my implementation of a Monte Carlo path tracer in GLSL. The project proceeded in several steps, starting with the implementation of warping functions. The path tracer uses these warping functions to randomly generate rays so that it can correctly simulate Lambertian shading; I also derived the probability density function for each warping function. Next, I implemented a naive variant of the Monte Carlo path tracer, which recursively bounces rays up to a specified depth and solves the light transport equation. Then, I implemented light source sampling, which bounces rays once directly toward a light source, checking for visibility to that light. Finally, I combined both approaches into a multiple importance sampling path tracer and added recursion to it, producing the full path tracer.
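As an example of one such warping function, here is a CPU-side sketch of cosine-weighted hemisphere sampling (Malley's method: sample a disk uniformly, then project onto the hemisphere), whose PDF is cos(θ)/π — the weighting that matches a Lambertian BRDF. The actual project is in GLSL; this is an illustrative translation.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

const float kPi = 3.14159265358979f;

// Warp a uniform [0,1)^2 sample to a cosine-weighted direction
// on the z-up hemisphere.
Vec3 squareToHemisphereCosine(float u1, float u2) {
    float r = std::sqrt(u1);           // uniform disk radius
    float phi = 2.0f * kPi * u2;       // uniform disk angle
    float x = r * std::cos(phi);
    float y = r * std::sin(phi);
    float z = std::sqrt(std::max(0.0f, 1.0f - u1));  // project up
    return {x, y, z};
}

// PDF of the warp with respect to solid angle; w.z is cos(theta).
float hemisphereCosinePdf(const Vec3& w) {
    return (w.z > 0.0f) ? w.z / kPi : 0.0f;
}
```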
C++ and OpenGL Mini-Minecraft Project
This video showcases a group project in which we developed our own implementation of Minecraft.
I was responsible for implementing player physics including player movement, player controls, collision detection, and breaking/placing blocks.
I also implemented efficient rendering using multithreading; this required working with my teammates to understand their procedural terrain implementation so that terrain data and vertex buffers could be generated on worker threads.
My final feature was a day/night cycle for the Minecraft world. To accomplish this, I developed a custom raytraced sky that uses noise functions to generate windy and cloudy effects. I used a handful of color palettes to represent different times of day and shaded the sky based on each ray's direction. The current time of day then determines which palette to use and whether to interpolate between palettes in order to smoothly transition between times of day.
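A minimal sketch of the palette-blending idea, with made-up colors and a simple cosine-based day/night blend factor (the actual project uses multiple palettes and noise-driven detail):

```cpp
#include <cassert>
#include <cmath>

struct Color { float r, g, b; };

// Linear interpolation between two palette colors.
Color lerp(const Color& a, const Color& b, float t) {
    return {a.r + t * (b.r - a.r),
            a.g + t * (b.g - a.g),
            a.b + t * (b.b - a.b)};
}

// timeOfDay in [0,1): 0 = midnight, 0.5 = noon.
// The blend factor peaks at noon and falls to 0 at midnight,
// giving a smooth transition between the two palettes.
Color skyColor(float timeOfDay) {
    const Color night{0.02f, 0.02f, 0.10f};  // illustrative palette
    const Color day{0.45f, 0.70f, 0.95f};    // illustrative palette
    float t = 0.5f - 0.5f * std::cos(2.0f * 3.14159265f * timeOfDay);
    return lerp(night, day, t);
}
```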
NPC AI Behavior Animation Tool
A project I had the opportunity to work on during my Computer Animation course was an NPC AI behavior animation tool. In it, I implemented a variety of AI behaviors, including seek, flee, arrival, departure, wander, obstacle avoidance, separation, cohesion, and alignment.
This tool can also be extended into a Unity plugin to solve for character walking animations when combined with my forward/inverse kinematics and skinning solvers. The behavior implementations are written in C++, and the graphical interface uses OpenGL, ImGui, and additional third-party libraries.
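To give a flavor of these behaviors, here is a minimal sketch of the classic seek steering force: the difference between the desired velocity (toward the target at maximum speed) and the current velocity. Flee is the same with the desired direction negated. The struct and function names are illustrative.

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Seek steering: steer = desired - current velocity,
// where desired points at the target at max speed.
Vec2 seek(Vec2 pos, Vec2 vel, Vec2 target, float maxSpeed) {
    Vec2 desired{target.x - pos.x, target.y - pos.y};
    float len = std::sqrt(desired.x * desired.x + desired.y * desired.y);
    if (len > 0.0f) {  // normalize and scale to max speed
        desired.x = desired.x / len * maxSpeed;
        desired.y = desired.y / len * maxSpeed;
    }
    return {desired.x - vel.x, desired.y - vel.y};  // steering force
}
```

Behaviors like arrival, wander, and the flocking trio (separation, cohesion, alignment) are built from the same desired-minus-current pattern with different desired velocities.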
Forward/Inverse Kinematic Solver and Skinning
This video showcases a tool I worked on that implements forward and inverse kinematic solvers. I was responsible for the forward kinematics solver, which tells a character skeleton how to move given certain inputs; in this case, the inputs were BVH files of various animations, as shown in the video. Next, I added inverse kinematics using both limb-based and cyclic coordinate descent solvers, so the user can control the positions of selected joints on the skeleton. Finally, I implemented a binding and skinning tool that allows the character's base skin to be re-bound to other skins using linear blend skinning.
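As a sketch of linear blend skinning, each skinned vertex is a weight-blended sum of the bind-pose vertex transformed by every influencing joint. For brevity the joint transforms below are pure translations; a real solver uses full joint matrices composed with inverse bind matrices.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Linear blend skinning for one vertex:
// out = sum_j(weight_j * T_j(bindPos)), with translation-only T_j here.
Vec3 skinVertex(const Vec3& bindPos,
                const std::vector<Vec3>& jointTranslations,
                const std::vector<float>& weights) {
    Vec3 out{0.0f, 0.0f, 0.0f};
    for (size_t j = 0; j < weights.size(); ++j) {
        // Position of the vertex under joint j's transform.
        Vec3 p{bindPos.x + jointTranslations[j].x,
               bindPos.y + jointTranslations[j].y,
               bindPos.z + jointTranslations[j].z};
        out.x += weights[j] * p.x;
        out.y += weights[j] * p.y;
        out.z += weights[j] * p.z;
    }
    return out;
}
```

The weights per vertex sum to one, so a vertex influenced equally by two joints lands halfway between their two transformed positions.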
Particle System and Firework Simulation
During the fall semester of 2024, I developed a particle system in C++. The video above showcases the features I built, including gravity, randomized particle directions, varying lifetimes, and a particle color editor. In the firework simulator, pressing the space bar launches a firework that travels upward in a random direction. The firework and the particles it generates are always affected by gravity, and additional forces such as drag, wind, attraction/repulsion, and random forces can be introduced.
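The core of such a system is the per-frame particle update; here is a minimal sketch using explicit Euler integration with gravity as the only force (the field names and units are illustrative):

```cpp
#include <cassert>
#include <cmath>

struct Particle {
    float px, py;  // position
    float vx, vy;  // velocity
    float life;    // seconds remaining before the particle expires
};

// One explicit Euler step: accumulate forces into acceleration,
// then advance velocity and position; lifetime ticks down.
void updateParticle(Particle& p, float dt) {
    const float gy = -9.8f;  // gravity (assumed units); drag, wind,
                             // etc. would be added here as well
    p.vy += gy * dt;         // integrate acceleration into velocity
    p.px += p.vx * dt;       // integrate velocity into position
    p.py += p.vy * dt;
    p.life -= dt;
}
```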
OpenGL Post Processing Effects with Custom Vertex and Fragment Shaders
This video showcases a project where I implemented a variety of post processing effects as well as some vertex and fragment shaders.
This project was done in OpenGL and C++. I was given a framework code base and was responsible for understanding it so that I could implement these effects. The vertex and fragment shaders compute Blinn-Phong lighting and perform texture sampling for matcap shading. The last surface shader is a custom one that deforms the vertices by blending between the base model and a sphere while interpolating color based on time and distance from the center of the frame. Post-processing effects can then be applied to the shaded frame; these include a Sobel filter and a Gaussian blur, along with a custom filter that applies a Worley-noise-based effect, creating a mosaic-like image.
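As an illustration of the Sobel pass, here is a CPU sketch that convolves the horizontal and vertical 3x3 kernels at one interior pixel of a grayscale image and returns the gradient magnitude; the GLSL version does the same per-fragment on the rendered frame.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sobel edge response at interior pixel (x, y) of a grayscale image.
float sobelAt(const std::vector<std::vector<float>>& img, int x, int y) {
    static const int gx[3][3] = {{-1, 0, 1}, {-2, 0, 2}, {-1, 0, 1}};
    static const int gy[3][3] = {{-1, -2, -1}, {0, 0, 0}, {1, 2, 1}};
    float sx = 0.0f, sy = 0.0f;
    for (int j = -1; j <= 1; ++j) {
        for (int i = -1; i <= 1; ++i) {
            float v = img[y + j][x + i];
            sx += gx[j + 1][i + 1] * v;  // horizontal gradient
            sy += gy[j + 1][i + 1] * v;  // vertical gradient
        }
    }
    return std::sqrt(sx * sx + sy * sy);  // gradient magnitude
}
```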
C++ Rasterizer and Custom Lighting
The video above displays a project in which I implemented an entire rasterizer pipeline in C++ and OpenGL. I was given a base code that I needed to analyze and extend into my own full 2D and 3D rasterizer. The rasterizer takes in JSON files of data parsed from an OBJ file representing a model. I implemented classes and methods to convert the model's faces into triangles, which are enclosed in bounding boxes so that edge intersections with each row of the bounding box can be computed efficiently. The resulting coordinates are then transformed from world space to camera space, to screen space, and finally to pixel space. For 3D models, perspective-correct interpolation is applied using barycentric coordinates. At this point, all fragments in the scene are known, and I use z-buffering to determine which fragment is shown at each pixel. Lighting is then applied using Lambertian shading, and I added two extra lighting effects: Blinn-Phong and toon shading.
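The perspective-correct interpolation step can be sketched as follows: attributes are divided by each vertex's camera-space depth, blended with the screen-space barycentric weights, and the result is un-divided by the interpolated 1/z. The function signature is illustrative.

```cpp
#include <cassert>
#include <cmath>

// Perspective-correct interpolation of a scalar attribute across
// a triangle, given screen-space barycentric weights b0..b2 and
// camera-space depths z0..z2 at the three vertices.
float perspectiveInterp(float a0, float a1, float a2,   // vertex attribute
                        float z0, float z1, float z2,   // camera-space depths
                        float b0, float b1, float b2) { // barycentric weights
    float invZ = b0 / z0 + b1 / z1 + b2 / z2;            // interpolated 1/z
    float attrOverZ = b0 * a0 / z0 + b1 * a1 / z1 + b2 * a2 / z2;
    return attrOverZ / invZ;
}
```

When all three depths are equal the formula reduces to plain barycentric interpolation, which is why the 2D rasterizer can skip the division entirely.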
2D Modeling Design and Scene Graph Traversal
During my first semester of the Computer Graphics and Game Technology master's program at the University of Pennsylvania, I developed C++ classes and methods to create and traverse scene graphs that display polygons to the screen.
The video above shows a 2D model of a character I designed by creating polygons and applying transformations including translation, rotation, and scaling. This model can then be modified in the GUI editor by the user.
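The traversal can be sketched as a recursive walk in which each node's local transform is composed with its parent's global transform before recursing into the children; for brevity the transform here is reduced to a 2D translation, whereas the real tool composes full transformation matrices.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A scene graph node with a local transform (translation only here)
// and child nodes.
struct Node {
    float tx, ty;
    std::vector<Node> children;
};

// Depth-first traversal: compose each node's local transform with
// the parent's global transform, then visit the children.
void traverse(const Node& n, float px, float py,
              std::vector<std::pair<float, float>>& out) {
    float gx = px + n.tx;
    float gy = py + n.ty;
    out.push_back({gx, gy});  // record the node's global position
    for (const Node& c : n.children)
        traverse(c, gx, gy, out);
}
```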
Applied Machine Learning Projects
During my junior and senior years, I took an undergraduate and a graduate course in applied machine learning, completing two fully developed machine learning models. The first model aimed to predict how well a song would perform on YouTube and Spotify given information about the song's genre, beat, tempo, and more; it reached 94.3% accuracy in predicting whether a song's like-to-view ratio would exceed a user-provided threshold. The second was a classification model for predicting a player's probability of winning in Teamfight Tactics, developed by Riot Games. This model was trained to develop a fundamental understanding of the game by analyzing augment selection, active traits, units, and items to determine whether the player would win or lose. Both models were built using Python, scikit-learn, and PyTorch.
Augmented Reality Sampling Mission Tool
In the summer of 2022, I worked at DCS Corporation, where I got the chance to design an augmented reality app that helps track information about hazards. I wrote it in Java and Kotlin using Google's Geospatial API. My team at the company has since picked up where I left off and is implementing additional functionality that I recommended.
Experience
For my most recent internship, during 2023, I worked at the Johns Hopkins Applied Physics Laboratory in the Air and Missile Defense Sector as a software engineer. I contributed to machine learning models aimed at decreasing the runtime of missile trajectory calculations, and I designed protocols for modules within Cerberus, software developed by APL and Lockheed.
During the summer of 2022, I worked at DCS Corporation as a software engineer. My first project of the summer was creating 3D assets for Virtual Tactical Assault Kit, a virtual reality software used by the military. I then helped debug and add functionality to an Android application used to read and track radiation sensors, and I developed extensions to another military application, the Android Team Awareness Kit. Finally, I developed a brand new augmented reality application used to create sampling missions for ground troops in areas affected by hazardous warfare; more info can be found in the Augmented Reality Sampling Mission Tool section.
In the summer of 2021, I worked at Stanley Black and Decker as an electrical engineer on a team of mechanical and electrical engineers developing new hand tools. During the summer, I 3D-printed parts for tools, tested circuits, and designed testing modules to aid the design of various equipment. I also led a multidisciplinary group of finance, business, and engineering interns; we developed a marketing strategy based on sales data we analyzed using a machine learning algorithm in Python.
Education
University of Pennsylvania
Major: Master of Engineering in Computer Graphics and Game Technology
GPA: 3.90/4.0
Relevant Courses: Interactive Computer Graphics, Computer Animation, 3-D Computer Modeling, Advanced Computer Graphics, Advanced Topics in Computer Graphics and Animation, Game Design and Development
Villanova University
Major: Bachelor of Science in Computer Engineering
Minor: Computer Science
GPA: 3.87/4.0
Relevant Courses: Computer Graphics, Senior Design Capstone, Software Engineering (graduate), Advanced Machine Learning (graduate), Game Development in Unreal Engine 5 with C++ (Udemy), Applied Machine Learning, Discrete Time Signals & Systems, Computer Networks, Computer and Network Security, Computer Architecture, Digital Electronics, Operating Systems, C++ Algorithms and Data Structures, Engineering Probability & Statistics, Principles of Database Systems, Discrete Structures, Electrical Circuit Fundamentals, Fundamentals of Computer Engineering I & II, Physics Mechanics, Physics Electricity & Magnetism, Differential Equations with Linear Algebra