Shengze Zhang
Aspiring Graphics Programmer
About Me
- Currently I am a graduate student at the Entertainment Technology Center, Carnegie Mellon University.
- I am a rendering guy, totally obsessed with the beautiful images generated by computers and the amazing algorithms behind them.
Currently I am writing my own game rendering engine and a ray-tracing-based renderer.
Education
- Entertainment Technology Center, Carnegie Mellon University
- Aug. 2014 - May 2016
- School of Software, Shanghai Jiao Tong University
- Sep. 2010 - Jul. 2014
Study and Research Interests
- Modern physics engines.
- Real-time physically based simulation, including deformable/breakable object simulation.
- Real-time physically based rendering.
Professional Experiences
- Software and Services Group, Intel Asia-Pacific Research & Development Ltd.
- Software Development Intern
- Zizhu, Shanghai, China
- Oct. 2013 - Apr. 2014
Contact Me
- Cell: (+1)412-980-3422
- Email: shengzez AT andrew.cmu.edu
View in a standalone page or download it.
Create a world in two weeks!
Building Virtual Worlds (BVW) is a practical course established by Prof. Randy Pausch.
It challenges students to build a virtual world with a randomly assigned team of five in TWO weeks, for FIVE rounds in ONE semester. In each round, each team needs to fulfill different requirements and use different platforms to build a highly immersive, interactive and fun experience.
Since 2013, BVW has held a grand annual event called the BVW Festival. At the festival, each of the 20 worlds selected from the 80+ worlds built that semester is shown in a decorated, themed room, and ETC invites people to attend and play them. This year, more than 400 people attended the BVW Festival!
My Round 4 and Round 5 worlds were selected for the BVW Festival.
Alt.Ctrl.GDC Alternative Input Exhibit: Book of Fate, BVW ROUND 4

Book of Fate
Round 4 is a storytelling round, which means we need to tell an interactive and immersive story in a virtual world.
We created a magic world in which a young wizard uses his wisdom and magic to rescue his master, who was kidnapped by a dark wizard. It's not a complicated story, but we used storytelling techniques to make it appealing.
To make the game more interesting, we used a Makey Makey to build a magic book with which the player can charge up magic power and trace a magic pattern to fire it.





One interesting thing is that we used 3D for the environment and 2D for everything else in the world, leading to a unique graphic style.
I was responsible for magic design, cutscenes and the software framework.
- Sphere magic spell and a shield with hit distortion.
- Area magic spell.
- Portal design and effect.
During festival preparation, I integrated all the battle scenes, redesigned the water attack and added a lot of combat feedback.
Private Ryan
Round 5, the last round of BVW, is what we call the festival round. The virtual world in this round is designed and tailored for the festival.
Because of this guideline, many teams created multiplayer games. Our team instead wanted to use new technologies and create a world that could give players a unique experience.
Private Ryan is a virtual reality game in which the player first takes virtual balance training and then executes a mission using that balance skill. The player experiences the feeling of height and the difficulty of balancing in a virtual world.
To implement this game, we needed to build a moveable Oculus Rift and a physical-to-virtual world mapping, which led to several really hard challenges for us.
Moveable Oculus Rift
For any VR world where the player needs to move, two things are necessary: direction and movement. In most OVR games, the Oculus itself serves as the direction source and a button on a gamepad acts as the movement trigger.
This simple solution leads to two major problems: the player cannot turn their head independently of where they move, and the sense of immersion is poor. ETC has many OVR worlds that try to solve these problems. Among them, Doll uses a wireless mouse to solve the second problem, enabling the player to walk in the virtual world. However, the range of movement is still restricted to a small area near the PC, and the first problem remains unsolved.
To solve these problems, we went a step further. We used a MacBook in a backpack worn by the player to remove the movement restriction, and two Kinect 2 sensors to calculate the direction of the player's body, their body motion and their movement.
As a result, the player can turn their head and move in a much bigger physical space with the combination of the Oculus Rift and two Kinect 2 sensors.
Graphics Power Limitation
The Oculus Rift does not support dual graphics cards, and this is a problem on laptops because we cannot disable the integrated graphics card: almost all modern laptops use the integrated graphics card to output the image.
Therefore, we could only use laptops with an integrated graphics card. (We chose a MacBook for its light weight.) At the same time, a VR experience needs realistic graphics and as high a frame rate as possible, so we did a lot of optimization to reduce the rendering burden without losing much graphical detail:
- Light Map
- Draw Call Batching
- Material Compression
Oculus Rift Drift
Drift is a perennial problem for the OVR, and indeed for all head-mounted virtual reality equipment. The DK1 uses a magnetometer for calibration, and the DK2 adds a camera to help with calibration.
Obviously, the OVR camera was not available in our world, and the magnetic field in our themed room was extremely unstable. So we took the raw data from the OVR and used PID control to perform the calibration in a single thread.
The calibration result was better than the OVR without magnetometer calibration (i.e., no calibration on the yaw axis). But because of CPU power limitations (our sampling rate could not reach the 1000 Hz that the OVR does), we still used the default OVR data for our final build.
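To give a flavor of the approach, here is a minimal single-axis PID correction loop in C++. The gains, the simulated drift and the 200 Hz loop rate are illustrative placeholders, not the values from our actual build:

```cpp
#include <cstdio>

// Minimal single-axis PID controller. The gains below are placeholder values
// for illustration, not the ones tuned for the actual festival build.
struct PID {
    double kp, ki, kd;
    double integral = 0.0, prevError = 0.0;
    double step(double error, double dt) {
        integral += error * dt;
        double derivative = (error - prevError) / dt;
        prevError = error;
        return kp * error + ki * integral + kd * derivative;
    }
};

int main() {
    // Simulated inputs: a drift-free reference yaw (e.g. body direction from the Kinect)
    // and a gyro-integrated yaw that slowly drifts away from it.
    const double dt = 1.0 / 200.0;           // far below the 1000 Hz the official SDK samples at
    double referenceYaw = 0.0;
    double rawYaw = 0.0;
    double correction = 0.0;
    PID pid{0.8, 0.05, 0.01};

    for (int i = 0; i < 2000; ++i) {
        rawYaw += 0.02 * dt;                 // constant drift of the simulated gyro yaw
        double error = referenceYaw - (rawYaw + correction);
        correction += pid.step(error, dt);   // push the corrected yaw back toward the reference
        if (i % 400 == 0)
            std::printf("t=%5.2fs corrected yaw = %+.5f rad\n", i * dt, rawYaw + correction);
    }
}
```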
Responsibilities
- Graphics enhancement: SSAO shader (used in the desktop view)
- Optimization: increased the frame rate from 20 to 50 FPS in the first scene and from 20 to 40 FPS in the second scene
- Calibration algorithm (PID control)
Lightning Round
So, our lightning round game's name is... LIGHTNING ROUND!
The reason this round is called the lightning round is that it lasts only one week.
As a result, we created an experience that is simple, engaging and immersive, abandoning anything irrelevant to these qualities.
Basically, the lightning round game is a physical world rather than a virtual one.
There are several checkpoints on the ground, and two players need to reach the lit point shown on the computer screen while maintaining a connection with each other and with at least one other point.
There are also some randomly assigned bad points which the players should not touch.
So the essential strategy of this game is for one player to hold one point and stretch so that the other player can reach the assigned lit point.
I was responsible for the Makey Makey setup and the electric fencing around the arena.
Awaken The Spring
In this round we needed to create a world that lets naive guests feel they have a lot of freedom. This round is usually called the naive guest round.
Our team didn't develop a game in this round at all; we created an interactive art experience instead. Guests use their hands, tracked by a Kinect, to explore and interact with the virtual art world. It's not an aggressive, energetic gaming experience, but a peaceful, beautiful journey that relaxes both body and mind.
We ran into a lot of difficulties related to presenting 3D effects in a 2D view.
I was responsible for visual feedback, graphics enhancement and the animation system.
- Sprite sheet animation control.
- Snow system.
- Butterfly interaction skill, including a fragment shader and a procedural particle system.
- Sun shafts adapted to the different layers.
- Distortion shader simulating a 3D water effect in the 2D view.
Anna Lee
The subject of this round is that the guest helps a character who is afraid of something. The fear can be anything: another character, heights, an object, etc.
We created a story in which a little girl named Anna Lee visits her dead grandpa in a cemetery and is scared by evil ghosts, so her grandpa (the guest) protects her and guides her out of there.
An Oculus Rift is used to turn this first-person game into a virtual reality world, and a PS Move is used as the input device.
Since this and later rounds are team work, I started to focus on graphics enhancement such as particle systems, effects and shaders.
- OVR and PS Move integration.
- Integrated thunder effect, including visual, light and sound.
- Dark fog to create a horror ambience in the cemetery.
- Rain system.
Escape From Space Station
Escape from Space Station is a short adventure game. The player needs to use hint mode (Q) to find clues and try to escape from the base.
As a solo round, Round 0 mostly helps students get familiar with their roles and tools. Each student needs to demonstrate that he or she has the necessary skills to take responsibility for a specific team role.
As a result, I focused on implementing some reusable features and a framework in this round.
Self-adaptive camera follow
- The camera smoothly follows an appointed target.
- The follow distance is automatically adjusted according to occlusion, to make sure the camera's view is never blocked (sketched below).
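A rough C++ sketch of the idea; the engine raycast is replaced by a hypothetical occlusionDistance() test against a single wall, and all constants are illustrative:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>

// Very small vector helper for the sketch.
struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 b) const { return {x + b.x, y + b.y, z + b.z}; }
    Vec3 operator-(Vec3 b) const { return {x - b.x, y - b.y, z - b.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    float length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 normalized() const { float l = length(); return {x / l, y / l, z / l}; }
};

// Hypothetical occlusion query: returns the distance along (origin, dir) at which something
// blocks the view, or maxDist if the path is clear. In the real project this was an engine
// raycast; here a single wall at z = 5 stands in for the scene geometry.
float occlusionDistance(Vec3 origin, Vec3 dir, float maxDist) {
    if (std::fabs(dir.z) < 1e-6f) return maxDist;
    float t = (5.0f - origin.z) / dir.z;
    return (t > 0.0f && t < maxDist) ? t : maxDist;
}

// One update step of the self-adaptive follow camera: shrink the follow distance when
// occluded, then move smoothly toward the desired position (exponential smoothing).
Vec3 updateCamera(Vec3 camPos, Vec3 target, Vec3 backDir, float preferredDist, float dt) {
    float freeDist = occlusionDistance(target, backDir, preferredDist);
    float dist = std::min(preferredDist, freeDist * 0.9f);   // keep a margin in front of the blocker
    Vec3 desired = target + backDir * dist;
    float smoothing = 1.0f - std::exp(-6.0f * dt);           // frame-rate-independent lerp factor
    return camPos + (desired - camPos) * smoothing;
}

int main() {
    Vec3 cam{0, 2, 3}, player{0, 1, 0};
    Vec3 back = Vec3{0, 0.3f, 1.0f}.normalized();            // camera sits behind and above the player
    for (int frame = 0; frame < 120; ++frame) {
        cam = updateCamera(cam, player, back, 8.0f, 1.0f / 60.0f);
        if (frame % 30 == 0)
            std::printf("frame %3d camera = (%.2f, %.2f, %.2f)\n", frame, cam.x, cam.y, cam.z);
    }
}
```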
Adventure game story framework
- A tree-based data structure is designed in which each node must be unlocked to continue to the next plot point or level (see the sketch after this list).
- Each node has GUI feedback and can be extended with specific features such as camera control.
- A highlight shader is used to mark interactive objects in hint mode.
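The unlock logic can be summarized with a small sketch like the following; the StoryNode type and the node names are hypothetical, and the real framework also carried GUI and camera hooks:

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical sketch of the node-unlock idea: each plot node stays locked until its
// prerequisite is completed, and completing a node unlocks its children.
struct StoryNode {
    std::string name;
    bool unlocked = false;
    bool completed = false;
    std::vector<StoryNode*> children;

    void complete() {
        if (!unlocked) { std::printf("[%s] is still locked\n", name.c_str()); return; }
        completed = true;
        std::printf("[%s] completed\n", name.c_str());
        for (StoryNode* child : children) {            // unlock the next plot steps
            child->unlocked = true;
            std::printf("  -> [%s] unlocked\n", child->name.c_str());
        }
    }
};

int main() {
    StoryNode wakeUp{"Wake up in the cell", true};
    StoryNode findCard{"Find the key card"};
    StoryNode openDoor{"Open the airlock"};
    wakeUp.children = {&findCard};
    findCard.children = {&openDoor};

    openDoor.complete();   // still locked: its prerequisite has not been completed yet
    wakeUp.complete();
    findCard.complete();
    openDoor.complete();
}
```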
Download the literature review & technical document.
Human skin rendering, or character rendering, is a promising next-generation computer graphics and game technology.
As a new researcher, I chose this topic not only out of my own great interest but also because it is one of the future directions of game engine development, and I started my "fresh" research under the supervision of Professor Xubo Yang, Digital Art Laboratory, Shanghai Jiao Tong University.
My work concentrates on two parts:
Skin Color Determination
- The Monte Carlo method is used to simulate light transport in multi-layered tissues.
- A three-layered skin model, consisting of sebum, epidermis and dermis, is adopted.
- Absorption and scattering are the two main simulation steps.
- An adjustable parameter model is applied, including the melanin volume fraction and the eumelanin-to-pheomelanin fraction in the epidermis, and the hemoglobin fraction in the dermis.
- The difference between linear absorption (faster) and exponential absorption (more accurate) is analyzed.
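As a rough illustration of the simulation loop (not the project code), here is a toy 1-D Monte Carlo random walk through a three-layer slab; the optical coefficients are placeholders for a single wavelength rather than measured sebum/epidermis/dermis data:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

// Toy Monte Carlo walk through a layered slab, in the spirit of MCML-style light transport.
// The absorption/scattering coefficients below are single-wavelength placeholders.
struct Layer { double mu_a, mu_s, thickness; };   // absorption, scattering (1/mm), depth (mm)

int main() {
    const Layer skin[3] = {
        {0.1, 10.0, 0.02},   // "sebum"     (placeholder values)
        {3.0, 30.0, 0.25},   // "epidermis" (placeholder values)
        {0.5, 20.0, 2.00},   // "dermis"    (placeholder values)
    };
    const double totalDepth = skin[0].thickness + skin[1].thickness + skin[2].thickness;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uni(1e-12, 1.0);

    const int photons = 100000;
    double reflected = 0.0;
    for (int i = 0; i < photons; ++i) {
        double z = 0.0, dirZ = 1.0, weight = 1.0;        // start at the surface, heading inward
        while (weight > 1e-4) {
            int li = (z < skin[0].thickness) ? 0
                   : (z < skin[0].thickness + skin[1].thickness) ? 1 : 2;
            double mu_t = skin[li].mu_a + skin[li].mu_s;
            z += dirZ * (-std::log(uni(rng)) / mu_t);    // exponential free path length
            if (z < 0.0) { reflected += weight; break; } // escaped back through the surface
            if (z > totalDepth) break;                   // transmitted out of the slab
            weight *= skin[li].mu_s / mu_t;              // survival (albedo) weighting: drop absorbed part
            dirZ = (uni(rng) < 0.5) ? 1.0 : -1.0;        // crude isotropic scattering collapsed to 1-D
        }
    }
    std::printf("diffuse reflectance estimate: %.3f\n", reflected / photons);
}
```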
Results for spectral power distribution (SPD) and skin color:
- Change in skin color as the melanin volume fraction f.mel varies from 1.3% to 43%, with the eumelanin fraction fixed at 0.7 and the hemoglobin fraction fixed at 0.5%; linear vs. exponential absorption.
- Change in skin color as the hemoglobin fraction f.h varies from 0.1% to 10%, with the melanin fraction fixed at 1.3% and the eumelanin fraction at 0.5; linear vs. exponential absorption.
- Change in skin color as the eumelanin fraction f.emel varies from 0 to 1, with the melanin fraction fixed at 43% and the hemoglobin fraction at 0.1%; linear vs. exponential absorption.
Subsurface Scattering Simulation
- A naive BSSRDF model is applied.
- When a ray hits the human mesh, the coordinate system is converted into the local normal-binormal-tangent frame. Light transport is then simulated in this normalized local frame, and the result is converted back to the world frame.
- The result, mainly the outgoing point, is projected back onto the real 3-D mesh, completing the BSSRDF model.
- This method is used primarily to eliminate the difficulty of simulating light transport in a real, arbitrary 3-D mesh.
- The correctness of this BSSRDF model relies on the assumption that skin is semi-infinite, so the result in highly curved regions (nose, ears) is heavily biased. Translucency is also reduced under this model.
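The local-frame conversion can be sketched as follows; the Frame type and the hit normal are illustrative, but the orthonormal-basis construction and the to-local/to-world transforms are the standard technique:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
double dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v)      { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Build a tangent/binormal/normal frame around the shading normal, convert a world-space
// direction into that local frame, and convert it back. Names are illustrative only.
struct Frame {
    Vec3 t, b, n;
    explicit Frame(Vec3 normal) {
        n = normalize(normal);
        Vec3 helper = std::fabs(n.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
        t = normalize(cross(helper, n));
        b = cross(n, t);
    }
    Vec3 toLocal(Vec3 w) const { return {dot(w, t), dot(w, b), dot(w, n)}; }
    Vec3 toWorld(Vec3 l) const {
        return {l.x*t.x + l.y*b.x + l.z*n.x,
                l.x*t.y + l.y*b.y + l.z*n.y,
                l.x*t.z + l.y*b.z + l.z*n.z};
    }
};

int main() {
    Frame frame(Vec3{0.3, 0.9, 0.3});              // normal at a hypothetical hit point
    Vec3 local = frame.toLocal(Vec3{0, 0, 1});     // simulate transport in the local frame...
    Vec3 back  = frame.toWorld(local);             // ...then map the outgoing direction back to world space
    std::printf("round trip: (%.3f, %.3f, %.3f)\n", back.x, back.y, back.z);
}
```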
Results at high resolution and for different human races:
- Result with 500 samples.
Skins of different human races are simulated with our model: Caucasian (f.m = 1.3%, f.em = 0.7, f.h = 0.5%), Asian (f.m = 16%, f.em = 0, f.h = 1%) and African (f.m = 43%, f.em = 0.7, f.h = 1%).
- Accounting for non-chromatic factors (wrinkles, pores) is the highest-priority improvement, and would greatly enhance the render quality.
The BSSRDF model needs to be completely reworked, in one of two ways:
- Using a practical model
- Simulating light transport in the real 3-D mesh
Despite acceleration techniques such as a K-D tree, multi-threading and caching, the render time is terrible, largely due to the Monte Carlo method. Again, there are two ways forward:
- Speeding up the Monte Carlo method itself
- Building a real BSSRDF model from which the in-out radiation distribution can be looked up
Final Project of Computer Graphics
Ray tracing is a fundamental rendering method, distinct from the rasterization-based rendering used by the GPU pipeline.
Theoretically, it takes more time to render but can create more realistic results than rasterization does.
The most powerful aspect of ray tracing is that, once a basic tracing framework is established, any small change can directly improve the rendered result.
Therefore, let me show my work in order from simple to complex to demonstrate the power of ray tracing.
Some scenes in the following render results are the same as those in Jacco Bikker's articles.
- Phong Shading
- Reflection
- Refraction
- Hard Shadow
- Antialiasing
- Soft Shadow
- Real Reflection
- Texture & Bump Map & Model

- Phong shading is a classical lighting model.
- Ambient, diffuse and specular terms are combined to produce the shading in this model.
- An acceptable result.
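For reference, a minimal Phong evaluation for a single light might look like this; the coefficients are illustrative, not the ones from my renderer:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }
Vec3 operator*(Vec3 v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Classic Phong shading for a single light: ambient + diffuse + specular.
double phong(Vec3 n, Vec3 toLight, Vec3 toViewer,
             double ka, double kd, double ks, double shininess) {
    n = normalize(n); toLight = normalize(toLight); toViewer = normalize(toViewer);
    double diff = std::fmax(0.0, dot(n, toLight));
    Vec3 reflectDir = n * (2.0 * dot(n, toLight)) - toLight;   // mirror of the light direction
    double spec = std::pow(std::fmax(0.0, dot(reflectDir, toViewer)), shininess);
    return ka + kd * diff + ks * spec;
}

int main() {
    double i = phong({0, 1, 0}, {1, 1, 0}, {0, 1, 1}, 0.1, 0.7, 0.4, 32.0);
    std::printf("shaded intensity: %.3f\n", i);
}
```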

- Reflection is added.
- The result is greatly enhanced by a patch of just 12 lines.

- Refraction is added.
- The result is awesome, and some really advanced effects, such as a magnifying glass, come for free.
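The reflection and refraction directions themselves are only a few lines each; below is a self-contained sketch using the standard mirror formula and Snell's law, with an illustrative air-to-glass index ratio:

```cpp
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 operator*(Vec3 v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

// Mirror reflection of an incoming direction about the surface normal.
Vec3 reflect(Vec3 in, Vec3 n) { return in - n * (2.0 * dot(in, n)); }

// Refraction by Snell's law; eta = n1 / n2. Returns false on total internal reflection,
// in which case the caller falls back to the reflected ray.
bool refract(Vec3 in, Vec3 n, double eta, Vec3& out) {
    double cosI = -dot(in, n);
    double k = 1.0 - eta * eta * (1.0 - cosI * cosI);
    if (k < 0.0) return false;                        // total internal reflection
    out = in * eta + n * (eta * cosI - std::sqrt(k));
    return true;
}

int main() {
    Vec3 in = normalize(Vec3{1, -1, 0});              // ray hitting a horizontal surface
    Vec3 n  = {0, 1, 0};
    Vec3 r  = reflect(in, n), t{};
    bool ok = refract(in, n, 1.0 / 1.5, t);           // air (1.0) into glass (1.5)
    std::printf("reflected (%.2f, %.2f, %.2f)\n", r.x, r.y, r.z);
    if (ok) std::printf("refracted (%.2f, %.2f, %.2f)\n", t.x, t.y, t.z);
}
```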

- The Beer–Lambert law relates the absorption of light to the properties of the material through which the light travels.
- The direct visual effect of Beer's law is that the transparent parts (the two big balls) are slightly dimmed compared with the result without Beer's law.
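A minimal sketch of the attenuation term, with placeholder per-channel absorption coefficients:

```cpp
#include <cmath>
#include <cstdio>

// Beer–Lambert attenuation of a ray travelling a given distance through an absorbing
// medium: transmittance = exp(-absorbance * distance).
double transmittance(double absorbance, double distance) {
    return std::exp(-absorbance * distance);
}

int main() {
    const double absorbanceRGB[3] = {0.15, 0.10, 0.05};   // illustrative per-channel coefficients
    const double pathLength = 2.0;                        // distance travelled inside the glass ball
    for (int c = 0; c < 3; ++c)
        std::printf("channel %d keeps %.1f%% of its intensity\n",
                    c, 100.0 * transmittance(absorbanceRGB[c], pathLength));
}
```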

- Shadow is added.
- The render result becomes more and more realistic.
- The word "Hard" means the sharp division between shadowed part and non-shadowed part, which is very different from real shadow(Such shadow is called soft shadow and the simulation of it can be seen in the following tab).

- Antialiasing is added.
- Jagged edges are largely eliminated (compare with the previous result in the Hard Shadow tab).

- This result is rendered with a point light, just for comparison with the following two results.
- ...But it's a new scene!

- The reason the shadow is "hard" is the point light source, which means there is only one sample on the light.
- Soft shadows can be implemented by turning the point light into an area light and taking multiple samples over its area.
- The result is good, but despite the relatively high sample rate of 8 * 8 = 64, a banding effect (see the small result above) is still detectable.

- Monte Carlo Sampling is used on the area light source.
- With the same sample rate of 64, the banding effect is completely eliminated.
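The essence of the technique is sketched below: the visibility of an area light is estimated by shooting shadow rays to random points on it. The single-sphere blocked() test stands in for the full scene intersection, and the geometry is made up for illustration:

```cpp
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
Vec3 operator-(Vec3 a, Vec3 b) { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 operator*(Vec3 v, double s) { return {v.x*s, v.y*s, v.z*s}; }
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// True if the segment from p to q is blocked by a sphere (centre c, radius r).
// This stands in for the scene intersection test of the full tracer.
bool blocked(Vec3 p, Vec3 q, Vec3 c, double r) {
    Vec3 d = q - p;
    double t = dot(c - p, d) / dot(d, d);          // closest point on the segment to the sphere centre
    if (t < 0.0 || t > 1.0) return false;
    Vec3 diff = (p + d * t) - c;
    return dot(diff, diff) < r * r;
}

int main() {
    std::mt19937 rng(1);
    std::uniform_real_distribution<double> uni(-1.0, 1.0);
    Vec3 shadePoint{0, 0, 0};
    Vec3 lightCentre{0, 5, 0};                     // 2x2 area light lying in the plane y = 5
    Vec3 occluder{0.4, 2.5, 0};                    // sphere between the point and part of the light
    const int samples = 64;                        // same budget as the 8x8 grid, but random positions
    int visible = 0;
    for (int i = 0; i < samples; ++i) {
        Vec3 lightPos = lightCentre + Vec3{uni(rng), 0, uni(rng)};   // Monte Carlo sample on the light
        if (!blocked(shadePoint, lightPos, occluder, 0.8)) ++visible;
    }
    std::printf("soft shadow visibility: %.2f\n", double(visible) / samples);
}
```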


- Real reflection is added: due to the roughness of the material, the reflected ray can go in any direction within a cone, rather than being a perfect mirror reflection off a smooth surface.
- Monte Carlo sampling is used here as well.
- The reflected parts are blurred. The left ball is rougher than the right ball, so its reflections are blurrier.
- As a type of super-sampling, real reflection also performs part of the antialiasing.
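A possible cone-sampling sketch for such glossy reflection; the spherical-cap sampling formula is standard, while the roughness values and mirror direction are illustrative:

```cpp
#include <cmath>
#include <cstdio>
#include <random>

struct Vec3 { double x, y, z; };
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
Vec3 operator*(Vec3 v, double s) { return {v.x*s, v.y*s, v.z*s}; }
Vec3 operator+(Vec3 a, Vec3 b) { return {a.x+b.x, a.y+b.y, a.z+b.z}; }
Vec3 normalize(Vec3 v) { double l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

const double kPi = 3.14159265358979323846;

// Pick a random direction inside a cone of half-angle `roughness` around the mirror
// direction: rougher materials get a wider cone and therefore blurrier reflections.
Vec3 sampleGlossy(Vec3 mirrorDir, double roughness, std::mt19937& rng) {
    std::uniform_real_distribution<double> uni(0.0, 1.0);
    double cosTheta = 1.0 - uni(rng) * (1.0 - std::cos(roughness));   // uniform over the spherical cap
    double sinTheta = std::sqrt(1.0 - cosTheta * cosTheta);
    double phi = 2.0 * kPi * uni(rng);
    Vec3 w = normalize(mirrorDir);
    Vec3 helper = std::fabs(w.x) > 0.9 ? Vec3{0, 1, 0} : Vec3{1, 0, 0};
    Vec3 u = normalize(cross(helper, w));
    Vec3 v = cross(w, u);
    return u * (std::cos(phi) * sinTheta) + v * (std::sin(phi) * sinTheta) + w * cosTheta;
}

int main() {
    std::mt19937 rng(7);
    Vec3 mirror = normalize(Vec3{1, 1, 0});
    for (double roughness : {0.05, 0.3}) {               // smoother ball vs. rougher ball
        Vec3 d = sampleGlossy(mirror, roughness, rng);
        std::printf("roughness %.2f sample: (%.3f, %.3f, %.3f)\n", roughness, d.x, d.y, d.z);
    }
}
```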

- Textures and models are added.
- Bump mapping is added (see the wrinkles in the small result).
- Multi-thread acceleration.
- K-D tree acceleration.
The two acceleration techniques speed up the render process by more than 500%.
One interesting further improvement, which we call incremental rendering, came out of a discussion between me and my classmate Yuetao Xu.
The main idea of incremental rendering is to change the order in which rays are traced.
Take tree traversal as an analogy. Common ray tracing, including my method, uses depth-first traversal: a primary ray is traced; if there is one, its reflected secondary ray is traced; then the reflection of that reflection, and so on; then the refracted rays; only when one primary ray is completely traced do we move on to the next primary ray.
Incremental ray tracing uses a priority-first traversal instead: all primary rays are traced first so that every pixel has a color, and the secondary rays they generate are not traced immediately but are put into an ordered container keyed by their parent ray's result weight.
Then the secondary ray at the head of the container is traced and its contribution added to the result. If new secondary rays are generated, the traced ray's result weight is multiplied by its parent ray's result weight to give the result weight of the newly generated rays, and these new rays are dynamically inserted into the container by weight.
The new head of the container is traced next. Tracing continues until the number of traced rays reaches a threshold or the result is good enough.
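A toy sketch of this priority-first idea (not my actual renderer): rays are kept in a priority queue ordered by accumulated weight, and the "scene" is faked so that one pixel spawns strong secondary rays while the others spawn weak ones:

```cpp
#include <cstdio>
#include <queue>
#include <vector>

// Toy sketch of priority-first ("incremental") traversal. A ray's weight is the product of
// its ancestors' contribution factors; the queue always traces the ray that can still change
// the image the most, until the ray budget is exhausted.
struct QueuedRay {
    int pixel;
    int depth;
    double weight;                                    // accumulated contribution factor
    bool operator<(const QueuedRay& o) const { return weight < o.weight; }
};

int main() {
    const int width = 4, budget = 24;
    std::vector<double> image(width, 0.0);
    std::priority_queue<QueuedRay> queue;

    for (int p = 0; p < width; ++p)                   // trace every primary ray first
        queue.push({p, 0, 1.0});

    int traced = 0;
    while (!queue.empty() && traced < budget) {
        QueuedRay ray = queue.top(); queue.pop();
        ++traced;
        image[ray.pixel] += 0.5 * ray.weight;         // stand-in for the shading result of this ray

        // Stand-in for spawning secondary rays: pixel 0 behaves like a highly reflective
        // material (strong children), the others like dull ones (weak children).
        double childFactor = (ray.pixel == 0) ? 0.8 : 0.1;
        if (ray.depth < 6)
            queue.push({ray.pixel, ray.depth + 1, ray.weight * childFactor});
    }

    for (int p = 0; p < width; ++p)
        std::printf("pixel %d accumulated value %.3f\n", p, image[p]);
}
```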
The motivation for incremental rendering is our observation that different rays contribute vastly different amounts to the final result.
The common method usually records the trace depth of a ray and sets a maximum depth. A ray traveling through a material with high reflectivity, high refractivity and low absorption is traced the same number of times as a ray traveling through a material with low reflectivity, low refractivity and high absorption, yet their contributions to the result are clearly different.
Incremental ray tracing largely removes this problem and gives those "powerful" rays more chances to express themselves.
As a result, the final image will be exquisite in bright, complex areas such as a group of mirrors, and merely adequate in gloomy, simple areas such as a plane or an opaque object.
This online benchmark evaluates the performance of WebGL on the specific browser you are using.
Therefore it makes little sense to compare scores across different machines.
Comparing different browsers on exactly the same hardware and OS is the aim of this benchmark.
Color, Move, Light and Shadow, and Texture, the four most important features of GL, are evaluated.
I cooperated with Shengsi Tong, Xiaoning Liu and Jianyu Gu to develop the benchmark, and I was in charge of the Light & Shadow section, the integration of the four sections and the front-end GUI.
Color Section

Move Section

Light and Shadow Section

Texture Section

Why not have a try and see how well your browser supports WebGL?
Final Project of Game Programming
Lock (powered by Unity3D) tells the story of a young man who loses his memory, wakes up in a strange house, and gradually gets his memory back as the game progresses.
It is hard to classify the genre of Lock, because elements of different game types are combined: ACT, AVG, puzzle and RPG.
Generally, Lock is an adventure game like Tomb Raider (2013), but with classical puzzle elements like those in Machinarium (2009) added.
I led a group of three, cooperating with Jingjie Yang (scene designer) and Yunjia Ge (play designer), and acted as the programmer at the same time.
The main gameplay highlights of Lock include, but are not limited to:
- Mysterious adventure story
- Well-designed decryption clues
- Good balance between puzzle and plot
- Simple but interesting combat system
- Enhanced visual effects from self-developed shaders
- Beautiful environment
Final Project of Digital Image Processing(DIP)
This course project is a research project of the DIP lecturer, Kai Xiao. The primary goal of this assignment is to identify potential methods. The research objective is to precisely segment plant photos taken by a fixed HD camera.
An artificial neural network (ANN) algorithm was developed (a toy sketch follows at the end of this section):
- The ANN is trained by backpropagation (BPNN).
- Color space and position information are adopted as the feature vector.
- The Mean-Shift algorithm is used as preprocessing to blur the image and generate data blocks for later training and segmentation.
- Target image sample and segmentation result (6 parts):
- Trunk of trees
- Upper left small tree leaves
- Upper right narrow tree leaves
- Bottom left orchid leaves
- Dead leaves on the ground
- Small green plant on the ground
- The NN training algorithm is stable and segments with high accuracy. One further improvement would be to add texture information, such as co-occurrence matrices, to the current feature vector.
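For illustration only, here is a toy version of the approach: a single-hidden-layer network trained by backpropagation on a feature vector of color plus normalized pixel position. The training samples, labels and network size are synthetic placeholders, not the project's Mean-Shift data:

```cpp
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

// Tiny single-hidden-layer network trained with backpropagation, using colour plus
// normalised pixel position as the feature vector.
const int IN = 5, HID = 6;                       // features: r, g, b, x, y

double sigmoid(double v) { return 1.0 / (1.0 + std::exp(-v)); }

int main() {
    std::mt19937 rng(3);
    std::uniform_real_distribution<double> uni(-0.5, 0.5);
    std::vector<std::vector<double>> w1(HID, std::vector<double>(IN + 1));   // hidden weights + bias
    std::vector<double> w2(HID + 1);                                          // output weights + bias
    for (auto& row : w1) for (double& w : row) w = uni(rng);
    for (double& w : w2) w = uni(rng);

    // Synthetic samples: greenish pixels in the upper half are "leaf" (label 1),
    // brownish pixels in the lower half are "trunk/ground" (label 0).
    const double X[4][IN] = {{0.2, 0.8, 0.2, 0.3, 0.2}, {0.3, 0.7, 0.1, 0.7, 0.3},
                             {0.5, 0.3, 0.1, 0.4, 0.8}, {0.6, 0.4, 0.2, 0.8, 0.9}};
    const double Y[4] = {1, 1, 0, 0};

    const double lr = 0.5;
    for (int epoch = 0; epoch < 5000; ++epoch) {
        for (int s = 0; s < 4; ++s) {
            double h[HID + 1]; h[HID] = 1.0;                       // hidden activations + bias unit
            for (int j = 0; j < HID; ++j) {
                double sum = w1[j][IN];
                for (int i = 0; i < IN; ++i) sum += w1[j][i] * X[s][i];
                h[j] = sigmoid(sum);
            }
            double out = 0.0;
            for (int j = 0; j <= HID; ++j) out += w2[j] * h[j];
            out = sigmoid(out);

            double dOut = (out - Y[s]) * out * (1.0 - out);        // output delta
            for (int j = 0; j < HID; ++j) {
                double dHid = dOut * w2[j] * h[j] * (1.0 - h[j]);  // hidden delta (backpropagated)
                for (int i = 0; i < IN; ++i) w1[j][i] -= lr * dHid * X[s][i];
                w1[j][IN] -= lr * dHid;
            }
            for (int j = 0; j <= HID; ++j) w2[j] -= lr * dOut * h[j];
        }
    }

    // Classify one unseen "greenish, upper half" pixel.
    double probe[IN] = {0.25, 0.75, 0.15, 0.5, 0.25}, h[HID + 1]; h[HID] = 1.0;
    for (int j = 0; j < HID; ++j) {
        double sum = w1[j][IN];
        for (int i = 0; i < IN; ++i) sum += w1[j][i] * probe[i];
        h[j] = sigmoid(sum);
    }
    double out = 0.0;
    for (int j = 0; j <= HID; ++j) out += w2[j] * h[j];
    std::printf("P(leaf) = %.3f\n", sigmoid(out));
}
```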
Qing Ying is an interactive multimedia creation and sharing platform. The term "interactive multimedia" refers to the unique, animation-style form of the works on Qing Ying.
Qing Ying is an HTML5-based platform, so creating, watching, sharing and all the other functions can be done online with nothing but a browser that supports HTML5.
No Flash. No plug-ins.
The slogan of Qing Ying is "Simple, Share, Cooperate, Interact". Qing Ying aims to enable everyone to create a piece of multimedia work without any difficulty.
I cooperated with Yuetao Xu, Cheng Gu and Xiaoning Liu to design and develop Qing Ying. Qing Ying won 2nd Prize in the 5th Intel Cup National Collegiate Software Innovation Contest and also took part in the 6th National Undergraduate Innovation Program.
The two main functions of Qing Ying are watching (playing) and creating (editing):
Qing Ying Player
Qing Ying Advanced Editor
Download a (somewhat dated) demo video to get a direct feel for Qing Ying.
Qing Ying is an online platform with complete functionality, and an English user manual is available for download or online reading.
No flowery written introduction beats a five-minute try. So, having read all this text, go to Qing Ying and form your own first impression.
A more stable link is under preparation; I am sorry for any inconvenience in the meantime.
Source code is available here, or you can check it out with: svn checkout http://light-shadow.googlecode.com/svn/trunk/ light-shadow-read-only