3D Rendering to SVG
I’d been thinking about this for a little while. A rendering engine was an enticing project and I anticipated some interesting complexities. I had decided that I would use SVG as the medium. As with 2D drawing, HTML5 Canvas seems to get all the attention for 3D in the browser (via WebGL) but, in both cases, each technology has its pros and cons. Notable reasons for choosing SVG include:
- If you need to interact with the components, SVG elements natively support mouse events (like click and mouseover)
So, first off, here’s the result. I’ll explain how it works afterwards.
As mentioned above, there are a number of good reasons to use a vector drawing. A use case which immediately springs to mind is architectural drawings. The drawings need not be vastly complex, consisting largely of fairly regular shapes and textures. Also, a developer could add interactive elements to an application so that hovering over a wall or joist would show you the materials, etc. That would be much more difficult with WebGL or similar, as the hover events aren’t there to help.
How it works
The maths behind the engine is fairly simple but quite fun. First, a camera is created, which consists of a point in space and a rectangular plane at some point in front of it. The plane is, in effect, the lens of the camera and I refer to the point as the origin. Then, to plot a point the following algorithm is run:
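The camera described above can be sketched as a small data structure. This is a minimal illustration in TypeScript; the names and the choice of representing the lens by two opposite corners are my own assumptions, not the author’s actual code:

```typescript
// A point or vector in 3D space.
interface Vec3 { x: number; y: number; z: number; }

// The camera: a point in space (the origin) plus a rectangular
// lens plane some distance in front of it.
interface Camera {
  origin: Vec3;           // the point behind the lens
  lensTopLeft: Vec3;      // two opposite corners are enough to
  lensBottomRight: Vec3;  // describe a rectangular lens
}

const camera: Camera = {
  origin: { x: 0, y: 0, z: -10 },
  lensTopLeft: { x: -1, y: 1, z: -9 },     // lens sits 1 unit in front of the origin
  lensBottomRight: { x: 1, y: -1, z: -9 },
};
```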
- A vector is constructed from the point being plotted to the camera’s origin
- The point at which the vector intersects the lens is calculated
- The intercept point is converted into a 2D co-ordinate, relative to the lens
- That co-ordinate is scaled up to the screen and a point is plotted there
So, to draw a plane one merely needs to convert four points in space to be four co-ordinates on screen, then an SVG path can be used to join the dots.
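As a sketch of that idea, here is a simplified version of the projection where the camera origin sits at (0, 0, 0) and the lens is the plane z = d. With that special geometry the intersection has a closed form (similar triangles), so the general search described later isn’t needed; the names and constants here are illustrative only:

```typescript
// A 3D point.
type P3 = [number, number, number];

const d = 1;              // distance from the origin to the lens
const scale = 100;        // lens units → screen pixels
const cx = 150, cy = 150; // screen centre

// The ray from the origin through (x, y, z) crosses the plane
// z = d at (d·x/z, d·y/z); scale that up to screen co-ordinates.
function project([x, y, z]: P3): [number, number] {
  return [cx + scale * (d * x / z), cy - scale * (d * y / z)];
}

// Project each corner of a plane and join the dots with one SVG path.
function planeToPath(corners: P3[]): string {
  const pts = corners.map(project);
  return 'M ' + pts.map(([u, v]) => `${u.toFixed(1)} ${v.toFixed(1)}`).join(' L ') + ' Z';
}

// A unit square 5 units in front of the camera:
const path = planeToPath([[-1, 1, 5], [1, 1, 5], [1, -1, 5], [-1, -1, 5]]);
// path → "M 130.0 130.0 L 170.0 130.0 L 170.0 170.0 L 130.0 170.0 Z"
```

The resulting string can be set as the `d` attribute of an SVG `<path>` element.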
A bit more detail
The interesting part of this project is projecting the point to be plotted onto the lens. I had an idea of how this would work: I wanted to exploit the fact that the dot product of two vectors at a right angle is always 0. Moreover, the smaller the magnitude of the dot product, the closer the angle between the vectors is to a right angle.
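That fact is easy to demonstrate: for vectors of fixed length the dot product tracks the cosine of the angle between them, so it is 0 at exactly 90° and small in magnitude near it. A quick illustration:

```typescript
// Standard dot product of two same-length vectors.
function dot(a: number[], b: number[]): number {
  return a.reduce((sum, ai, i) => sum + ai * b[i], 0);
}

const right = dot([1, 0, 0], [0, 1, 0]);       // perpendicular → 0
const nearRight = dot([1, 0.1, 0], [0, 1, 0]); // nearly perpendicular → small (0.1)
```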
So, take any two points on the lens, La and Lb (which could be corners), and a vector, v, from the camera origin to the point being plotted; then there exists some λ which minimizes α:
Pictorially, that is:
Therefore, α is minimized as λv approaches the intersection between v and the lens plane. To find the value of λ I simply implemented a “split the difference” algorithm. That is, try λ equal to 0.5; is that better than λ equal to 1? Then try 0.25 and 0.75; which is best? Etc., etc.
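The “split the difference” search above can be sketched as repeated interval halving. This is my own rough reconstruction, assuming some cost function `alpha(λ)` (not necessarily the author’s exact one) that is smallest where λv meets the lens plane:

```typescript
// Repeatedly halve [lo, hi], keeping the half whose midpoint
// scores better, until the interval is tiny. Works when alpha
// has a single minimum on the interval.
function findLambda(alpha: (lambda: number) => number, steps = 40): number {
  let lo = 0, hi = 1;
  for (let i = 0; i < steps; i++) {
    const mid = (lo + hi) / 2;
    if (alpha((lo + mid) / 2) < alpha((mid + hi) / 2)) hi = mid;
    else lo = mid;
  }
  return (lo + hi) / 2;
}

// With a toy cost minimized at λ = 0.3, the search converges there:
const lambda = findLambda(l => Math.abs(l - 0.3));
```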
Once you have the vector λv it is quite easy to map the point onto the screen using a bit of trig and some vector maths. Then you’re there. If you want more details, it’s best to check out the code on my GitHub page.
The future of the project
This was never meant to be a production library, just a proof of concept. I think that a robust SVG-based rendering engine could have a number of sound use cases but it would take a lot more time than I have available to get to the point where this could be used in production. A number of challenges, just off the top of my head, include:
- textures – difficult to transform in 3D if you’re using SVG
- lighting – this could be implemented using dot products between the normal of the plane and a vector from the plane’s centre to a point of light
- z-index of polygons – i.e. if the example above were opaque, the closest planes should be drawn in front of the farthest ones.
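The lighting idea from the list above is essentially flat (Lambert) shading, and it can be sketched in a few lines. The names here are illustrative assumptions, not part of the existing engine:

```typescript
type V3 = [number, number, number];

const sub = (a: V3, b: V3): V3 => [a[0] - b[0], a[1] - b[1], a[2] - b[2]];
const dot3 = (a: V3, b: V3) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
const norm = (a: V3): V3 => {
  const len = Math.sqrt(dot3(a, a));
  return [a[0] / len, a[1] / len, a[2] / len];
};

// Brightness falls with the angle between the plane's normal and the
// direction from the plane's centre to the light; the clamp stops
// faces pointing away from the light receiving negative light.
function brightness(normal: V3, centre: V3, light: V3): number {
  const toLight = norm(sub(light, centre));
  return Math.max(0, dot3(norm(normal), toLight));
}

// A plane facing straight up with the light directly overhead → full brightness:
const b = brightness([0, 1, 0], [0, 0, 0], [0, 10, 0]);
// b → 1
```

The resulting value could be used to scale the fill colour of the corresponding SVG path.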
I’d love to know what other people think and if there is any work in this direction already…