A Qt Scenegraph

Posted by Gunnar Sletta on May 18, 2010 · 21 comments

Source code can be found here: http://qt.gitorious.org/qt-labs/scenegraph

A few months ago, I wrote a series of blogs about the graphics stack in Qt and how to make the most of it. In the results I posted there, we could often see that our OpenGL graphics system did not perform as well as one would expect from a hardware accelerated API. Looking at what games get out of the same hardware makes the gap even more apparent. That doesn’t mean it is impossible to get really smooth apps with our current graphics stack. There are enough really nice looking QML demos to prove that, but we believe we should be able to do something even better.

After many attempts at improving the OpenGL performance of the imperative drawing model we have today, we’ve decided to take a step back and see what can be achieved with a different model: a scene graph. A few scene graph systems already exist, and while we have looked at some of them, we have not based ourselves on any of them, because the problems they try to solve are different. For instance, it is rare to draw meshes larger than 50 vertices in a UI, while in games a number that low is almost unheard of. Another reason is that we want to try things out and learn what works and what doesn’t from the Qt perspective.

Before going into details of our scene graph, here is an example that shows some of what we can do with it:

Photos example, showing our scene graph running on a Nokia N900, shot with another N900. There are about 120 unique image files. It starts out in the Wii-inspired thumb-nail view, then we browse single images with a page turn effect. Finally we zoom in on an image and then return to the main view.

To get optimal performance from hardware rendering, we must minimize the number of state changes; that is, avoid changing textures, matrices, colours, render targets, shaders, vertex and index buffers, etc., unless really necessary. Big chunks of data should also be uploaded to video memory once and stay there for as long as the data are needed, rather than being sent from system memory each frame. To achieve this, we need to know as much as possible about what is going to be rendered.

Our scene graph is a tree composed of a small set of predefined node types. The most significant is the GeometryNode, which is used for all visible content. It contains a mesh and a material. The mesh is rendered to screen using the shaders and states defined by the material. The GeometryNode is, in theory, capable of representing any graphics primitive we have in QPainter.

In addition, we have the TransformNode, which transforms its sub-tree using a full 4×4 matrix, meaning it is 3D capable. Finally, we have the ClipNode, which clips its children to the clip it defines.

The tree is rendered using a Renderer subclass, which traverses the tree and renders it. Since the tree is composed of predefined types, the renderer can make full-scene optimizations to minimize state changes and make use of the Z buffer to avoid overdraw, etc. The renderer will typically not render the scene in a depth-first or breadth-first manner, but rather in whatever order it decides is most optimal. As this can vary from hardware to hardware, the renderer can be reimplemented to do things differently, as required on a per-case basis.

The filebrowser example from our source tree. In the example above, I end up scrolling through a directory with 600 files.

In the example above, we are using the default renderer implementation, which reorders the scene based on materials. All the background gradients will be drawn together, then all the text, etc. This minimizes the state changes, resulting in better performance. The default renderer will also use the Z-buffer, drawing all opaque objects front to back followed by all transparent objects back to front. This minimizes overdraw, especially when we have opaque pixels “on top”, and again helps to improve performance.

The graph is written to target OpenGL 2.0 or OpenGL ES 2.0 exclusively. We did this because OpenGL (ES) 2.0 is available pretty much everywhere and it provides the right set of features.

 

In addition to reordering and Z-buffer based overdraw reduction, we also have the benefit that geometry persists from frame to frame. Text is supported as an array of textured quads that are created once, then just drawn with each call. This compares favourably with the QPainter::drawText() function where text is laid out and converted to textured quads for every call.

Since we are targeting OpenGL (ES) 2.0 we can expose shaders as a core part of the API. This opens up a number of features that are difficult to make fast in our current QPainter stack.

The particles example, illustrating how it is possible to do a particle system that runs primarily on the GPU.

The metaballs example, illustrating how it’s possible to do custom fragment processing.

As we’re only interested in the graphics component, our example code implements animations and input in an ad-hoc manner, as these are not within our scope.

One might question where QPainter fits into this. The answer is that it doesn’t – not directly, anyway. There is an example, paint2D, which shows one way of integrating the two. Another way would be to paint directly into the context of the scene graph before or after the scene graph renders.

So if you find this interesting, feel free to check out the code and play around with it. It’s rather sparse on documentation for now, but we’ll improve on that as time passes.

Enjoy!

http://labs.qt.nokia.com/2010/05/18/a-qt-scenegraph/

Reposted from: https://www.cnblogs.com/cute/p/2199062.html