13 Dec 2023

Qt Widgets Rendering Pipeline

Qt Widgets is one of the GUI toolkits developed by The Qt Company. It follows traditional desktop software conventions, making it a powerful tool for creating desktop software with complex GUIs in C++.

To manipulate and display GUI objects on the screen, it relies on a graphics pipeline: a series of steps that transform the visual description of GUI objects into pixels on the screen.

I'll describe all the steps of the Qt Widgets graphics pipeline in this article, to give you a clear idea of how it works.

User Interface

In Qt Widgets, every user interface element is represented as an object deriving from the QWidget base class, which provides the properties and behavior shared by all UI objects, such as position, dimensions, event handling and painting. Widgets are arranged in a tree: every node may have several children and holds a pointer to its parent. The tree structure is convenient when traversing the widgets to paint them on the screen.

When creating a user interface, you can use the primitive controls provided by Qt Widgets directly, compose a new widget out of the primitive widgets, or create a fully custom control of your own.

Widgets specify how they react to events delivered by the event system. Suppose we create a button inside a window, and the user presses the left mouse button over it. First the widget receives a mouse press event, which sets an internal flag recording that the element is pressed. Then the widget schedules a repaint of itself, to change its visual representation to a sunken version of itself.
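As a sketch, a hypothetical custom button (not the actual Qt source) might handle this as follows:

```cpp
#include <QWidget>
#include <QMouseEvent>

// Hypothetical custom button: remember the pressed state on mouse press,
// then ask the event loop to repaint us with update().
class PushButton : public QWidget {
protected:
    void mousePressEvent(QMouseEvent *event) override {
        if (event->button() == Qt::LeftButton) {
            m_pressed = true;  // the paint event will draw a sunken look
            update();          // schedules an asynchronous repaint
        }
    }
    void mouseReleaseEvent(QMouseEvent *event) override {
        if (event->button() == Qt::LeftButton) {
            m_pressed = false;
            update();
        }
    }
private:
    bool m_pressed = false;
};
```

Note that update() doesn't paint immediately; it schedules a paint event, so several state changes can be collapsed into a single repaint.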

The full repaint of the UI is implemented using the painter's algorithm. The repaint manager traverses the widget tree depth-first from the root to the leaves, painting the container widgets first, then their children from back to front, so that front widgets are painted over back widgets where they overlap. The widgets don't handle the actual drawing themselves, though; they delegate that responsibility to the QStyle class.
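The traversal can be illustrated with a toy tree; the Widget and paint names here are hypothetical, not Qt's:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy model of the painter's algorithm: a container paints itself first,
// then its children in order, so later siblings end up on top.
struct Widget {
    std::string name;
    std::vector<Widget> children;
};

void paint(const Widget &w, std::vector<std::string> &order) {
    order.push_back(w.name);              // back-most: the container itself
    for (const Widget &child : w.children)
        paint(child, order);              // then the children, back to front
}
```

Painting a window containing a button and a label produces the order window, button, label; if the label overlapped the button, the label's pixels would win.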

Styling

One architectural aim of Qt Widgets is that widgets should be able to change their appearance, either to be in harmony with the look and feel of the platform they're running on, or to adopt a custom look defined by the application developers. To achieve this, when a widget processes the paint event it delegates the painting task to a QStyle object, or to the convenience class QStylePainter, which provides some syntactic sugar on top of QStyle. QPushButton, for instance, handles its paint event by filling a style option with its current state and asking the style to draw the control.
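A condensed sketch of that handler, simplified from what QPushButton does internally:

```cpp
#include <QPushButton>
#include <QStylePainter>
#include <QStyleOptionButton>
#include <QPaintEvent>

// Simplified version of QPushButton's paint handling: fill a style option
// with the widget's current state, then delegate the drawing to the style.
class Button : public QPushButton {
protected:
    void paintEvent(QPaintEvent *) override {
        QStylePainter painter(this);  // paints on this widget, using its style
        QStyleOptionButton option;
        initStyleOption(&option);     // copies state: text, icon, sunken, ...
        painter.drawControl(QStyle::CE_PushButton, option);
    }
};
```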

QStyle then transforms the logical description of the UI objects into the 2D primitives that compose them. Styles are customized by subclassing QStyle, so every custom style has its own derived class. To represent a button, for example, the style instantiates a QPainter and draws rectangles with strokes, borders and fills, text, and optionally an image. This leads us to the next step of the pipeline.
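For illustration, a minimal custom style could subclass QProxyStyle (a QStyle subclass designed to wrap and selectively override another style), paint the button bevel itself, and forward everything else to the base style. This is a sketch, not a production style:

```cpp
#include <QProxyStyle>
#include <QStyleOption>
#include <QPainter>

// Sketch of a custom style: buttons become flat rounded rectangles,
// every other element keeps the base style's appearance.
class FlatStyle : public QProxyStyle {
public:
    void drawControl(ControlElement element, const QStyleOption *option,
                     QPainter *painter, const QWidget *widget) const override {
        if (element == CE_PushButtonBevel) {
            painter->setRenderHint(QPainter::Antialiasing);
            painter->setPen(Qt::NoPen);
            painter->setBrush(option->palette.button());
            painter->drawRoundedRect(option->rect, 4, 4);
            return;  // bevel handled; the label is drawn by another element
        }
        QProxyStyle::drawControl(element, option, painter, widget);
    }
};
```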

Painting System

QPainter is a fully featured 2D rendering engine that can draw geometric shapes, text and images. It is far from unique in the software industry: other 2D rendering APIs include Windows' GDI and Direct2D, Apple's Quartz 2D, Google's Skia, Cairo, and the HTML5 Canvas element.

2D graphics displays provide the most widely used technology for creating user interfaces. Adopted in desktops, tablets and smartphones, they have mostly replaced command line interfaces, which today remain restricted to a technical audience. As a consequence, every GUI toolkit relies upon a 2D graphics library, using either a publicly available rendering library or building a custom one.

The primitives that can be drawn include rectangles, polygons, points and lines, with different fill patterns, strokes and colors. Text is also supported. Lines can vary their end caps and join styles. Object coordinates can be rotated, scaled and translated.
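A small example of the API, drawing into an off-screen image (the filename and sizes are arbitrary; any QPaintDevice would work as the target):

```cpp
#include <QGuiApplication>
#include <QImage>
#include <QPainter>
#include <QPen>

int main(int argc, char *argv[]) {
    QGuiApplication app(argc, argv);  // needed for the font infrastructure

    QImage image(200, 200, QImage::Format_ARGB32);
    image.fill(Qt::white);

    QPainter painter(&image);
    painter.setPen(QPen(Qt::black, 2));  // 2px stroke
    painter.setBrush(Qt::blue);          // fill color
    painter.drawRect(20, 20, 80, 50);    // filled, stroked rectangle
    painter.drawLine(0, 170, 200, 170);
    painter.translate(100, 100);         // coordinate transformations
    painter.rotate(45);
    painter.drawText(0, 0, "rotated text");
    painter.end();

    return image.save("primitives.png") ? 0 : 1;
}
```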

QPainter draws the 2D primitives according to the strategy defined by the rendering backend. Implemented as QPaintDevice/QPaintEngine pairs, the backends enable the GUI to be rendered using different APIs and onto different surfaces. The backends that come out of the box let you render to the screen, to SVG and PDF files, and even to printers. The backend used for drawing on the screen is currently the software rasterizer.
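The same QPainter calls can target a different backend just by changing the paint device. For example, QPdfWriter is a QPaintDevice whose paint engine emits PDF commands instead of pixels (the filename and coordinates here are arbitrary):

```cpp
#include <QGuiApplication>
#include <QPdfWriter>
#include <QPainter>

int main(int argc, char *argv[]) {
    QGuiApplication app(argc, argv);  // fonts require a QGuiApplication

    QPdfWriter pdf("report.pdf");     // the paint device selects the backend
    QPainter painter(&pdf);
    painter.drawText(100, 100, "Rendered by the PDF paint engine");
    painter.drawRect(100, 200, 2000, 1000);  // device units, high resolution
    painter.end();
    return 0;
}
```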

Rasterizer

Before Qt 4.0, QPainter used the native OS APIs, such as GDI on Windows, to convert strokes into pixels. However, rendering discrepancies between platforms and the slow progress of some native APIs gave birth to a new backend inspired by the Anti-Grain Geometry (AGG) library: the Qt software rasterizer. Since rendering could now happen with the same code on every platform, the feature set, quality and performance became consistent. The software rasterizer eventually became the default backend on Windows, macOS and X11.

The software rasterizer does the heavy lifting on the CPU and is written purely in C++. Well-known algorithms used by rasterizers include Bresenham's line drawing algorithm and the midpoint ellipse algorithm. Rasterizers are also responsible for drawing anti-aliased graphics, using techniques such as supersampling.
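As an illustration of the kind of work involved (not Qt's actual implementation), here is the classic integer-only Bresenham line, which steps one pixel at a time along the line while tracking an accumulated error term:

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

// Bresenham line rasterization: returns the pixels covering the line
// from (x0, y0) to (x1, y1), using only integer arithmetic.
std::vector<std::pair<int, int>> bresenham(int x0, int y0, int x1, int y1) {
    std::vector<std::pair<int, int>> pixels;
    int dx = std::abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -std::abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;  // accumulated error between the ideal line and the grid
    while (true) {
        pixels.push_back({x0, y0});
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  // step along x
        if (e2 <= dx) { err += dx; y0 += sy; }  // step along y
    }
    return pixels;
}
```

For example, bresenham(0, 0, 3, 1) produces the pixels (0,0), (1,0), (2,1), (3,1).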

During a short period of experimentation, QPainter could also draw every widget in an application using OpenGL, which allowed it to leverage the GPU for rendering. But enabling it did not play well with the widgets architecture and could even slow applications down. Widgets draw many different types of graphical elements one after the other, changing states rapidly. Since OpenGL excels at rendering lots of data of the same type but is slow when changing states, this created a performance penalty for Qt Widgets applications. The code was eventually removed, further consolidating the software rasterizer.

Screen Output

When drawing widgets to the screen, the operation has to complete as fast as possible; otherwise the monitor might display the scene while it is still being drawn. This issue is solved with a technique known as double buffering. Older toolkits didn't provide double buffering out of the box, so their GUIs could flicker while being drawn.

Every Qt widget belongs to a native window, represented by QWindow. QWindow is responsible for receiving system events and for storing the image used as a back buffer, wrapped in QBackingStore. During a repaint, widgets render their visuals into this image. When the repaint is finished, the image is flushed to the screen using a platform-specific API; on Windows, the flush generally calls GDI's hardware-accelerated BitBlt. The native graphics API then communicates with the graphics driver, which drives the monitor to display the image. Finally, the user sees the result.
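The idea can be sketched in plain C++, independent of Qt (the Surface and BackingStore names are hypothetical stand-ins): widgets paint into an off-screen buffer, and only the finished image is copied to the screen in a single step.

```cpp
#include <array>
#include <cassert>

// Minimal double-buffering model: a tiny 2x2 "framebuffer" per surface.
struct Surface {
    std::array<int, 4> pixels{};
};

// Stand-in for a backing store: owns the back buffer and flushes it to
// the screen in one atomic copy, so a half-drawn frame is never shown.
struct BackingStore {
    Surface back;                       // widgets paint here
    void flush(Surface &screen) {       // one-step copy to the screen
        screen = back;
    }
};
```

While a repaint is in progress only the back buffer changes; the screen keeps showing the previous complete frame until flush() runs.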

Conclusion

This article gave you an overview of how a complex and mature GUI library interacts with 2D graphics primitives to create the user interfaces that we use every day. Knowing how GUI libraries work will help you be more effective when writing your GUIs, and when developing custom components or libraries for your application.

Discuss on Hacker News and /r/cpp.