Why glxgears is slower with Kernel Modesetting (and why it doesn't matter)
[Mar. 13th, 2009|04:43 pm]
Will Woods, Fedora Testing Guy
or: "glxgears is not a 3d benchmark"
One interesting fact came out of yesterday's Intel KMS Test Day. Everyone noticed that glxgears is much slower under KMS/DRI2 than it was before (e.g. in Fedora 9 or 10). A typical result was ~1000FPS in F10, but only ~440FPS in Rawhide. So what's changed?
Here's the deal, as best as I understand it (thanks to krh/halfline/ajax for trying to explain it to me): glxgears renders an insanely simple scene - so simple that the actual 3D rendering time is basically zero. So the only thing glxgears really tests is the performance of glXSwapBuffers() - that is, how fast we can push render buffers into the card. This operation is slower with DRI2, but - roughly speaking - unless it were an order of magnitude slower (e.g. glxgears dropping from 1000 FPS to under 100 FPS) it wouldn't make any real difference.
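To make that concrete, here's a minimal sketch (untested, with error handling and vsync configuration omitted) of roughly what glxgears ends up measuring: a double-buffered GLX window, an essentially empty frame, and glXSwapBuffers() in a tight loop. None of this is code from glxgears itself; it's just an illustration of where the time actually goes.

/* swapbench.c - sketch of what a glxgears-style loop effectively measures.
 * Build (assumption): cc swapbench.c -o swapbench -lGL -lX11
 * Runs until killed (Ctrl-C). No error checking - this is a sketch. */
#include <stdio.h>
#include <sys/time.h>
#include <X11/Xlib.h>
#include <GL/gl.h>
#include <GL/glx.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, None };
    XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

    XSetWindowAttributes swa;
    swa.colormap = XCreateColormap(dpy, RootWindow(dpy, vi->screen),
                                   vi->visual, AllocNone);
    Window win = XCreateWindow(dpy, RootWindow(dpy, vi->screen),
                               0, 0, 300, 300, 0, vi->depth, InputOutput,
                               vi->visual, CWColormap, &swa);
    XMapWindow(dpy, win);

    GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
    glXMakeCurrent(dpy, win, ctx);

    /* "Render" an essentially empty frame, then swap - so the swap itself
     * is nearly all we're timing, just as with glxgears in practice. */
    struct timeval t0, t1;
    gettimeofday(&t0, NULL);
    int frames = 0;
    for (;;) {
        glClear(GL_COLOR_BUFFER_BIT);
        glXSwapBuffers(dpy, win);
        frames++;

        gettimeofday(&t1, NULL);
        double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
        if (dt >= 5.0) {
            printf("%d frames in %.1f seconds = %.1f FPS\n",
                   frames, dt, frames / dt);
            t0 = t1;
            frames = 0;
        }
    }
    return 0;
}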
So if you're going to be comparing "3D performance" - please, don't bother with glxgears. ajax has suggested a couple of things that might make better benchmarks:
- The mesa-demos package has a few useful things - teapot in particular is nicely simple.
- sierpinski3d and glblur from xscreensaver-gl-extras also work well.
- extremetuxracer is, in my opinion, the most fun way to benchmark 3D.
If you see major (100% or more) performance differences between KMS and non-KMS in those apps, it's probably worth investigating further.
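For apps that don't print a frame rate of their own, something like the following sketch can be dropped into a render loop to get comparable numbers from a KMS boot and a non-KMS boot. (fps_tick is a made-up helper name here, not part of any of the apps above; it just prints an FPS figure every five seconds, glxgears-style.)

#include <stdio.h>
#include <sys/time.h>

/* Call once per frame from the render loop. */
static void fps_tick(void)
{
    static struct timeval t0;
    static int frames = -1;
    struct timeval t1;

    if (frames < 0) {           /* first call: start the clock */
        gettimeofday(&t0, NULL);
        frames = 0;
    }
    frames++;

    gettimeofday(&t1, NULL);
    double dt = (t1.tv_sec - t0.tv_sec) + (t1.tv_usec - t0.tv_usec) / 1e6;
    if (dt >= 5.0) {
        printf("%d frames in %.1f seconds = %.1f FPS\n",
               frames, dt, frames / dt);
        t0 = t1;
        frames = 0;
    }
}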
Hrm... not the whole story
I'm a tad concerned that 3D performance is being disregarded. There are a handful of (perhaps poorly reported) performance bugs open on FDO that have been summarily dismissed by Intel devs.
I cannot play any modern (released in the last 10 years) game on DRI2. Some 60% of the CPU is consumed by in-kernel and in-Xorg processing of the OpenGL scene - most likely memory management overhead. And this is on a brand-spankin' new laptop with GM45 graphics. Something is very wrong, and it appears that no reports regarding this are being taken seriously. (Not saying that gaming is a serious concern.)
2009-03-16 07:42 pm (UTC)
Re: Hrm... not the whole story
We're not saying 3D performance is being disregarded. We're saying *glxgears* performance is being disregarded, because it means almost nothing. The alternate tests Will suggests are all 3D apps, intended to test 3D performance. So, what he's saying is that if you test with apps that give a useful indication of real-world performance, and you observe very bad performance, this will be considered important.
And what I'm saying is that at least two such reports exist and appear not *yet* to have been given any attention. Intel has a great track record so far, though. So I really am not that worried about it.
2009-03-16 07:59 pm (UTC)
Re: Hrm... not the whole story
Ah, OK. In that case I think it's just good old-fashioned maintainer overload. The Intel maintainers are also the Intel developers, there's a lot of work going into the driver this cycle, and there's rather a long bug list as well. Unfortunately it's not surprising that not all bug reports have been picked up yet :\
On my long to-do list is developing something like yellow rose into a proper benchmark... Yellow rose still fails as a benchmark, though, as it does no texturing...
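(For anyone wondering what "texturing" would involve: in fixed-function GL of this era it's roughly the following - a hypothetical sketch, not code from yellow rose, assuming a 64x64 RGBA pixel buffer is already available.)

#include <GL/gl.h>

/* Create and enable a small test texture; after this, the scene would
 * emit glTexCoord2f() along with each vertex to actually use it. */
static GLuint make_test_texture(const unsigned char *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 64, 64, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glEnable(GL_TEXTURE_2D);
    return tex;
}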
I still don't understand this FPS race: it means nothing if my system shows 1000 fps rendering Quake3 scenes or yellow roses,
since the human eye won't actually be able to see more than 30 frames per second; we watch movies at 25 fps and all of them look better than a 1000 fps game.
It still gives an indication of whether things are working or not. So a variation between 800 and 840 doesn't say much, but when the fps gets down to 100 you know something is really wrong. I still use glxgears for that purpose.