X Integration test suite
Peter Hutterer
peter.hutterer at who-t.net
Thu Sep 20 07:42:26 PDT 2012
On Thu, Sep 20, 2012 at 04:16:15PM +0200, Matt Dew wrote:
>
> On 08/30/2012 06:13 AM, Peter Hutterer wrote:
> >On Wed, Aug 29, 2012 at 04:22:33PM -0700, Chase Douglas wrote:
> >>On 08/29/2012 03:36 PM, Peter Hutterer wrote:
> >>>On Wed, Aug 29, 2012 at 01:14:35PM -0700, Chase Douglas wrote:
> >>>>On 08/28/2012 03:57 AM, Peter Hutterer wrote:
> >>>>>One of the things I've spent quite a bit of time on over the last few
> >>>>>weeks is a test suite. Chase tried to get some integration tests into
> >>>>>the X server repo a while ago, but I think a standalone repo is best
> >>>>>(for now anyway).
> >>>>>
> >>>>>I've pushed the current set of tests to
> >>>>>http://cgit.freedesktop.org/~whot/xorg-integration-tests/, with a
> >>>>>lengthier explanation here:
> >>>>>http://who-t.blogspot.com.au/2012/08/xorg-integration-test-suite.html
> >>>>>
> >>>>>Long story short, the xorg integration tests (XIT) are built on
> >>>>>googletest and xorg-gtest, i.e. written in C++. Most of them write out a
> >>>>>config, fire up a server, and then either check the log or query the
> >>>>>server for state. A few tests use evemu devices to send events.
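> >>>>>
> >>>>>For illustration, a minimal sketch of the "query the server for state"
> >>>>>pattern, assuming a server is already listening on $DISPLAY (test name
> >>>>>invented; the real fixtures write out their config and start their own
> >>>>>server via xorg-gtest instead):
> >>>>>
> >>>>>  #include <gtest/gtest.h>
> >>>>>  #include <X11/Xlib.h>
> >>>>>  #include <X11/extensions/XInput2.h>
> >>>>>
> >>>>>  TEST(ServerState, XI2Available) {
> >>>>>      /* connect to whatever server $DISPLAY points at */
> >>>>>      Display *dpy = XOpenDisplay(NULL);
> >>>>>      ASSERT_TRUE(dpy != NULL);
> >>>>>
> >>>>>      int opcode, event, error;
> >>>>>      ASSERT_TRUE(XQueryExtension(dpy, "XInputExtension",
> >>>>>                                  &opcode, &event, &error));
> >>>>>
> >>>>>      /* negotiate XI2 with the server */
> >>>>>      int major = 2, minor = 0;
> >>>>>      ASSERT_EQ(Success, XIQueryVersion(dpy, &major, &minor));
> >>>>>
> >>>>>      XCloseDisplay(dpy);
> >>>>>  }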
> >>>>>
> >>>>>Input testing is relatively simple with these tests since we can emulate
> >>>>>virtually anything, but I'm not sure yet whether such test cases can
> >>>>>scale to output tests as well, or whether output tests can even be fully
> >>>>>automatic without anyone staring at the screen. I know mesa already has
> >>>>>such tests.
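> >>>>>
> >>>>>For the curious: evemu sits on top of the kernel's uinput interface,
> >>>>>so "emulate virtually anything" boils down to something like the
> >>>>>sketch below (error handling omitted, device name invented):
> >>>>>
> >>>>>  #include <fcntl.h>
> >>>>>  #include <string.h>
> >>>>>  #include <unistd.h>
> >>>>>  #include <sys/ioctl.h>
> >>>>>  #include <linux/uinput.h>
> >>>>>
> >>>>>  int main(void)
> >>>>>  {
> >>>>>      int fd = open("/dev/uinput", O_WRONLY | O_NONBLOCK);
> >>>>>
> >>>>>      /* advertise a keyboard that has a single key */
> >>>>>      ioctl(fd, UI_SET_EVBIT, EV_KEY);
> >>>>>      ioctl(fd, UI_SET_KEYBIT, KEY_A);
> >>>>>
> >>>>>      struct uinput_user_dev dev;
> >>>>>      memset(&dev, 0, sizeof(dev));
> >>>>>      strncpy(dev.name, "xit virtual keyboard", UINPUT_MAX_NAME_SIZE);
> >>>>>      dev.id.bustype = BUS_VIRTUAL;
> >>>>>      write(fd, &dev, sizeof(dev));
> >>>>>      ioctl(fd, UI_DEV_CREATE);
> >>>>>
> >>>>>      sleep(1); /* give the server time to pick up the new device */
> >>>>>
> >>>>>      /* press and release KEY_A, each followed by a SYN_REPORT */
> >>>>>      struct input_event ev;
> >>>>>      memset(&ev, 0, sizeof(ev));
> >>>>>      for (int value = 1; value >= 0; value--) {
> >>>>>          ev.type = EV_KEY; ev.code = KEY_A; ev.value = value;
> >>>>>          write(fd, &ev, sizeof(ev));
> >>>>>          ev.type = EV_SYN; ev.code = SYN_REPORT; ev.value = 0;
> >>>>>          write(fd, &ev, sizeof(ev));
> >>>>>      }
> >>>>>
> >>>>>      ioctl(fd, UI_DEV_DESTROY);
> >>>>>      close(fd);
> >>>>>      return 0;
> >>>>>  }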
> >>>>>
> >>>>>Any feedback is appreciated; I intend to talk about this a bit more at
> >>>>>XDC, but in the meantime you can see if it is useful. Right now, the
> >>>>>bigger issues I'm facing are the build system, scalability if we end
> >>>>>up with a ton of tests, and the ifdef hell that has already started
> >>>>>with tests covering different X server versions, RHEL support, etc.
> >>>>>Any epiphanies on how to avoid a train wreck would be welcome.
> >>>>
> >>>>Overall, I'm quite pleased with how things have progressed :).
> >>>>
> >>>>As for the run time issue, I assume you are mostly hitting it during
> >>>>device tests. Could you find a way to set up all the input config
> >>>>blocks you need in the same context, and then run the tests
> >>>>against a single server?
> >>>
> >>>most of the tests (so far) require a specific InputDevice section that may
> >>>or may not be CorePointer, or require some xorg.conf option to toggle the
> >>>defaults. there are plenty of normal bug tests that don't need that, though
> >>>then we run the risk of having state-dependent tests instead of starting
> >>>with a clean slate for each of them.
> >>
> >>For those that require specific and different InputDevice sections,
> >>you could use MatchProductName (or whatever it is) on the name of
> >>the device, and use different named devices for each test.
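> >>
> >>Something like this, say (the directive is MatchProduct, and it lives
> >>in an InputClass section; the device name and option here are just
> >>placeholders):
> >>
> >>  Section "InputClass"
> >>      Identifier "per-test options for device 1"
> >>      MatchProduct "xit virtual keyboard"
> >>      Option "GrabDevice" "on"
> >>  EndSection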
> >>
> >>Normal tests should be run on the same server instance. If there is
> >>any state dependency, that's also the sign of a bug. In the end,
> >>we'd want to be running all of the tests in a sequential order, and
> >>then in a random order. The sequential order will be a stable test
> >>suite; the random order will give us clues whether some server state
> >>gets messed up as we go.
> >
> >the problem with random orders is that they're hard to reproduce. It
> >doesn't help me when a random-order test fails if I can't run the tests in
> >exactly the same order to reproduce. so we'd have to store or print the seed
> >somewhere.
>
> Generate an output file (xml, json, whatever) that contains the seed
> and all other pertinent info, and that can be fed straight back in.
That's covered by googletest; look for the section "Shuffling the Tests" here:
http://code.google.com/p/googletest/wiki/AdvancedGuide
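
For reference, the relevant switches (binary name made up):

  # shuffle; googletest prints the seed it picked on stdout
  ./xorg-integration-tests --gtest_shuffle

  # re-run with a specific seed to reproduce an ordering
  ./xorg-integration-tests --gtest_shuffle --gtest_random_seed=12345

  # machine-readable result output, along the lines Matt suggested
  ./xorg-integration-tests --gtest_output=xml:results.xml
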
Cheers,
Peter