#title #ubuntu-testing: QA Community Roundtable

Meeting started by balloons at 16:10:27 UTC. The full logs are available at http://ubottu.com/meetingology/logs/ubuntu-testing/2012/ubuntu-testing.2012-06-21-16.10.log.html .

== Meeting summary ==

*UTAH Demo
*UTAH Demo
''LINK:'' http://www.screenleap.com/ubuntuqa  (balloons, 16:55:11)
''LINK:'' http://www.screenleap.com/ubuntuqa  (balloons, 16:59:23)
''LINK:'' http://packages.qa.dev.stgraber.org/qatracker/milestones/224/builds/16281/testcases/1306/results  (balloons, 17:12:23)
''LINK:'' https://launchpad.net/~ubuntu-testcase is the team  (balloons, 17:17:56)
''LINK:'' https://bugs.launchpad.net/ubuntu-qa-website/+bug/994816  (GridCube, 17:47:45)
''LINK:'' http://iso.qa.ubuntu.com/qatracker/milestones/219/builds/17586/testcases  (balloons, 17:52:03)

Meeting ended at 18:00:10 UTC.

== Votes ==

== Action items ==
* (none)

== People present (lines said) ==
* balloons (175)
* hggdh (53)
* superm1 (30)
* GridCube (15)
* Effenberg0x0 (13)
* tgm4883 (4)
* meetingology (3)
* cariboo907 (1)
* patdk-wk (1)
* mrand (1)
* highvoltage (1)

== Full Log ==
16:10:27 #startmeeting QA Community Roundtable
16:10:27 Meeting started Thu Jun 21 16:10:27 2012 UTC. The chair is balloons. Information about MeetBot at http://wiki.ubuntu.com/meetingology.
16:10:27
16:10:27 Available commands: #accept #accepted #action #agree #agreed #chair #commands #endmeeting #endvote #halp #help #idea #info #link #lurk #meetingname #meetingtopic #nick #progress #rejected #replay #restrictlogs #save #startmeeting #subtopic #topic #unchair #undo #unlurk #vote #voters #votesrequired
16:11:03 hm?
16:11:43 Julien and Phil from ubuntu both don't seem to be around ;-(
16:11:57 checking on kubuntu folks then we'll get started
16:12:44 balloons: you had scheduled 2 hours. Maybe they will still make it
16:13:56 ok, pm's sent to everyone ;-)
16:14:05 so, let's start
16:14:36 [TOPIC] UTAH Demo
16:14:48 huh, bot doesn't like topics
16:15:03 well, anyways, first item on the agenda was to demo UTAH
16:15:18 balloons, perhaps #topic
16:15:21 don't think we can get a demo together, but hggdh can explain a little bit about what it is
16:15:29 #topic UTAH Demo
16:15:40 i thought there was going to be a google hangout or something to show it off?
16:15:52 hmm, meetingology apparently is a liar when it comes to available commands
16:15:53 no, no demo right now, unfortunately
16:16:36 superm1, hggdh should be able to demonstrate how VMs are used in automated testing. but sadly, not a UTAH demo atm
16:16:39 so for now everyone stare at http://www.enchantedlearning.com/usa/states/utah/map.GIF and imagine as hggdh describes :)
16:16:51 :-)
16:17:01 I like the green. Very vivid.
16:17:32 UTAH (Ubuntu Testing Automation Harness) is the new baseset we are moving to for automated testing on Ubuntu
16:18:15 it will be (when fully implemented) flavour-agnostic, so it can be used for all *ubuntu
16:18:44 right now, beginning of development, it only supports VMs (via libvirt)
16:18:44 is the input it takes an ISO image?
16:19:03 yes, it uses ISO images to build the target test environment
16:19:23 specifically, "desktop" ISO images, not alternate
16:19:30 the installation is preseeded, so there is no input required.
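For readers unfamiliar with preseeding: a preseed file answers the installer's questions ahead of time, which is what makes the unattended installs described here (and the flavour-specific preseeds discussed just below) possible. A minimal, purely illustrative fragment follows; these debconf keys are common examples and not necessarily the ones UTAH ships.

{{{
# Hypothetical fragment of a debian-installer/ubiquity preseed file.
# Real UTAH preseeds, and flavour-specific ones, will differ.
d-i debian-installer/locale string en_US.UTF-8
d-i keyboard-configuration/layoutcode string us
d-i passwd/user-fullname string Ubuntu Tester
d-i passwd/username string ubuntu
d-i partman-auto/method string regular
ubiquity ubiquity/reboot boolean true
}}}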
16:19:51 not only desktop, but also alternate and server
16:19:58 okay i see
16:20:13 sorry, i can hold off questions until the end if you would like, i just realized this might be a bit rude to interject
16:20:16 (which is to say, ubiquity and debian-installer based installs)
16:20:38 superm1: no, please shoot the questions as we go
16:20:51 I do not mind :-)
16:21:07 Ok. will you have a place to put preseeds in this tool for the different flavours?
16:21:21 this, on the other hand, means that you might have to adjust the preseeds to your specific needs
16:21:25 yes, we will
16:21:29 i know at least mythbuntu and ubuntu studio do have custom preseeds
16:21:31 Ok
16:22:07 being able to adjust preseeds is pretty much a requirement if you want to automate tests
16:22:37 so the unfortunate flaw in doing it this way that comes to mind is that sometimes you will have bugs that are exposed only when the installation is run in interactive mode, or only in automatic mode, etc
16:23:12 so it can't be a complete replacement for lots of user testing, but instead a valuable supplement
16:23:19 UTAH was originally called 'uath'. But we found that (1) nobody could pronounce it, and (2) it means 'fear, horror' in ancient Gaelic
16:23:57 generically speaking, automated testing *cannot* replace actual hands-on
16:24:28 it is just a way of getting the bits that do not directly depend on user input tested, and out of the way
16:24:43 but we still need to have manual installs, and testing
16:25:01 hggdh: Just to be clear, can you mention some cases in which UTAH will raise a flag?
16:26:05 yes
16:27:35 let's say you are testing upgrades from desktop -- it is easily automated: having an existing (say) Precise install, you run 'sudo do-release-upgrade -d', and use pexpect to drive the answers
16:28:08 in this case, we are looking for failures to upgrade -- missing pre-reqs, new package version fails to install, etc
16:28:52 of course, we cannot check for positioning of windows, text visibility in windows, etc. But we will get the "backoffice" errors
16:29:55 or you are testing a specific package with the testset provided by it (say, mysql, or even coreutils). So you install the image at the version of Ubuntu you want, and run these tests
16:30:30 (for coreutils, you need to _build_ the package again; coreutils tests are intermixed with the build process)
16:30:55 but you will get errors because of a change in libraries
16:31:34 what kind of failures get raised in the upgrade testing? will bugs get filed?
16:31:39 or you want to check on ecryptfs -- we have a set of tests for it, and they are fully automated
16:32:18 no bugs are opened automagically. We still need to look at the failures and identify the cause
16:32:55 hggdh: got it. Some debugging skills are needed by a human tester.
16:33:13 the point here is to weed out false positives -- errors caused by test code, not by what is actually being tested
16:33:23 Effenberg0x0: always
16:33:59 when you are testing you have to look for real and false positives and negatives
16:34:24 usually, we assume the negatives (which is to say, the expected results) are correct
16:35:07 but, every so often, you should look at your "correct" results, and verify they are indeed correct
16:35:12 there will be some sort of notification mechanism for those flavors interested when things fail and need some further intervention?
16:36:38 UTAH has a provision to alert users. You can set it and use it. In our case, most of the UTAH tests will be run via Jenkins (http://jenkins-ci.org/), so we use Jenkins to do the alerting
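To make the upgrade example above concrete, here is a minimal sketch (not UTAH's actual code) of driving 'sudo do-release-upgrade -d' with pexpect. It assumes passwordless sudo (or running as root), and the prompt patterns are illustrative only; the real upgrade prompts vary by release.

{{{
#!/usr/bin/env python
# Minimal sketch of a pexpect-driven release upgrade, as described above.
# Assumptions: passwordless sudo; the prompt regexes are illustrative.
import sys
import pexpect

child = pexpect.spawn('sudo do-release-upgrade -d', timeout=7200)
child.logfile_read = sys.stdout.buffer  # keep a transcript of the upgrade output (Python 3)

while True:
    index = child.expect([
        r'Do you want to start the upgrade\?',  # assumed confirmation prompt
        r'Continue \[yN\]',                     # assumed generic continue prompt
        pexpect.EOF,                            # upgrade finished or aborted
        pexpect.TIMEOUT,                        # nothing matched within the timeout
    ])
    if index in (0, 1):
        child.sendline('y')
    else:
        break

child.close()
print('do-release-upgrade exit status:', child.exitstatus)
}}}

In a run like the ones hggdh describes, a failure shows up as a nonzero exit status or errors in the captured transcript, which a human then triages.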
16:37:11 another point we all should be careful about is destructive/non-destructive tests
16:37:35 I personally define a destructive test as a test that unilaterally changes your system configuration
16:38:07 look at 'change your system configuration' as 'can destroy your system'
16:38:44 it does not matter if it actually completely borks, but it might -- for example -- change the DNS resolution
16:39:29 will flavours be able to run utah in the canonical jenkins instance, or need to set up their own?
16:39:41 on the other hand, this Monday we had an ecryptfs test that actually forces a reboot of the system: a piece of the kernel goes haywire, and I/O to ecryptfs cannot be guaranteed to work until the reboot
16:40:46 superm1: I cannot really answer that, sorry. But... we -- right now -- do not have the resources to guarantee test space for the flavours
16:41:10 OK
16:41:35 Daviey warned that jenkins is a PITA to get set up
16:41:38 so, at least right now, please do not expect we will be able to run tests for other than Ubuntu
16:42:04 hggdh: The typical ISO test (install/partitioning, writing MBR/GRUB, installing/removing drivers/kernel modules like VGA) is handled by UTAH?
16:42:18 Effenberg0x0: yes indeed
16:42:36 superm1: not really a PITA, but it does have a learning curve
16:43:06 the easiest way of using UTAH would be to deploy VMs
16:43:32 can UTAH be run without jenkins then, on its own via VM deployments?
16:43:44 deploying bare-metal will, of course, require bare metal and additional packages/networks (so that MAAS, for example, can be deployed)
16:44:06 superm1: yes, it can, and this is how I was starting to test it
16:44:13 ah great
16:44:16 if I can interject, it would also be good to throw some links at you for this: https://launchpad.net/utah
16:44:24 all you need is libvirt and friends
16:44:42 balloons: thank you, did not really have time to prepare
16:44:56 so really need to just check it out and start playing to see where questions crop up
16:44:58 there's a wiki off that page with more info and a picture
16:45:46 (and I am, right now, testing a new set of kernel SRU tests, and the KVM I am using is driving me nuts, popping up on my monitor every time the machine thinks about doing something)
16:45:59 additionally, if you have further specific questions once you're playing with it, there's a mailing list set up for it now: ubuntu-utah-dev@lists.ubuntu.com
16:46:30 superm1: yes. Not only to learn, but to tell us where we, ah, did something wrong
16:46:57 great, thanks! just need to find some time to actually use it and experiment now :)
16:47:08 the code resides at...
16:47:52 it's linked above hggdh https://launchpad.net/utah, lp:utah
16:48:05 heh
16:48:22 hggdh: How do we prioritize/filter UTAH-testing bug reports, amidst the constant flow of new bug reports on LP (correct/incorrect ones) every day?
16:48:35 so please bzr branch, and play -- or install the daily utah package from the PPA
16:49:23 Effenberg0x0: what we are doing internally is to tag all bugs we find (via whatever test process, including manual testing) as a QA finding
16:49:52 fyi, there's a daily and stable ppa.. you can find them here: https://launchpad.net/~utah
16:50:00 link to stable: https://launchpad.net/~utah/+archive/stable
16:50:34 hggdh, OK
16:51:22 ok, any more questions on utah for hggdh?
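On the bug-tagging workflow hggdh mentions, one way to pull up everything carrying the QA tag is the Launchpad API via launchpadlib. A small sketch follows; the tag name 'qa-testing' is a placeholder, since the log only says bugs are tagged as a QA finding and does not give the exact tag.

{{{
#!/usr/bin/env python
# Sketch: list open Ubuntu bug tasks carrying a QA tag, via launchpadlib.
# 'qa-testing' is a placeholder tag name, not confirmed by this meeting.
from launchpadlib.launchpad import Launchpad

lp = Launchpad.login_anonymously('qa-bug-report', 'production')
ubuntu = lp.distributions['ubuntu']

for task in ubuntu.searchTasks(tags=['qa-testing'],
                               status=['New', 'Confirmed', 'Triaged']):
    print(task.bug.id, task.status, task.bug.title)
}}}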
16:53:15 alright, if not, next up is testcase management
16:53:22 which you get to listen to me for ;-)
16:53:46 I'll include visuals..
16:53:57 I'm wondering if it makes sense for me to type it or speak it
16:54:09 I'm concerned if I only have the video it won't make the log
16:54:37 I can screenshare and type to you all, or you can view a live feed of me speaking with my desktop :-)
16:55:09 let's try the type and view
16:55:11 http://www.screenleap.com/ubuntuqa
16:56:16 hopefully everyone can see my screen now?
16:56:27 OK here balloons
16:56:29 yup
16:56:42 OK here too
16:56:56 As you know, the qatracker just went through an update to bring testcase management
16:57:20 I'm going to show you how it works from a behind-the-scenes admin perspective
16:57:29 we'll use the staging site on dev.stgraber.org
16:58:00 so I have a couple products laid out here, mimicking the iso.qa.ubuntu.com tracker
16:58:29 testcases and pages look pretty similar, except for the addition of the extra links
16:59:02 clicking on a test (http://packages.qa.dev.stgraber.org/qatracker/milestones/224/builds/16281/testcases/1305/results) you can see there are some new links, and the testcase is now included inline
16:59:09 hi highvoltage!
16:59:21 hi balloons :)
16:59:23 http://www.screenleap.com/ubuntuqa
16:59:29 you can watch and follow along
16:59:56 ok, so that's nice having the testcase in there
17:00:13 you'll also notice the boilerplate text at the bottom for submitting your result and filing a bug
17:00:49 the bug reporting instructions are currently set at a product level
17:01:11 the testcases are defined and then grouped into testsuites which can then be used by any product
17:01:14 product meaning "ubuntu" "mythbuntu" etc?
17:01:21 superm1, yes
17:01:25 or, a package
17:01:35 like the calls for testing of the kernel
17:01:40 I'll show that quickly
17:02:17 you can see we've had a call for testing for the kernel, and it's had 2 versions
17:02:35 silly me called the 3.4 version precise1, because I was still learning the tool
17:02:57 so, there's one testcase in here, smoke test
17:03:10 similar boilerplate text
17:03:16 we're looking at (http://packages.qa.dev.stgraber.org/qatracker/milestones/223/builds/16283/testcases/1301/results)
17:03:45 filing a bug on this package has some additional instructions beyond the normal ones, and includes a link
17:03:56 that link is tagging the bug as well
17:04:31 if we look at installation instructions for the package, we get a howto install and howto uninstall
17:05:08 finally, the detailed information on the testcase has the history of what the testcase looked like
17:05:18 this is useful for when we update a case, but have old results
17:05:26 the results and case will match up
17:05:57 So you can see some of my earlier attempts and playing with formatting, etc
17:06:20 ok, so let's take a look at the admin side of things
17:07:02 As you can see, we allow you to define a template for new testcases
17:07:21 heh, I changed it a bit
17:07:30 does that mean we need to work through an admin like you to make our own test cases?
17:07:55 err, actually.. there
17:08:10 the template has gone through a couple revisions before going live
17:08:24 superm1, no it doesn't mean you'll need me to help you make testcases
17:08:29 we'll get to that in a min ;-)
17:09:13 ok, so anyways, that's the example template for a testcase as decided upon last dec by the qa community
17:09:31 it could change of course if we decided to change it.. and if so, we could update the template at that time
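Purely as an illustration (the actual template balloons is showing lives on the staging site and may look different), a templated testcase in the tracker could take roughly this shape, with actions and expected results marked up so the tracker can both render them and parse them by machine, as mentioned below:

{{{
<!-- Hypothetical example of a templated testcase; the real template decided
     by the QA community may use different markup. -->
<dl>
  <dt>Boot the image and select "Try Ubuntu"</dt>
    <dd>The live session starts and the desktop loads without error dialogs</dd>
  <dt>Launch the installer from the live session</dt>
    <dd>The installer starts and displays its welcome page</dd>
</dl>
If all actions produce the expected results listed, please submit a 'passed' result.
If an action fails, or produces an unexpected result, please submit a 'failed' result and file a bug.
}}}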
17:09:32 :-0
17:09:46 ok, so let's look at the testcases quickly
17:10:16 you can see my smoke test mostly follows the template -- barring that it has no expected results.
17:10:28 but on the admin side, you simply title it, and add the testcase
17:10:37 notice we do use some html / drupal markup
17:10:50 some is allowed and we use it to help display as well as for machine parsing
17:11:21 here's an example of an isotest testcase
17:12:00 this wiki page http://testcases.qa.ubuntu.com/Install/DesktopFree has been converted into the testcase you see
17:12:23 http://packages.qa.dev.stgraber.org/qatracker/milestones/224/builds/16281/testcases/1306/results
17:12:50 ok, so that's testcases more or less
17:12:57 now, we can organize testcases into testsuites
17:13:16 you see the 'free software only' testcase is part of the ubuntu desktop suite
17:13:32 well.. 'ubuntu desktop extras' actually :-)
17:13:43 there are 3 tests defined in here, you can see
17:13:51 I can set the order, weight and status as expected
17:14:25 now, let's go assign our testcases/testsuites to a product
17:15:02 you can see I've linked both the 'ubuntu desktop' and 'ubuntu desktop extras' testsuites to the ubuntu desktop amd64 product
17:15:09 for precise
17:15:47 now, some of this may be rather foreign to all of you, but there is a guide to this interface which I'll link to at the end
17:16:06 the new stuff I am showing you is not yet in that guide, since it's new (as in this-week new :-) )
17:16:39 now to answer superm1's question a little bit, there is a new role in the admin interface
17:17:03 we will have a testcase admin role which will allow you to define and manage the testcases
17:17:13 I've set up a team on lp to do this
17:17:24 anyone who is on the team will have access to manage testcases
17:17:56 https://launchpad.net/~ubuntu-testcase is the team
17:18:18 Phil graciously agreed to help trial out this new interface
17:18:39 you'll notice the team is restricted
17:18:54 my goal is not to prevent anyone from contributing, but some control is needed
17:19:16 for anyone running the tests, I would like them to be able to suggest new tests or improvements by filing a bug against ubuntu-qa
17:19:52 like any other community, make a few contributions and it will make sense to make you an admin should you wish to be
17:20:07 this is all VERY new, so I'd love input from all of you on how to shape this
17:20:27 suffice to say, the flavors should all have at least one person who has this access
17:20:34 that's what i was just going to say
17:20:41 i'm glad tgm4883 volunteered for mythbuntu
17:20:47 :-)
17:20:48 hahaha
17:21:50 I'd like everyone on the team to also make sure the tests stay maintained and don't fall into disuse or go out of date
17:22:05 so it's a bit of responsibility, but not too much I don't think
17:22:16 more or less, if you're active in testing, you would be doing / have done this anyway
17:22:42 so, questions?
17:23:01 Balloons: Doesn't it sort of overlap with UTAH?
17:23:16 besides the testcase management piece, the new qatracker should be able to support all of our testing needs (insomuch as we want to have a test and record results)
17:23:30 it's my hope we can consolidate what we're doing by using it
17:23:33 Effenberg0x0, how so?
17:23:52 This is intended for manual testing, UTAH is intended for automated testing
17:24:15 of course, over time we will continue to close the gap on that.. how/where the results get recorded, etc
17:24:49 I know, I mean some automated tests might kill the need for some manual test cases
17:24:59 How to keep things paired up
17:25:16 Effenberg0x0, my longer-term view is to automate away everything that makes sense
17:25:45 the pairing up, if you will, of what is automated vs manual happens via our communication
17:26:12 Ok, so the community looks at test cases and defines what's still valid or not, got it
17:26:14 this cycle, once we have migrated all of our pre-existing tests over, I would like to see us take a review of them
17:26:32 I took a work item personally to review the testcases for iso testing to ensure they make sense, are needed, etc
17:26:57 but yes, as a community, on an on-going basis, we should help frame what is needed for manual testing and where it makes sense for us to help
17:27:16 an example of this is the kernel tests. you notice the "smoke tests" are rather simple and light
17:27:32 the intense tests are being automated by hggdh and the canonical and kernel teams
17:27:57 the manual testing piece of that is getting it out to many more different workloads and bits of hardware
17:28:20 Ok
17:28:50 anything else?
17:29:08 If not, we'll migrate onto the next piece and I'll shut down the screenshare for the time being
17:30:05 ok, so we talked about the new tracker and testcases
17:30:27 briefly I wanted to mention milestones.. Kate asked me to remind the flavors that you can skip milestones
17:30:49 but if you do commit to doing a milestone, she asks you do it with full force.. aka, you see it through to the end ;-)
17:31:26 feel free to interrupt me at any time btw
17:31:40 next up we wanted to discuss collaboration
17:31:52 i haven't kept up with the thread about abolishing milestones, but what if that happens?
17:32:07 have you thought about how the tracker would scale for that?
17:32:11 superm1, yes, that thread has grown and been quite a discussion
17:32:58 from a tools point of view, our "milestones" can remain the same.. We can do nothing but dailies for iso testing if needed, and calls for testing are already of variable length
17:33:22 as far as what's going to happen, I am not completely sure as it's still being discussed
17:34:32 however, from an ubuntu perspective we're being asked (again, we were asked at UDS / before UDS) to test more regularly; meaning not just at milestones
17:35:19 from that perspective nothing in theory has changed.. but the cadence of exactly when we do this "regular" testing is being suggested to be 2 weeks
17:35:26 i see, okay
17:35:35 the schedule we adopted after UDS was pretty much the same.. about every 2 weeks
17:35:44 and i understand that flavours can follow the cadence they would like in this regime too
17:36:03 yes, of course flavors can choose to follow the cadence or not
17:36:20 I recommend adopting a cadence that works for the flavor
17:36:26 considering the devs, testers, etc
17:36:41 hence, adopting every ubuntu milestone doesn't always make sense..
17:36:52 (i could speak very very unofficially for xubuntu)
17:36:55 I liked the LTS-only approach some flavors have thought about as well
17:37:01 GridCube, hello :-)
17:37:05 :)
17:38:02 on collaborating, part of us all getting together is to talk about needs we might have and how we can work together to benefit each other
17:38:25 maintaining testcases in a mutual fashion and sharing them across flavors as it makes sense is one such example
17:39:03 but I also think we can do things like collaborated calls for testing (like we have done with the kernel testing), or on specific packages that multiple flavors use, like firefox
17:39:29 whatever it is.. the floor is open to whomever has a need or idea for collaboration
17:40:22 o/
17:40:28 yes GridCube
17:40:39 brb, type away :-)
17:41:05 hello, i present myself, i'm a bug tester and support collaborator for xubuntu, i've been so for over a year now, never participated here tho.
17:42:22 i have proposed a small change to the qa tracker a while ago: that the test cases should have some sort of area where all the reported bugs for that particular case on previous days could be seen
17:42:54 wait, what
17:43:08 * tgm4883 scowls at superm1
17:43:15 ;)
17:43:27 that delay!
17:43:27 epic
17:43:37 GridCube, this exists: http://iso.qa.ubuntu.com/qatracker/reports/defects
17:44:46 additionally, when you look at a testcase, it has a list of previously reported bugs
17:44:52 but only for that build I believe
17:45:04 only for that build
17:45:05 regardless, it's worth filing a bug to discuss how it might look
17:45:12 I like the idea
17:45:26 the wishlist bug is already there
17:45:30 i'm trying to find it
17:45:40 GridCube, ahh, ok, point it out to me.. I'll subscribe
17:46:35 ok
17:47:19 give me a sec
17:47:29 ok, so anything else.. This last piece is general Q & A time. hggdh is still around (though dealing with his flailing computer), feel free to ask any more questions
17:47:45 https://bugs.launchpad.net/ubuntu-qa-website/+bug/994816
17:48:15 :)
17:48:26 it appears like that was implemented?
17:48:49 ahh, I think you mean https://bugs.launchpad.net/ubuntu-qa-website/+bug/994812
17:49:16 ahm, it was like "granted" but no one actually implemented it
17:49:25 I subbed.. I'll look into it.
17:49:25 balloons, yes those two go together :)
17:49:28 thanks for the suggestion
17:50:21 no problem, i just thought it would make reporting and testing easier, because people would know what to look at
17:50:35 I am here
17:51:00 ok, so if there are no more questions we can discuss the testcase management stuff quickly.
17:51:25 The old wiki needs converting, of which I have done a couple.. In addition to converting, I placed them into the new templated format
17:52:03 you can see everything in powerpc and amd64+mac has the new testcase format
17:52:03 http://iso.qa.ubuntu.com/qatracker/milestones/219/builds/17586/testcases
17:52:30 I take that back, heh, 'Live Session' doesn't
17:53:03 So I will be going through and doing the same with the other testcases used by the ubuntu ISOs
17:53:12 however, many of those testcases are used by the flavors as well
17:53:57 for instance, the xubuntu desktop i386 testcases
17:54:05 they use those same 4 tests
17:54:29 the only one not converted is the wubi (windows installer) testcase
17:55:03 convert that and you can convert that iso over
17:55:40 of course, the flavors can then pick and choose which tests to keep in common and which to write for themselves
17:55:58 you can pull any testcase you wish and include it with any other combination of testcases to go into a testsuite
17:56:21 the examples I gave that are live are actually 2 testsuites, so that even the testsuites can be shared across multiple ISOs
17:57:04 so for example, the xubuntu desktop i386 iso can use the 'ubuntu desktop' testsuite, which contains all 4 testcases it's using, barring the wubi testcase
17:57:34 write the wubi testcase and then add it to a testsuite and include it on the iso
17:57:40 I trust that all makes sense :-)
17:58:36 So, please ping me to get access to help out in this area. Your ISOs and testcases will remain usable as-is (you can see the legacy mode on the tracker), but can be converted now at any time
17:59:09 let me know who is interested in joining this new team and I'll help get them going on how to use the tool, etc
18:00:00 and with that I think we're done. ;-) Thanks to you all for coming out. I appreciate your time. And thanks for suggesting we meet during the cycle. I think it was helpful
18:00:10 #endmeeting

Generated by MeetBot 0.1.5 (http://wiki.ubuntu.com/meetingology)