## Template for LP Production Meeting logs. Just paste the xchat log below and the #format IRC line will take care of formatting it correctly
#format IRC
#startmeeting
Meeting started at 10:00. The chair is matsubara.
Commands Available: [TOPIC], [IDEA], [ACTION], [AGREED], [LINK], [VOTE]
Welcome to this week's Launchpad Production Meeting. For the next 45 minutes or so, we'll be coordinating the resolution of specific Launchpad bugs and issues.
[TOPIC] Roll Call
New Topic: Roll Call
ni!
* herb__ has quit (Client Quit)
me
me (allenap is sick)
(or "coo", if anyone knows about kin dza dza ;)
gary_poster, Chex, bigjools: hi
hello
sinzui, hi
me and hi :-)
me
me
apologies from Stuart and Ursula
me
hello
[TOPIC] Agenda
New Topic: Agenda
* Actions from last meeting
* Oops report & Critical Bugs & Broken scripts
* Operations report (mthaddon/Chex/spm/mbarnett)
* DBA report (stub)
* Proposed items
[TOPIC] * Actions from last meeting
New Topic: * Actions from last meeting
* matsubara to trawl logs related to high load on edge on 2009-09-09 ~1830UTC and ping Chex about it
* matsubara to email the devel list about the new ErrorReportingUtility method
* done
* matsubara to file a bug to have the HWSubmissionMissingFields oopses as informational only (note to self: see bug 438671 for more details)
* filed https://bugs.edge.launchpad.net/malone/+bug/446660
* matsubara to look in lp-production-configs for the new oops prefixes.
* all QA contacts to inform their teams about the new QA column and what they should do about it.
* Chex to email the list about the new QA column in https://wiki.canonical.com/InformationInfrastructure/OSA/LPIncidentLog
Launchpad bug 438671 in checkbox "HWSubmissionMissingFields OOPS on +hwdb/+submit" [Undecided,Confirmed] https://launchpad.net/bugs/438671
Launchpad bug 446660 in malone "HWSubmissionMissingFields exceptions should be updated to be informational only" [High,Triaged]
I still haven't checked the high load logs. Chex or mthaddon, did you notice high loads after 2009-09-09?
matsubara: we still have been seeing some high loads, yes
I did look at the new prefixes on lp-production-configs. I need to update oops-tools to recognize those
I also noticed that some oops prefixes will conflict with existing ones, so I need to sort that out with... losas I guess
[action] matsubara to file a bug on oops-tools to recognize new oops prefixes and sort out conflicting prefixes with losas
ACTION received: matsubara to file a bug on oops-tools to recognize new oops prefixes and sort out conflicting prefixes with losas
Chex, re: the high load, could you take on the task of analysing the logs? my idea was to correlate information from the app server logs with the apache logs and see if that could shed some light.
mthaddon emailed the list about the new QA column, so everyone, read it and spread the word to your teams, please.
Chex: yes sure, I can look at that.
matsubara: it has just been discussed in the TL call as well, flacoste will champion the process
matsubara: ^^ I mean..
matsubara: (about QA Info column on LP incident log)
Chex, cool, thanks a lot. ping me if you need any info on that
danilos, cool. thanks!
[action] Chex to check app server logs and apache logs to see if it can shed any light on the high load issue.
ACTION received: Chex to check app server logs and apache logs to see if it can shed any light on the high load issue.
[TOPIC] * Oops report & Critical Bugs & Broken scripts
New Topic: * Oops report & Critical Bugs & Broken scripts
we're seeing a bunch of DisconnectionErrors which are not informational only
which means the Retry mechanism is not enough for those cases.
matsubara: are these the ones on the xmlrpc server? or something else?
gary_poster, yes, most of them on the xmlrpc server but there are a few, like OOPS-1383I246, in login.launchpad.net
https://lp-oops.canonical.com/oops.py/?oopsid=1383I246
matsubara: right. I investigated and could not duplicate the ones in the xmlrpc server. Kicking the xmlrpc server made them go away. There's a bug number which I can get in a moment. After discussing with flacoste, I think the best we can hope for is to figure out a way to add more diagnostic information should the problem happen again
gary_poster, ok, I take it this is a foundations task then. let me know the bug number please (or I'll file a new one for the more-diagnostic-info-needed issue, if that's not what the bug you mentioned is about)
bug 450593 . Stuart has a follow up: check with losas if there was any unusual activity ATM
Launchpad bug 450593 in launchpad-foundations "Lots of DisconnectionErrors on xmlrpc server - staging" [Undecided,New] https://launchpad.net/bugs/450593
I think a comment saying that we should address it by adding diagnostic information in case there is a repeat would be sufficient. I'll do that.
thanks gary_poster
apart from that we have a bunch of oopses that will need fixing given the new zero oops policy. Ursula will keep an eye on those for now and let the team leads know which ones are happening more frequently
we had some script failures last week
the main one seems to be the branch-puller which was already discussed on the list
checkwatches failed on the 13th, but since no other email came out, I assume it was a blip. adeuring, can you confirm?
matsubara: erm, I have no idea...
and the product-release finder and update-cache failed to run on the 14th
matsubara: I'll ask Graham
sinzui, do you know what's up with the product release finder script?
who's owns the update-cache script?
s/'s//
thanks adeuring
I don't know; looking
[action] adeuring to check with gmb about checkwatches failure
ACTION received: adeuring to check with gmb about checkwatches failure
matsubara: No, but I think the issue is not that it failed, but that a long process prevented it from running
sinzui, right, that'd explain it. could you check that's the root cause and reply to the list?
matsubara: okay
maybe the update-cache failure happened for the same reason
matsubara: I don't see an update-cache script in the LP tree. (I do see variants like update-download-cache)
just a reminder to everyone, if a script fails and your team owns that script, please reply to the failure email saying that someone is taking a look at it.
gary_poster, all I see is: "The script 'update-cache' didn't run on 'loganberry' between 2009-10-14 04:00:08 and 2009-10-14 22:00:08 (last seen 2009-10-13 11:36:51.345188)"
not sure which script that one is monitoring.
for the critical bugs section, we have 4 bugs, 3 fix committed and 1 in progress
danilos, the one in progress is assigned to henning but he's on vacation
is it really critical?
matsubara: I'd have to check, sorry for not being on top of this
(I also looked for update-cache in lp-production-configs. not there either.)
gary_poster, I think it's cronscripts/update-pkgcache.py.
IIRC, the losas script monitoring tool uses the script name defined in LaunchpadCronScript
[action] danilos to check bug 438039, assess if it's really critical. if it is, land a fix, if it's not, update the importance
ACTION received: danilos to check bug 438039, assess if it's really critical. if it is, land a fix, if it's not, update the importance
Launchpad bug 438039 in rosetta "bzr branch import script oopses sometimes" [Critical,In progress] https://launchpad.net/bugs/438039
matsubara: oh ok, thanks. that script is either the one salgado was talking about that he owns, or something for soyuz, seems to me.
it's traditionally maintained by soyuz
bigjools: ok, thanks
but in the new world order it could be registry
bigjools, can you confirm that the update-cache failure described in the "Subject: Scripts failed to run: loganberry:productreleasefinder, loganberry:update-cache" email refers to update-pkgcache.py, and reply back to the email sent to the list?
ok, you just did :-)
it doesn't look like update-packagecache
errr
ah it is
bigjools, it's the only script that has the update-cache string in cronscripts/
sorry got confused by seeing productreleasefinder
[action] bigjools to investigate update-cache failure and reply back to the list
ACTION received: bigjools to investigate update-cache failure and reply back to the list
bigjools, you might want to coordinate with sinzui since he'll check the product release finder failure and suspects it might have failed because of a long running process
matsubara: is there an oops?
only an email that it did not run
bigjools, nope
and it was a one-off?
bigjools: it did not start, and that is 99% of the time the fault of a long running process
ok
* sinzui really does not think about the issue until it happens two days in a row
and me
sinzui, perhaps the script monitoring should have such a feature
but anyway, sorry for taking so long on this section
thanks everyone
[TOPIC] * Operations report (mthaddon/Chex/spm/mbarnett)
New Topic: * Operations report (mthaddon/Chex/spm/mbarnett)
hello everyone
hi
sorry, notes failure:
- LP Ship-it progress: ; LP shipit is live on the new servers ; Nigel Pugh is now in charge of approving CPs to those servers ; We are still working on the new front-ends for LP Login and LP itself
- Buildd-manager DB restart issue/bugs: Bugs 451351 & 451349 have been
Launchpad bug 451351 in soyuz "buildd-manager doesn't give us a good way of determining it's in a failed state" [High,Triaged] https://launchpad.net/bugs/451351
filed to address this issue, any movement to fix this problem?
- QA column in Incident Log: Tom sent an email to the LP list on Oct 12, has anyone reviewed the email and have comments/concerns about it?
Chex, are oops reports from those new servers going to be rsync'ed to devpad? are such oopses supposed to be included in LP oops summaries?
LP Incidents of note: ; Applied: CP 9660 to lpnet, CP 9679 to lpnet ; Small LP outage (8 mins): App servers (and librarians) didn't reconnect & had to be restarted after LP DBs were restarted: Bug filed: 451093
and that's our report for this week.
sorry for the troubles there
Chex: I am looking into 451351 but don't expect anything soon, it's a hard problem
* noodles775 has quit ("Leaving")
matsubara: I am not sure on the status of oops summaries on the new servers, I will check on that
Chex, cool, thanks.
bigjools: ok, thanks, just looking for status of progress.
Chex, danilos mentioned that the QA column thing was discussed today in the TL meeting and flacoste will champion the process.
* andrea-bs has quit (Remote closed the connection)
matsubara: ok, that is great to hear.
Chex, thanks for the report
let me move on as we are overdue
[TOPIC] * DBA report (stub)
New Topic: * DBA report (stub)
The new replica to become the master for the authentication service has been taken offline, as the hardware was showing signs of strain keeping up with Launchpad's write load. The hardware is being beefed up to cope.
The alternative is to just put the authdb replication set on this server and have the authentication service appservers connect to the main launchpad databases for the data they need to pull from the lpmain replication set.
Nothing else to report.
that came from Stuart. any questions about the DBA report?
ok, I'll take that as a no :-)
[TOPIC] * Proposed items
New Topic: * Proposed items
no new proposed items
Thank you all for attending this week's Launchpad Production Meeting. See https://dev.launchpad.net/MeetingAgenda for the logs.
sorry for overrunning
#endmeeting
Meeting finished at 10:51.