14:59:11 <knesenko> #startmeeting oVirt Infra
14:59:11 <ovirtbot> Meeting started Mon Nov 11 14:59:11 2013 UTC.  The chair is knesenko. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:59:11 <ovirtbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
14:59:21 <knesenko> #chair dcaroest eedri o
14:59:21 <ovirtbot> Current chairs: dcaroest eedri knesenko o
14:59:24 <knesenko> #chair dcaroest eedri obasan
14:59:24 <ovirtbot> Current chairs: dcaroest eedri knesenko o obasan
15:00:07 <knesenko> #unchair o
15:00:07 <ovirtbot> Current chairs: dcaroest eedri knesenko obasan
15:00:18 <knesenko> Rydekull: here ?
15:00:26 <knesenko> #topic Hosting
15:00:32 <knesenko> let's start
15:00:41 <knesenko> hello all !
15:00:48 <knesenko> I hope you are doing good
15:00:53 <knesenko> so ...
15:00:53 <dcaroest> hi! ovirt03 is still unreachable :(
15:00:57 <knesenko> dcaroest: :(
15:01:04 <knesenko> that's what I tried to ask
15:01:12 <knesenko> ))
15:01:24 <knesenko> so we are still blocked on rackspace migration
15:01:38 <eedri> knesenko, what's the status?
15:01:47 <eedri> knesenko, why do we keep getting problems with the ovirt03 server?
15:02:19 <eedri> knesenko, problems with the hardware there?
15:02:32 <knesenko> eedri: because there are some network issues there
15:03:08 <dcaroest> yep, something is messed up there
15:03:08 <knesenko> I tried to configure a bridge on it, and I was disconnected
15:03:18 <knesenko> and since then we can't connect to the server
15:03:30 <eedri> knesenko, didn't they move it to the same network as 1/2?
15:03:33 <knesenko> also there were some issues with the VPN connection
15:03:42 <knesenko> yes, they did ..
15:04:02 <knesenko> but something went wrong with the bridge creation and we lost connectivity
15:04:24 <knesenko> so dcaroest didn't manage to reboot the server from PM
15:04:30 <eedri> knesenko, what are they saying?
15:04:41 <knesenko> rackspace guys found the issue with PM , and they fixed that
15:04:59 <knesenko> and still we can't connect to ovirt3
15:05:02 <knesenko> right dcaroest ?
15:05:27 <dcaroest> they rebooted the machine, but after that we have not been able to connect via ssh or console or anything yet
15:05:41 <eedri> dcaroest, so they have an open ticket on it now?
15:06:05 <dcaroest> I'm in the middle of writing it
15:07:33 <knesenko> #info we need to push harder on the ovirt03 fix
15:07:50 <dcaroest> now they have an open ticket
15:07:52 <knesenko> #action dcaroest reply to rackspace ticket and ask them to fix the issue
15:07:57 <knesenko> dcaroest: thanks
15:08:22 <knesenko> obasan: anything new with monitoring ?
15:08:23 <eedri> knesenko, what's the next step once it's ready?
15:08:31 <obasan> knesenko, nope. had a busy week.
15:08:32 <orc_orc> I do not know the rackspace approach -- is there a way to get an 'out of band' console?
15:08:47 <knesenko> eedri: add it into the setup, add gluster storage
15:08:55 <knesenko> and start migrating VMs to it
15:09:02 <knesenko> obasan: ok thanks
15:09:30 <knesenko> orc_orc: like a power management console ?
15:09:48 <orc_orc> well -- like: virsh console acme
15:10:16 <orc_orc> so one could self-repair a bad set of network settings
15:10:42 <knesenko> #chair orc_orc
15:10:42 <ovirtbot> Current chairs: dcaroest eedri knesenko obasan orc_orc
15:10:53 <knesenko> orc_orc: we have something, but it doesn't work
15:11:00 <knesenko> rackspace should fix it
15:11:07 <tontsa> if you rent dedicated servers you should get IPMI or similar solution
15:11:17 <orc_orc> knesenko: hmmm ..
15:12:20 <knesenko> ok what next here ?
15:12:20 <dcaroest> orc_orc: they use DRAC for that, but we can't even reach the DRAC web
15:12:40 <orc_orc> dcaroest: perhaps ask for a proxy to it?
15:13:48 <knesenko> orc_orc: let's see what they have to say
15:13:54 <orc_orc> * nod *
15:14:11 <dcaroest> orc_orc: well... we have the VPN and that works (I can connect to ovirt02/01), but there's a little mess with the networks and we can't reach 03 nor its DRAC interface; they will have to sort that out anyhow
15:15:35 <knesenko> ok anything else on hosting ?
15:16:32 <knesenko> #topic Puppet and Foreman
15:16:43 <knesenko> dcaroest: the stage is yours
15:17:33 <dcaroest> unfortunately nothing new here :/, the r10k patch is still up in the air
15:17:40 <knesenko> ok
15:17:52 <knesenko> I noticed that we don't have epel repo class
15:17:57 <knesenko> Am I right ?
15:18:58 <dcaroest> let me see, but I believe you ;)
15:19:26 <knesenko> I couldn't find it
15:20:41 <dcaroest> I don't think there's any
15:21:00 <dcaroest> nope, there isn't
15:21:25 <knesenko> #action knesenko create epel repository puppet class
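(A minimal sketch of what such an epel repo class could look like, assuming a standard module layout; the mirrorlist and gpgkey values follow the usual EPEL 6 conventions and are placeholders, not the class that was actually committed:

    # modules/epel/manifests/init.pp -- hypothetical path
    class epel {
      yumrepo { 'epel':
        descr      => 'Extra Packages for Enterprise Linux 6 - $basearch',
        mirrorlist => 'https://mirrors.fedoraproject.org/metalink?repo=epel-6&arch=$basearch',
        enabled    => 1,
        gpgcheck   => 1,
        gpgkey     => 'https://dl.fedoraproject.org/pub/epel/RPM-GPG-KEY-EPEL-6',
      }
    }
)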
15:21:37 <knesenko> anything else here ?
15:21:39 <knesenko> eedri: ?
15:21:41 <knesenko> obasan: ?
15:21:46 <obasan> knesenko, not on my end
15:21:50 <eedri> hosting?
15:21:51 <orc_orc> ... I have been working on getting ovirt under nest on Centos 6, so IU can have local puppet and foreman under ovirt instances working and clocked by a kernel macking the kvm_intel nest module enabled
15:21:51 <ovirtbot> orc_orc: Error: ".." is not a valid command.
15:21:56 <orc_orc> ... I have been working on getting ovirt under nest on Centos 6, so IU can have local puppet and foreman under ovirt instances working and clocked by a kernel macking the kvm_intel nest module enabled
15:23:04 <obasan> orc_orc, please tell me if you have problems. I have experience with this
15:23:23 <orc_orc> obasan: thank you -- I shall
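(For reference, enabling nested KVM on an Intel EL6-era host comes down to a module parameter flip; a sketch, with the caveat that the exact sysfs output varies by kernel:

    # persist the setting across reboots
    echo 'options kvm_intel nested=1' > /etc/modprobe.d/kvm-nested.conf
    # reload the module (no VMs may be running while you do this)
    modprobe -r kvm_intel && modprobe kvm_intel
    # verify -- prints Y (or 1 on older kernels) when nesting is enabled
    cat /sys/module/kvm_intel/parameters/nested
)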
15:23:36 <knesenko> ok good
15:23:44 <knesenko> #topic Jenkins
15:23:49 <knesenko> hello eedri
15:23:50 <knesenko> :)
15:24:40 <eedri> knesenko, sorry, in parallel here
15:25:11 <dcaroest> I have a proposal: http://ci.openstack.org/jenkins_jobs.html
15:26:02 <obasan> dcaroest, this is a neat solution that we're already familiar with. it could be helpful if we scale our env
15:26:10 <dcaroest> It's something that we use at work, it lets you define jenkins jobs in yaml files (that can include one another)
15:26:39 <dcaroest> I changed a lot of jobs the other day to use the whitelists manually... I don't want to do that again ;)
15:28:32 <knesenko> dcaroest: agree
15:28:41 <knesenko> objections?
15:28:50 <knesenko> dcaroest: +1
15:29:00 <obasan> dcaroest, +1
15:29:39 <knesenko> #action create basic templates for jenkins jobs based on http://ci.openstack.org/jenkins_jobs.html
15:30:22 <eedri> +1
15:30:43 <eedri> will simplify our jobs management immensely
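(For context, a jenkins-job-builder definition is plain YAML; a hypothetical sketch -- the job name, node label, and build step are made up, not actual oVirt jobs:

    - job:
        name: ovirt_engine_unit_tests        # hypothetical job name
        node: fedora                         # slave label to run on
        scm:
          - git:
              url: git://gerrit.ovirt.org/ovirt-engine
              branches:
                - master
        triggers:
          - pollscm: '*/15 * * * *'
        builders:
          - shell: |
              # real build steps would go here
              mvn test

Running 'jenkins-jobs update <config-dir>' then pushes the whole set to the Jenkins master, so a mass edit like the whitelist change becomes a small diff plus a re-run.)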
15:32:36 <YamakasY> orc_orc: have you seen Barbapapa ?
15:32:53 <knesenko> #info knesenko is working on new upgrade job that will support parameters
15:33:14 <orc_orc> YamakasY: Barbapapa was mentioned before -- it is unfamiliar to me, but I bookmarked it
15:33:22 <knesenko> The code is ready, ewoud had some comments so I need to fix them ...
15:33:44 <orc_orc> it looked as though my grandchildren and I should watch Barbapapa together
15:34:57 <YamakasY> orc_orc: barbapapa is kewl!
15:35:02 <YamakasY> yesterday he was a boat
15:35:10 <eedri> knesenko, will it run by default on nightlies?
15:35:16 <eedri> knesenko, if it isn't given params?
15:35:22 <knesenko> eedri: yes
15:35:25 <eedri> knesenko, +1
15:35:46 <knesenko> actually the plan is that the publish job will trigger this job
15:36:07 <eedri> knesenko, +1 with one exception
15:36:07 <knesenko> eedri: but yes, there are default values
15:36:23 <eedri> knesenko, after the publish job is done, there is another script that runs on resources.ovirt.org
15:36:32 <eedri> knesenko, that takes around 10-15 minutes to recreate the repos
15:36:38 <knesenko> hm ...
15:36:40 <knesenko> ok ...
15:36:48 <eedri> knesenko, so we'll need to see how to make sure we're taking the latest rpms
15:37:02 <eedri> knesenko, trigger by URL might be an option
15:37:17 <eedri> knesenko, if we can monitor changes to the yum repo, e.g.
15:39:21 <knesenko> eedri: will think about it
15:39:36 <orc_orc> eedri: watch the timestamp on the repodata directory in question and it will tell you when there is a new transaction set
15:39:53 <eedri> orc_orc, yea, might be a good indication
15:41:18 <orc_orc> eedri: that is a pull (polling) method - a push method would be to have a local select on the directory and 'curl' a rebuild request out to the scheduler
15:42:00 <knesenko> orc_orc: I'd be glad if you could help me with that
15:42:22 <orc_orc> knesenko: * nod *  I am in channel all the time -- please ping me when you wish to work through it
15:42:30 <jonar> Hi
15:42:32 <knesenko> orc_orc: once I finish with the job, I'll ping you and we'll try to implement it
15:42:36 <orc_orc> knesenko: inotify can do this for you
15:42:55 <jonar> not sure if anyone I was talking to earlier is still here, regarding importing an image from a crashed ovirt
15:43:00 <knesenko> #action knesenko  ping orc_orc once I finish with upgrade_params job
15:43:18 <eedri> orc_orc, you mean inotify on the server and triggering the job from the resources.ovirt.org side
15:43:35 <orc_orc> eedri: yes -- I do that a lot as it puts less load on the system than a poll loop
15:44:04 <orc_orc> select is almost always a better solution than poll
15:44:17 <eedri> orc_orc, agree
15:44:41 <eedri> orc_orc, jenkins also has a way of triggering jobs from git/gerrit w/o polling, via hooks
15:45:24 <orc_orc> eedri: makes sense: 'hook' is nomenclature for a 'push'-type trigger
15:46:32 <eedri> orc_orc, yea.
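(A sketch of that push-style trigger: a small pyinotify watcher on resources.ovirt.org that hits the Jenkins remote-trigger URL once createrepo has written a fresh repomd.xml -- classic createrepo writes it last, so its appearance means the metadata is complete. The path, job name, and token below are hypothetical:

    import pyinotify
    from urllib.request import urlopen

    REPODATA = '/srv/resources/ovirt-nightly/repodata'  # hypothetical path
    # Jenkins' standard remote-trigger URL; job name and token are made up
    TRIGGER = 'http://jenkins.ovirt.org/job/upgrade_params/build?token=SECRET'

    class RepoChange(pyinotify.ProcessEvent):
        def process_default(self, event):
            # fire only when the repo metadata index itself is replaced
            if event.name == 'repomd.xml':
                urlopen(TRIGGER)

    wm = pyinotify.WatchManager()
    wm.add_watch(REPODATA, pyinotify.IN_CLOSE_WRITE | pyinotify.IN_MOVED_TO)
    pyinotify.Notifier(wm, RepoChange()).loop()
)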
15:46:34 <eedri> knesenko, ok, let's continue
15:47:44 <knesenko> eedri: anything else on jenkins ?
15:48:27 <knesenko> #topic Other issues
15:48:37 <knesenko> let's review some tickets?
15:49:42 <dcaroest> sure!
15:49:50 <eedri> knesenko, +1
15:50:08 <knesenko> https://fedorahosted.org/ovirt/report/1
15:52:01 <knesenko> dcaroest: can you take a look at this one - https://fedorahosted.org/ovirt/ticket/93
15:52:01 <knesenko> ?
15:52:26 <knesenko> eedri: still relevant ? - https://fedorahosted.org/ovirt/ticket/84
15:52:33 <daniell> will it be possible to host more than 2 datacenters with local storage on 3.3.1? 3.3 doesn't let you attach more than two hosts if you are using local storage DCs.
15:52:35 <dcaroest> knesenko: nope, I was not aware of that
15:52:40 * eedri looking
15:52:53 <knesenko> obasan: please take this one - https://fedorahosted.org/ovirt/ticket/83
15:53:02 <eedri> knesenko, i think it's relevant
15:53:19 <eedri> knesenko, since mvn doesn't ship with centos afaik, unless someone knows otherwise
15:53:28 <eedri> knesenko, at least it's not shipped with rhel
15:53:35 <knesenko> eedri: ok ... please assign it to someone
15:53:41 <knesenko> eedri: also this one - https://fedorahosted.org/ovirt/ticket/88
15:53:50 <knesenko> eedri: seems like we are missing an ubuntu slave ...
15:54:11 <knesenko> eedri: and I think it makes sense to install it only after the rackspace migration
15:54:14 <eedri> knesenko, yes - maybe we can reinstall f19-vm03
15:54:26 <eedri> knesenko, since it's been down for some time already, looks like we can handle the f19 load without it
15:54:43 <knesenko> eedri: +1 , please comment in the ticket
15:54:55 <knesenko> dcaroest: https://fedorahosted.org/ovirt/ticket/92 - how hard is that ?
15:54:58 <orc_orc> as to #84 maven -- I do not see it in EPEL either
15:55:22 <eedri> orc_orc, what i proposed is to add a link from the maven dir jenkins installs to PATH
15:55:27 <eedri> orc_orc, but it's quite ugly
15:55:55 <eedri> orc_orc, i'm trying to think of another option - maybe to wget the mvn tar.gz and deploy it
15:56:02 <orc_orc> ick
15:56:09 <eedri> orc_orc, or build mvn ourselves
15:56:12 <orc_orc> unversioned and un-reproducible
15:56:19 <dcaroest> knesenko: it can be tricky due to security issues; are the slaves 100% isolated?
15:56:39 <eedri> orc_orc, the problem arises when you need mvn from a 'shell cmd' job
15:56:40 <knesenko> dcaroest: yes ... NAT
15:56:48 <eedri> orc_orc, and not via standard maven jobs
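(The tarball route eedri mentions would look roughly like this; the version is a placeholder, and pinning it plus a checksum is what answers orc_orc's reproducibility objection:

    MVN_VER=3.1.1   # hypothetical pinned version
    wget http://archive.apache.org/dist/maven/maven-3/${MVN_VER}/binaries/apache-maven-${MVN_VER}-bin.tar.gz
    tar xzf apache-maven-${MVN_VER}-bin.tar.gz -C /opt
    ln -s /opt/apache-maven-${MVN_VER}/bin/mvn /usr/local/bin/mvn
    mvn --version   # now reachable from a plain 'shell cmd' job
)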
15:57:06 <knesenko> dcaroest: we can use the key as a parameter in foreman
15:57:22 <dcaroest> knesenko: the problem is that if we use the public puppet repo for 'internal' hosts, those hosts should be isolated from the 'internal' networks
15:57:44 <knesenko> dcaroest: they are isolated
15:57:55 <knesenko> dcaroest: they are in guest network ...
15:58:09 <knesenko> obasan: did you move the new slave you added to the guest network?
15:58:20 <dcaroest> knesenko: then it should be easy :)
15:58:36 <knesenko> dcaroest: can you take care of that ?
15:58:42 <obasan> knesenko, I did not add any slaves recently
15:58:47 <knesenko> orc_orc: want to take something ? :)
15:58:52 <dcaroest> reassigned
15:58:53 <knesenko> obasan: ok
15:59:19 <knesenko> orc_orc: btw, if you want permissions to machines etc., just send email to infra and will vote
15:59:20 <orc_orc> knesenko: I wanted to get my setup working first so I could start documenting puppet deployment recipes
15:59:28 <orc_orc> knesenko: will do
15:59:29 <knesenko> orc_orc: +1
15:59:33 <dcaroest> we have response for ovirt03, they will investigate
15:59:45 <orc_orc> but I really want much more to be packaged and in Fedora, and license verified
15:59:46 <knesenko> dcaroest: great :)
16:00:16 <orc_orc> (my personal goal set centers on this) ... or EPEL
16:00:34 <dcaroest> nice, I'd like to see it on fedora!
16:00:36 <knesenko> orc_orc: what I really want is to build all ovirt-related projects in the fedora koji
16:00:37 <orc_orc> knesenko: which bug did you have in mind?
16:00:39 <knesenko> build system
16:00:56 <knesenko> orc_orc: but its not related to the infra
16:00:57 <orc_orc> knesenko: a noble goal, but it needs to be 'free'
16:01:12 <knesenko> orc_orc: what do you mean by saying free?
16:01:13 <orc_orc> and some of ovirt is not under acceptable licenses, I think
16:01:38 <orc_orc> there was a special exception on some jar as I recall, recently mentioned
16:02:17 <knesenko> hmmm....that's interesting
16:02:24 <knesenko> eedri: do you know something about that ?
16:02:37 * orc_orc looks
16:02:41 <knesenko> eedri: that we have some license issues with ovirt ?
16:02:52 <eedri> knesenko, i'm not familiar with any license issues under ovirt
16:03:03 <eedri> knesenko, best to ask on arch/infra
16:03:04 <knesenko> eedri: me too
16:03:08 <eedri> knesenko, dneary will know
16:03:37 <eedri> orc_orc, can you send email on it to infra if you have the info?
16:03:45 <orc_orc> eedri: I shall
16:03:51 <eedri> orc_orc, +1
16:05:19 <eedri> i think knesenko got disconnected
16:05:45 <knesenko> ok guys we are out of time
16:06:05 <knesenko> orc_orc: i would like to take the pkging conversation offline
16:06:12 <knesenko> i am interested to hear what you have to say
16:06:15 <orc_orc> knesenko: noted
16:06:19 <knesenko> anything else ?
16:06:31 <knesenko> thanks everyone
16:06:42 <knesenko> please try to work on your tasks if you have time !
16:06:44 <knesenko> thanks !
16:06:49 <knesenko> #endmeeting