
Weekly Meeting 2017 01 12


Agenda

  • Weekly summary of development activity
  • Opens

## Minutes

#ciao-project: Weekly Meeting

Meeting started by kristenc at 17:01:10 UTC. The full logs are available at ciao-project/2017/ciao-project.2017-01-12-17.01.log.html .

Meeting summary

  • Roll Call (kristenc, 17:01:23)

  • Weekly update (kristenc, 17:05:33)

  • Opens (kristenc, 17:22:22)

    • rbradford wants to use github.com/pkg/errors (kristenc, 17:24:39)
    • AGREED: We will allow use of github.com/pkg/errors in ciao (kristenc, 17:33:33)
    • ACTION: rbradford to file a janitorial github issue to migrate ciao to use the new package (kristenc, 17:35:57)
    • AGREED: bare returns are still ok in new code (kristenc, 17:39:14)
    • mcastelino wonders whether we should make some of the implicit internal constants in controller into parameters (kristenc, 17:40:16)
    • ACTION: mcastelino to send a specific list of parameters he feels should be configurable (kristenc, 17:46:13)
    • obedmr has a deployment solution for setting up a local development environment (CIAO on top of Docker Containers) (kristenc, 17:50:08)
    • ACTION: obedmr and albertom to be on the agenda for next week with overview of proposed merge of dev env and production env. (kristenc, 18:08:46)

Meeting ended at 18:09:12 UTC.

Action Items

  • rbradford to file a janitorial github issue to migrate ciao to use the new package
  • mcastelino to send a specific list of parameters he feels should be configurable
  • obedmr and albertom to be on the agenda for next week with overview of proposed merge of dev env and production env.

Action Items, by person

  • albertom
    • obedmr and albertom to be on the agenda for next week with overview of proposed merge of dev env and production env.
  • mcastelino
    • mcastelino to send a specific list of parameters he feels should be configurable
  • obedmr
    • obedmr and albertom to be on the agenda for next week with overview of proposed merge of dev env and production env.
  • rbradford
    • rbradford to file a janitorial github issue to migrate ciao to use the new package
  • UNASSIGNED
    • (none)

People Present (lines said)

  • kristenc (92)
  • tcpepper (47)
  • markusry (44)
  • mcastelino (26)
  • rbradford (21)
  • albertom (21)
  • obedmr (17)
  • mrkz (12)
  • ciaomtgbot (3)
  • btwarden (1)
  • jvillalo (1)
  • sameo (1)
  • mrcastel (1)

Generated by [MeetBot](http://wiki.debian.org/MeetBot) 0.1.4

### Full IRC Log

17:01:10 <kristenc> #startmeeting Weekly Meeting
17:01:10 <ciaomtgbot> Meeting started Thu Jan 12 17:01:10 2017 UTC.  The chair is kristenc. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:01:10 <ciaomtgbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
17:01:10 <ciaomtgbot> The meeting name has been set to 'weekly_meeting'
17:01:23 <kristenc> #topic Roll Call
17:01:26 <kristenc> 0/
17:01:27 <rbradford> o
17:01:28 <btwarden> o/
17:01:34 <tcpepper> o/
17:01:35 <jvillalo> o
17:01:53 <markusry> o/
17:01:54 <mrkz> o./
17:02:45 <albertom> o/
17:03:24 <sameo> o/
17:05:07 <kristenc> today we'll be following the new agenda we talked about last week. Our first topic is the weekly overview of ciao development.
17:05:24 <kristenc> then we'll do opens since we didn't get any agenda items sent to the mailing list.
17:05:33 <kristenc> #topic Weekly update
17:05:56 <kristenc> We are actively planning for our next development sprint, which will start officially January 23rd. We have agreed that for 2017 (until it is proven to be a bad idea) we are going to move to having one 2-week planning sprint followed by a 6-week development sprint. For January, we need to take an extra week for the planning sprint due to holidays. Sprints will be 6 weeks long, and we'll have features we are working on developing during that sprint, but some features will obviously take longer than one sprint to complete, and that's ok.
17:06:11 <kristenc> btw - feel free to interject questions as they come up.
17:06:24 <kristenc> For our first sprint of 2017, we are going to be working on the following features:
17:06:36 <kristenc> Quotas and Limits - Rob will be driving this feature. We have not been prioritizing quotas and limits on resources, and we now have enough resources we are allocating that we should spend some time designing this and implementing it.
17:06:50 <kristenc> Evacuation - Mark will be driving this feature. We want to be able to evacuate a node so that an admin can perform upgrades on the host OS or whatever.
17:06:59 <kristenc> Ciao cluster persistent data - Tim will be driving this feature. We want to start moving our ciao data into our ceph cluster for reliability and also as a first step towards migration.
17:07:08 <kristenc> Custom Workloads - Kristen will be driving this feature. We want to be able to have public and private workloads in ciao and stop using canned test workloads.
17:07:20 <kristenc> Deployment - Alberto will be driving this feature. We want to allow an option for users to deploy parts of ciao via containers. He will also of course be maintaining the existing deployment scripts to track new development (custom workloads for example)
17:07:34 <kristenc> Configuration & Upgrade - Marcos is driving this feature. We want to continue our configuration development with the next phase, which is upgrading. He will also be maintaining the current config work to keep up with new development as required.
17:07:46 <kristenc> Per tenant images - Obed is driving this feature. We want to allow public and private images in ciao.
17:07:57 <kristenc> K8s undercloud - Manohar is driving this feature. We want to be able to use ciao as an undercloud for k8s clusters. This is still in the design phase
17:08:12 <kristenc> This list is not to say these are the only things we care about, just that this is what we’ve prioritized for the near term. One day maybe we’ll have other contributors with their own priorities. And it goes without saying that if something changes strategically with ciao, such as we get a customer or partner for example, we might have to adjust our priorities.
17:09:00 <kristenc> We are going to start using github projects to track feature development. This will make it easy to see which issues are related to a specific feature and will be less cumbersome than using the github labels like we were doing.
17:09:27 <kristenc> does anyone have any questions or comments about this/
17:09:29 <kristenc> ?
17:09:41 <mrkz> ack, no questions
17:09:50 <tcpepper> what's the timeline of this first 2017 sprint
17:10:43 <tcpepper> Jan 23 + 6weeks is?
17:10:51 <kristenc> tcpepper, starts Jan 23rd. then goes for 6 weeks. This sprint we're doing a 3 week planning followed by a 6 week development, but in the future we will do hopefully a 2 week planning + 6 week cycle.
17:11:19 <tcpepper> does that work out to March 3 as the end or 10?
17:12:14 <kristenc> tcpepper, ends the 3rd of March
17:12:32 <tcpepper> eg: if I'm brainstorming something clever, should I be ready to show a PoC Mar 3 or 10 for the start of the plan sprint?
17:12:36 <tcpepper> ok got it thanks
17:12:57 <mrcastel> o/ sorry I am late
17:13:22 <kristenc> mrcastel, the minutes will be posted, we are just covering the weekly development update.
17:14:25 <kristenc> ok. our development status this week.
17:14:35 <kristenc> Mark continues to work on re-vendoring our 3rd party packages. Good news, the gophercloud maintainer is back and they have resumed working on the feature request we asked for (the keystone CRUD). In addition to us being able to re-vendor when they fix the time bug, we can hopefully also pick up these identity apis soon and use them to interface programmatically with keystone rather than with the openstack cli. This will be useful for bat testing.
17:14:55 <kristenc> Most merges this week have been bug fixes and minor enhancements.
17:15:05 <kristenc> The BAT failures on the release cluster still have not been resolved.  I’ve also apparently run out of disk space on some piece of the release cluster and am having failures in other places as a result. So we haven’t had a release in a few days. I will work towards resolving this when I can.
17:15:27 <kristenc> I need to make an appt with btwarden to talk about migrating the release cluster this year (hopefully soon!)
17:16:01 <markusry> kristenc: Is there a bug for these BAT failures?
17:16:22 <kristenc> markusry, no - I wasn't able to even characterize them very well. It isn't always the same test.
17:16:39 <kristenc> and it only happens 10% of the time. Usually in storage, but not always.
17:16:45 <markusry> Okay.
17:16:56 <kristenc> so - I'm not sure if this is a real problem, or an issue with the release cluster.
17:17:19 <markusry> Well, rbradford is having a periodic failure with one of the storage BATS in singlevm
17:17:24 <markusry> I suspect that is a real bug
17:17:30 <rbradford> kristenc, i CCed you on the issue.
17:17:31 <markusry> but haven't investigated yet.
17:17:40 <markusry> I think it's always the same issue though
17:17:52 <rbradford> my new error reporting work my show it up
17:18:03 <rbradford> *might show it up in the logs.
17:18:04 <kristenc> I've seen failure in at least 2 separate bat tests.
17:18:06 <markusry> kristenc: If you need some help with the failures let me know
17:18:19 <kristenc> rbradford, cool - I will look at what you've done.
17:19:11 <tcpepper> if there's any shared docu (github issue?) on the failures and what diagnosis has been done thus far I can look also since initial triage was pointing at code I've deeply had my hands in during Nov. / Dec.
17:19:14 <kristenc> markusry, you are welcome to debug it whenever you want. I just run the bat test over and over again until I get a failure. I was working on capturing better logs, but got distracted.
17:19:45 <markusry> OKay, I'll take a look if I get bored with evacuation
17:19:47 <kristenc> tcpepper, I don't have anything yet. hopefully we'll get more info with these logs.
17:20:17 <mcastelino> kristenc, there were storage test case failures on travis too (but sporadic)
17:20:37 <kristenc> so perhaps it's a real bug then.
17:20:53 <markusry> I think there's definitely at least one real bug
17:21:44 <kristenc> well - I might have to resolve the disk space issue before anyone can keep debugging on release cluster.
17:22:18 <kristenc> ok, this is all I have for this first topic.
17:22:22 <kristenc> #topic Opens
17:23:29 <rbradford> i want to start using github.com/pkg/errors
17:23:29 <mcastelino> kristenc, one open on controller.. should we make some of the implicit internal constants parameters?
17:24:39 <kristenc> #info rbradford wants to use  github.com/pkg/errors
17:24:57 <kristenc> rbradford, what does this entail?
17:25:52 <kristenc> also - markusry had some set of things we should look at when assessing a third party package I thought. wondering if it's going to be safe to vendor.
17:26:15 <rbradford> kristenc, no, it's compatible with golang errors.
17:26:34 <markusry> It should be fine
17:26:37 <mcastelino> markusry, coming to vendoring.. should we also vendor test dependencies
17:26:53 <markusry> Yes we probably should
17:27:09 <tcpepper> rbradford: what benefits does it bring?
17:27:12 <kristenc> rbradford, what does the package provide us vs. "errors"?
17:27:27 <markusry> But let's wait for the new vendoring tool
17:28:49 <markusry> Which should be out any day now
17:29:41 <obedmr> I have a quick open: As a side project, I've been working on a deployment solution for setting up a local development environment (CIAO on top of Docker Containers) https://github.com/obedmr/docker-ciao . It's more stable now and more features are coming.  This solution is using official ClearLinux docker images (https://hub.docker.com/u/clearlinux/).  Feel free to play or comment on it.
17:29:56 * mrkz is using ^
17:30:21 <markusry> What are the advantages over singlevm?
17:30:40 <kristenc> can we talk about this after we're done talking about the new errors package?
17:30:43 <obedmr> vms are launched in your dev box and not in a VM
17:30:45 <kristenc> I don't want to get confused.
17:30:48 <mrkz> not constrained RAM I'd say for nodes
17:30:55 <markusry> Sure.
17:30:55 <mrkz> yes, we can follow-up later
17:31:08 <kristenc> we are still waiting for rbradford to tell us why we need this package.
17:31:16 <markusry> So a big yes for me for the errors package
17:31:23 <markusry> It's written by one of the go maintainers
17:31:25 <obedmr> agree
17:31:26 <rbradford> kristenc, it gives you the ability to wrap errors in a predictable way
17:31:28 <markusry> and is very widely used
17:32:02 <markusry> There's a nice talk by the author explaining the usage
17:32:04 <tcpepper> how about a problem statement for starters?
17:32:10 <rbradford> kristenc, so you can create a chain of errors, so you'd replace return err with return errors.Wrap(err, "thing went wrong")
17:32:10 <mcastelino> markusry, can the errors be serialized and send back over SSNTP?
17:32:19 <mcastelino> that would be really helpful to debug
17:32:24 <kristenc> ah - I like that!
17:32:29 <rbradford> tcpepper, so you don't get just "File not found" in your log
17:32:29 <markusry> GopherCon 2016: Dave Cheney - Dont Just Check Errors Handle ...
17:32:34 <kristenc> chain of errors would be super useful.
17:32:43 <markusry> They can be converted to strings
17:32:46 <tcpepper> ok that makes sense
17:33:05 <kristenc> sounds like a useful package. fine with me.
17:33:13 * tcpepper agrees
17:33:33 <kristenc> #agreed We will allow use of github.com/pkg/errors in ciao
17:33:58 <kristenc> rbradford, we probably need to just convert errors over time?
17:33:59 * tcpepper wonders how a list of errors gets you more to handling errors in code
17:34:05 <tcpepper> but I will watch the video
17:34:14 <rbradford> kristenc, i've got some code already using in my branch
17:34:23 <rbradford> kristenc, focussing on datastore for now
17:34:44 <rbradford> i have an editor search for err .?:=
17:34:52 <kristenc> ok, so a gradual migration from golang "errors" to this new errors package seems fine to me.
17:34:54 <rbradford> and i'm addressing places
17:34:55 <rbradford> right
17:35:04 <rbradford> gradual is best
17:35:27 <tcpepper> I think we should formalize that intent with a janitorial github issue
17:35:57 <kristenc> #action rbradford to file a janitorial github issue to migrate ciao to use the new package
17:36:16 <mcastelino> tcpepper, agree... it will be especially helpful in the case of libsnnet and maybe ssntp as these are libraries and the caller today gets no context for the error
17:36:27 <kristenc> and shall we enforce in our reviews that people only use the new package, or will we continue to allow developers to use "errors"?
17:36:51 <tcpepper> I'd make it a suggestion during review, not required
17:37:00 <rbradford> kristenc, i think in code reviews we should check that errors being returned are useful
17:37:07 <markusry> Certainly in new code
17:37:09 <rbradford> sometimes bare return of err is fine.
17:37:30 <markusry> and fmt.Errorf is also fine for the first error
17:37:33 <tcpepper> b/c if it's required on review of a sprinkled addition of code, the adjacent unmodified code either looks weirdly inconsistent or the patchset becomes much larger
17:37:46 <rbradford> there are some short helper functions in datastore/sqlitedb where you'd get duplication
17:37:47 <tcpepper> ie: do you require a mod to make a touched file completely consistent with the new way?
17:37:56 <rbradford> tcpepper, no
17:38:07 <rbradford> tcpepper, there are 160 bare returns in sqlite3db.go
17:38:07 <kristenc> definitely not
17:38:28 <kristenc> that is one thing I hated about kernel development, and I don't want to do it. hold code hostage to some other change.
17:38:41 <tcpepper> exactly
17:38:44 <tcpepper> it's counter productive
17:38:56 <tcpepper> that's why I say log it as janitorial work and keep moving
17:39:14 <kristenc> #agreed bare returns are still ok in new code
17:39:29 <tcpepper> relatively soon we'll have a minority of those and start thwacking them en masse
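
For reference, here is a minimal sketch of the wrapping pattern rbradford and markusry describe above, using github.com/pkg/errors. The file path, function name, and messages are hypothetical illustrations, not code from the ciao tree:

```go
package main

import (
	"fmt"
	"os"

	"github.com/pkg/errors"
)

// loadWorkload is a hypothetical helper: instead of a bare `return err`,
// it wraps the underlying error with context before propagating it.
func loadWorkload(path string) error {
	f, err := os.Open(path)
	if err != nil {
		return errors.Wrap(err, "loading workload definition")
	}
	defer f.Close()
	// ... parse the workload here ...
	return nil
}

func main() {
	if err := loadWorkload("/etc/ciao/workload.yaml"); err != nil {
		// %v prints the whole chain as one string, so it can be logged
		// or converted to text for transport.
		fmt.Printf("%v\n", err)
		// %+v additionally prints the stack trace recorded by Wrap.
		fmt.Printf("%+v\n", err)
		// errors.Cause unwraps back to the original os.Open error.
		fmt.Printf("cause: %v\n", errors.Cause(err))
	}
}
```

As markusry notes, a wrapped error is still an ordinary error value, so turning it into a string for something like SSNTP is just err.Error() or fmt formatting.
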
17:40:16 <kristenc> #info mcastelino wonders whether we should make some of the implicit internal constants in controller into parameters
17:40:24 <kristenc> mcastelino, which constants?
17:41:34 <mcastelino> kristenc, the choice of tenant subnet for example
17:42:15 <kristenc> mcastelino, it does seem to me a good idea to make that configurable. It will require some rework of the algorithm though, so that one specifically will not be trivial.
17:42:33 <mcastelino> kristenc, the controller today controls most/all of the parameters which are not explicitly specified in Ciao and I assume the number of such constants will increase over time
17:42:55 <kristenc> mcastelino, we did originally say, though, that we should make as few things configurable as possible.
17:42:56 <mcastelino> if you look at k8s the controller has 10s of config params
17:43:16 <kristenc> we don't want zillions of config  params.
17:43:20 <kristenc> philosophically.
17:43:21 <mcastelino> kristenc, agree... we can make them all optional, with smart defaults.. but should allow override
17:43:22 <tcpepper> what's the upside of exposing that to the user versus keeping it hidden?  ie: operator complexity is a concern to me.
17:43:54 <mcastelino> tcpepper, the downside is that in some cases the constants we have chosen may not work for a user's setup
17:44:00 <tcpepper> I don't see configurability in and of itself as an upside
17:44:16 <mcastelino> mrkz, pointed one of the cases out yesterday
17:44:27 <tcpepper> mcastelino: do you have examples?  are they ones a simple/minimal orchestrator would need to support?  Or could they be left to the full OpenStack?
17:44:47 * tcpepper 's internet was in and out yesterday..didn't see
17:44:54 <mcastelino> tcpepper, let me send out a note of the things that matter for networking at least
17:45:15 <kristenc> i think the networking ones are going to be tough to keep constant. people have such variability in their infrastructure setup.
17:45:59 <tcpepper> including tenant subnet?
17:46:13 <kristenc> #action mcastelino to send a specific list of parameters he feels should be configurable
17:46:27 <kristenc> with justification.
17:46:29 <tcpepper> would somebody really say, "I'm looking at Ciao and like it, but I want my tenant Foo subnet to be X"
17:46:29 <mcastelino> tcpepper, just to scare you https://kubernetes.io/docs/admin/kube-apiserver/
17:46:56 * tcpepper doesn't see "subnet" string on that page
17:47:13 <tcpepper> anyway, I'll read mcastelino's email and keep an open mind
17:47:27 <tcpepper> despite mcastelino's scaring me ;)
17:47:50 <kristenc> yes - and remember his proposal is to have these be not required config params.
17:48:00 <kristenc> meaning - if you don't config them, you'll get a sane default.
17:48:14 <tcpepper> sure.  I just want a strong justification for moving past only having a sane default.
17:48:19 <kristenc> it does add code complexity though.
17:48:23 <kristenc> yes - me too.
17:48:38 <tcpepper> config options are backed by more dynamic code with higher maintenance cost and higher chance of breakage due to missed test variations
17:49:01 <tcpepper> we could always just move to Neutron and done
17:49:06 <tcpepper> ;P
17:49:31 * tcpepper ducks from the virtual eggs and tomatoes
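
To make the "optional with smart defaults" idea above concrete, here is a minimal sketch; the struct, field name, and subnet values are hypothetical and are not ciao's actual controller configuration:

```go
package main

import "fmt"

// ControllerConfig is a hypothetical configuration block. Fields left
// empty by the operator fall back to built-in defaults, so nothing is
// required but everything can be overridden.
type ControllerConfig struct {
	// TenantSubnet is the CIDR pool used to carve out per-tenant subnets.
	TenantSubnet string
}

// defaultTenantSubnet is a made-up value standing in for whatever
// internal constant the controller currently uses.
const defaultTenantSubnet = "172.16.0.0/12"

// withDefaults fills any unset field with its smart default.
func (c ControllerConfig) withDefaults() ControllerConfig {
	if c.TenantSubnet == "" {
		c.TenantSubnet = defaultTenantSubnet
	}
	return c
}

func main() {
	// Operator supplied no override: the sane default applies.
	cfg := ControllerConfig{}.withDefaults()
	fmt.Println(cfg.TenantSubnet) // 172.16.0.0/12

	// Operator override for a setup where the default clashes.
	custom := ControllerConfig{TenantSubnet: "10.200.0.0/16"}.withDefaults()
	fmt.Println(custom.TenantSubnet) // 10.200.0.0/16
}
```

Each parameter exposed this way would still need the justification kristenc asks for below, since every override adds code paths to test and maintain.
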
17:50:08 <kristenc> #info obedmr has a deployment solution for setting up a local development environment (CIAO on top of Docker Containers)
17:50:34 <kristenc> I believe markusry wanted to know if there was any advantage to this environment vs. ciao-down + singlevm.
17:50:50 <mcastelino> kristenc, can we have an overview presentation on this
17:51:05 <obedmr> sure,
17:51:14 <markusry> I'm also worried that we're duplicating effort
17:51:22 <kristenc> me too.
17:51:30 <markusry> But perhaps there's good reason to have both
17:51:31 <tcpepper> obedmr: is there automation that does the docker compose for these containers?  ie: do you use the ciao ansible scripts?
17:51:38 <mcastelino> also it adds needing docker to the host if I am not mistaken
17:51:50 <mcastelino> whereas ciao-down makes things fully host independent
17:52:04 <kristenc> obedmr, do you want to prepare something for next week? or do you have time to quickly discuss now?
17:52:05 <obedmr> VMs run in hosts. A little faster than singlevm, containers start faster than vms, you can boot more vms
17:52:06 <markusry> both ciao-down/singlevm and containers I mean
17:52:26 <markusry> obedmr: Is there anything you can't do?
17:52:51 <mcastelino> obedmr, also does it need any changes to host firewall rules.. especially on a locked down host
17:52:58 <obedmr> at this moment, just dealing with internet on booted vms, but that's only a matter of net config
17:53:07 <mcastelino> something like tcpepper's FC firewall config
17:53:39 <mrkz> the only downside of obed's solution is that it needs a bit more work on the networking part (e.g: I have not been able to get it to work on F24)
17:53:43 <obedmr> markusry: I've not had any issue with all current ciao capabilities
17:54:09 * tcpepper worries about the maintenance of the container images
17:54:25 <tcpepper> and don't VM's start in milliseconds?  Do we really need to save much there?
17:54:31 <markusry> Presumably these are something you create locally?
17:54:32 <kristenc> obedmr, I guess the question is - what is the advantage of your solution vs. what we have?
17:54:38 <obedmr> tcpepper: they're not too complex, they're basically shipping the ciao component binary
17:54:44 <obedmr> and dependencies
17:54:55 <kristenc> I don't think we need 2 development environments unless there's a good reason.
17:54:58 <markusry> Where do they come from
17:55:02 <markusry> ?
17:55:12 <albertom> kristenc: i think obed's work can be the foundation of the next phase of ciao-deployment
17:55:17 <obedmr> kristenc: for me, it has been faster, it also uses docker-compose which automates most of the work
17:55:19 <albertom> that is deploy ciao components in containers
17:55:29 <kristenc> albertom, that part makes sense - to use as POC for deployment.
17:55:40 <obedmr> for developers that use Docker for development, they will love it
17:56:03 <albertom> just that obed does it in a slightly different way than i was thinking of... but if we work together we can come up with a solution that works for both production and development environments
17:56:13 <obedmr> kristenc: and yes, that's the foundation next deployment phase
17:56:18 <tcpepper> obedmr: even if it's not complex, if it's different than singlevm setup.sh and different than the ansible scripts, it's yet another place we need to maintain for new feature enablement
17:56:37 <albertom> not yet another way.. but the next way
17:56:47 <obedmr> tcpepper: ^^
17:56:49 <tcpepper> so we'd not use ansible anymore?
17:56:56 <kristenc> sounds like you are proposing we do away with singlevm in the future?
17:56:58 <markusry> Or singleVM?
17:57:02 <albertom> maybe yes but for different tasks
17:57:06 <tcpepper> and do away with deploying Ciao bare metal
17:57:11 <mrkz> tcpepper: If I'm following albertom correctly, this would replace most of the ansible stuff/maintenance
17:57:19 <markusry> We did talk about this in the past but mcastelino I think said we'd always want single vm
17:57:20 <albertom> like create certificates, deploy containers to the bazillion hosts of the cluster, etc
17:57:29 <markusry> Not sure what the reason was though
17:57:45 <albertom> and replace the shipped binaries with compiled binaries when ciao_dev = True
17:57:49 <mrkz> also, I used obedmr's solution when testing kristenc's solution for https://github.com/01org/ciao/issues/712 when singlevm was still unable to test that
17:57:53 <mcastelino> my one other concern is networking will evolve/improve/change over time.. and keeping firewall rules open in a host based env will be hard
17:57:57 <albertom> so not getting rid of ansible but removing tasks from it
17:57:58 <mcastelino> so we will need ciao-down for sure
17:58:09 <mcastelino> we can do away with single VM scripts and setup
17:58:48 <markusry> And that's no bad thing
17:59:02 <markusry> The setup.sh needs quite a bit of work
17:59:13 <kristenc> no - I actually like the idea of using a developer environment that is closer to our production.
17:59:17 <mrkz> I'm with mcastelino on the networking part, it's still a bit clunky and would need a lot of work there
18:00:08 <mcastelino> mrkz, I disagree with the word "clunky"; we are trying to be as close to a real cluster from a networking point of view in our current setup on a single machine
18:00:22 <mrkz> but I definitely <3 to use docker-ciao solution
18:00:30 <mcastelino> and I want to keep that intact to a large degree to ensure we get right networking coverage
18:00:50 <mrkz> mcastelino: oh, I mean into docker-ciao current networking setup, as we've struggled a bit to get it to work in other distros
18:01:37 <mrkz> (e.g: docker-ciao doesn't work on my fedora box, but it does on my CLR box)
18:01:40 <kristenc> so tell me if I'm concluding correctly - do we agree we don't want to maintain more than one developer environment? and we also agree that if we want to use the containerized approach, it needs to address the networking requirements we have?
18:02:01 <mcastelino> kristenc, agree
18:02:02 <kristenc> we also agree we aren't attached to setup.sh or verify.sh
18:02:06 <markusry> Or have we agreed that we're just going to merge everything
18:02:08 <obedmr> yep
18:02:13 <albertom> yep
18:02:18 <kristenc> no - I don't want to just merge everything.
18:02:18 <markusry> containers on ciao-down would solve the network issues
18:02:22 <tcpepper> commonality is good, but...
18:02:31 <markusry> long term I mean
18:02:35 <tcpepper> I'm not sold on the "everything shall be a container" path
18:02:43 <kristenc> ah - you meant merge conceptually :), not as in PR merge.
18:02:48 <markusry> Right
18:03:06 <markusry> Well I guess we can't really decide here
18:03:12 <kristenc> albertom, obedmr you guys need to put together something that would help us see your vision :)
18:03:19 <obedmr> sure
18:03:20 <albertom> tcpepper: controller can be a container... and compute and network could be bare metal servers (prod environment)
18:03:28 <albertom> same scripts with ciao_dev = True
18:03:44 <albertom> would deploy network and compute containers with compiled binaries
18:03:49 <albertom> so you can test the latest changes
18:03:58 <albertom> (the container part is to have them all in one machine)
18:04:04 <mcastelino> albertom, last question.. can we test deployment scripts /packaging with this approach?
18:05:05 <rbradford> would having services in containers allow you to reprovision more easily? e.g. if your NN had just exploded could it be swapped for an existing CN?
18:05:27 <tcpepper> mcastelino: I think the proposition is that deploy becomes:  build new containers, docker pull that container.
18:05:35 <obedmr> rbradford: yeah, that's the beauty of containers
18:05:47 <albertom> but there's no need to build a container
18:05:57 <albertom> for every deployment
18:06:09 <albertom> a generic container would do, attaching the certs with -v
18:06:13 <tcpepper> mcastelino asked about testing
18:06:18 <albertom> and attaching the compiled binaries (when in dev mode) with -v as well
18:06:28 <tcpepper> in testing you'd surely build a test specific container for any changed component?
18:07:36 <albertom> tcpepper: no need to, as i said, if a component changes you could just use the same container and replace the binary with the newest changes
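
As a rough illustration of the generic-container idea albertom outlines (a prebuilt image reused as-is, with certs and a locally compiled binary attached via -v), here is a hedged sketch; the image name, paths, and the choice to drive docker from Go are all made up for illustration and are not how docker-ciao actually works:

```go
package main

import (
	"log"
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	// A locally compiled component binary (dev mode); the path is hypothetical.
	binary := filepath.Join(os.Getenv("GOPATH"), "bin", "ciao-controller")

	// Reuse one generic image and bind-mount the deployment-specific pieces
	// in with -v instead of building a new container image per change.
	cmd := exec.Command("docker", "run", "-d",
		"-v", "/etc/pki/ciao:/etc/pki/ciao:ro", // certificates
		"-v", binary+":/usr/local/bin/ciao-controller", // compiled binary
		"generic-ciao-image")
	if out, err := cmd.CombinedOutput(); err != nil {
		log.Fatalf("docker run failed: %v\n%s", err, out)
	}
}
```
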
18:07:56 <markusry> kristenc: Maybe we should end the meeting
18:08:01 <tcpepper> albertom: that's not a reliable, reproducible, stable approach imho
18:08:01 <markusry> So as not to lose the logs
18:08:03 <kristenc> I don't want to miss meeting bot's exit, so I want to see if I can conclude temporarily - sounds like what we need are more specifics on how the proposed solution would work before we can make any kind of decision.
18:08:08 <kristenc> markusry, yes.
18:08:14 <kristenc> so in conclusion:
18:08:46 <kristenc> #action obedmr and albertom to be on the agenda for next week with overview of proposed merge of dev env and production env.
18:08:52 <kristenc> sound good?
18:08:56 <obedmr> agree
18:08:59 <albertom> agree
18:09:04 <markusry> yep