Feature #11094
Deploy a VPN between the monitoring host and Lizard
100%
Description
As stated in Feature #10760, the best option to interconnect our different hosts is to use a VPN. Tinc seems to be the simplest option for that.
Subtasks
Related issues
Related to Tails - Feature #10760: Decide how to manage ecours and other systems with Puppet | Resolved | 2015-12-15 |
History
#1 Updated by bertagaz 2016-02-09 11:41:48
- Related to Feature #10760: Decide how to manage ecours and other systems with Puppet added
#2 Updated by bertagaz 2016-02-09 13:27:16
- Deliverable for changed from 269 to 268
#3 Updated by bertagaz 2016-02-11 17:29:59
- Assignee changed from bertagaz to intrigeri
- % Done changed from 0 to 50
- QA Check changed from Dev Needed to Ready for QA
- Feature Branch set to puppet-tails:feature/11094-tails-vpn
Pushed a branch that deploys a VPN using tinc. Sorry that’s not very agile, 1 commit for a bunch of files, but it’s not a gigantic manifest. :/
I’ve tested it locally several times, and it works fine. Hope you won’t find too many corner cases or whatever. Those were my first defines and exported resources.
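For readers who haven’t met this pattern: below is a minimal, hypothetical sketch of how a define plus exported resources can distribute tinc host files between nodes. The define name, template path and tag are illustrative, not necessarily what the branch does.

define tails::vpn::instance (
  $vpn_address,
  $vpn_subnet,
  $port = '655',
) {
  # Each node exports its own tinc host file to the puppetmaster...
  @@file { "/etc/tinc/${name}/hosts/${::hostname}":
    content => template('tails/vpn/host.erb'),
    tag     => "tinc_${name}",
  }

  # ...and collects the host files exported by every other node.
  File <<| tag == "tinc_${name}" |>>
}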
#4 Updated by intrigeri 2016-02-13 12:44:21
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Info Needed
For now I’ll stay at the big picture level, since there is an implementation decision that I don’t understand.
All this seems to be pretty generic code to manage tinc, which is great! I see nothing in there that is specific to Tails nor to our infra, so I wonder 1. why this lives in our tails module; 2. why we have to write our own stuff to start with. The commit message doesn’t give me any hint in this respect. I see that our friends at immerda maintain a tinc module, and there’s at least another one.
Do we have specific reasons to write & maintain our own code here?
I bet that the (tinc + Puppet) experience and skills you gained while writing this code put you in a pretty good position to evaluate other, pre-existing modules: you now know exactly what they need to do, and I trust you’ll quickly be able to find out whether they are adequate for our needs or not.
#5 Updated by bertagaz 2016-02-13 19:29:14
- Assignee changed from bertagaz to intrigeri
intrigeri wrote:
>
> All this seems to be pretty generic code to manage tinc, which is great! I see nothing in there that is specific to Tails nor to our infra, so I wonder
> 1. why this lives in our tails module;
We could move it to another module without problems.
> 2. why we have to write our own stuff to start with. The commit message doesn’t give me any hint in this respect.
Those are commit messages; I don’t believe this kind of decision should be encoded there. I should have mentioned it here instead.
> I see that our friends at immerda maintain a tinc module, and there’s at least another one.
I found the immerda one, but it has had no commits for more than a year, which suggests it’s not that actively maintained. It also seems to be targeted more at CentOS than Debian. I’m not so sure they still use it, so I thought something generic for our usage was enough.
Didn’t find the other one in my research, good catch. Its README says “This module has had some amount of testing with Debian 7 ‘Wheezy’ and Ubuntu 14.04 ‘Trusty’.”. Same story: no commits for about 10 months, and it doesn’t seem to have been updated for more recent versions of Debian/tinc. For example, my code takes care of using the backports version if needed.
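To illustrate the backports handling mentioned above, one possible shape is to pass apt a target release via the package resource’s install_options. This is only a sketch; the release test and resource layout are assumptions, not necessarily what the branch does:

# On releases whose packaged tinc is too old, install it from
# backports; otherwise install the regular package.
if $::lsbdistcodename == 'wheezy' {
  package { 'tinc':
    ensure          => installed,
    install_options => ['-t', "${::lsbdistcodename}-backports"],
  }
} else {
  package { 'tinc':
    ensure => installed,
  }
}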
> Do we have specific reasons to write & maintain our own code here?
See above.
> I bet that the (tinc + Puppet) experience and skills you gained while writing this code put you in a pretty good position to evaluate other, pre-existing modules: you now know exactly what they need to do, and I trust you’ll quickly be able to find out whether they are adequate for our needs or not.
Same.
Btw, I pushed some polishing commits on top in the feature branch recently.
#6 Updated by intrigeri 2016-02-14 13:28:18
- Assignee changed from intrigeri to bertagaz
>> Do we have specific reasons to write & maintain our own code here?
> See above.
I’m very much unconvinced by this reasoning.
Is it an option to continue this discussion in 10 days?
#7 Updated by bertagaz 2016-02-14 14:15:40
- Assignee changed from bertagaz to intrigeri
intrigeri wrote:
> I’m very much unconvinced by this reasoning.
I’m not myself very much convinced by the idea of using a third-party puppet module, which would likely result in us becoming the new upstream. If we have to maintain code, the one I’m proposing is simpler. Note that the immerda one uses bridges as the interface for its VPN, which is not what I did.
> Is it an option to continue this discussion in 10 days?
Well, 10 days sounds like too much to me, but we’ll have an occasion to do so next week.
#8 Updated by intrigeri 2016-02-15 10:04:18
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Info Needed to Dev Needed
> I’m not myself very much convinced by the idea of […]
I’ll refrain from arguing on this ticket, since it seems futile: apparently it’s too late to change this implementation decision, at least for the first iteration.
> but we’ll have an occasion to do so next week.
Let’s not count on it. It won’t be my top priority, and the way this “discussion” started doesn’t make me wish to have it in a rush, so I suggest you just go ahead without blocking on me.
Just another design question in passing, and then I’ll let you go on working on it without interfering further. My understanding is that we are setting up a VPN primarily so that we can manage hosts with our puppetmaster across an untrusted network. It looks like this creates an interesting chicken’n’egg situation. So I’m curious about what the setup story for a new host that needs the VPN to connect to the puppetmaster looks like. I guess this ticket is not the best place to answer me: just document how this would be done, in the place where we usually keep such documentation, and point me to it, OK?
#9 Updated by bertagaz 2016-02-17 13:46:05
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Info Needed
intrigeri wrote:
> Let’s not count on it. It won’t be my top priority, and the way this “discussion” started doesn’t make me wish to have it in a rush, so I suggest you just go ahead without blocking on me.
Ok, it seems it won’t happen as proposed anyway. Sorry if my answers have been rough enough to make you feel insecure in this discussion.
> Just another design question in passing, and then I’ll let you go on working on it without interfering further. My understanding is that we are setting up a VPN primarily so that we can manage hosts with our puppetmaster across an untrusted network. It looks like this creates an interesting chicken’n’egg situation. So I’m curious about what the setup story for a new host that needs the VPN to connect to the puppetmaster looks like. I guess this ticket is not the best place to answer me: just document how this would be done, in the place where we usually keep such documentation, and point me to it, OK?
OK, working on this document right now. Meanwhile, I think there are some things left to decide:
- What IP addressing range will we use inside the VPN network? I propose ‘192.168.1.0/24’, which fits into the smallest of the RFC 1918 reserved private IPv4 address spaces.
- Shall we use another FQDN for systems that are connected through this VPN? I’m not sure about that right now. At the moment, I guess that if we configure Lizard to resolve ecours.t.b.o to its private VPN address, and ecours to also resolve lizard.t.b.o and puppet-git.t.b.o to their private VPN addresses, then we should be good. That’s a bit outside of my theoretical knowledge and there may be corner cases I don’t see (like regarding routing inside the VPN). So maybe we don’t have to make a decision on this right now, but rather wait until the VPN is deployed to see whether we need to resolve this question.
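To make the resolution idea concrete, here is a minimal sketch using Puppet’s host resource, which manages /etc/hosts entries. The name and VPN address are made up, and intrigeri argues later in this ticket against relying on /etc/hosts alone:

# Pin a peer's name to its (hypothetical) VPN address in /etc/hosts:
host { 'puppet-git.lizard.tails.boum.org':
  ensure       => present,
  ip           => '192.168.1.1',
  host_aliases => ['puppet-git.lizard'],
}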
#10 Updated by bertagaz 2016-02-17 15:12:37
Just pushed a new file in our sysadmin internal repo.
It describes the steps to perform by hand to configure the VPN connection once one has installed a new machine, just before being able to run the puppet client. I don’t think we have many other options with this chicken-and-egg problem. There may be some trouble, but I think the ecours deployment will help catch it and refine this documentation.
#11 Updated by bertagaz 2016-02-20 22:45:22
Ping? Shall I wait for an ack on the documentation part before going on, given that my availability for the end of the month is already short, as stated before?
#12 Updated by intrigeri 2016-02-20 23:18:36
> Ping? Shall I wait for an ack on the documentation part before going on, given that my availability for the end of the month is already short, as stated before?
I thought I had made it clear that you should not block on me here, sorry. Please go ahead.
#13 Updated by intrigeri 2016-02-22 13:09:31
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Info Needed to Dev Needed
#14 Updated by bertagaz 2016-02-29 16:32:15
- Status changed from Confirmed to In Progress
- Assignee changed from bertagaz to intrigeri
- % Done changed from 50 to 70
- QA Check changed from Dev Needed to Ready for QA
Ok, I’ve just finished setting up the VPN between the monitoring host and lizard. Works fine, the puppet agent is connecting through the VPN to puppet-git.lizard, yay!
#15 Updated by intrigeri 2016-03-01 19:09:23
> * What IP addressing range will we use inside the VPN network? I propose ‘192.168.1.0/24’, which fits into the smallest of the RFC 1918 reserved private IPv4 address spaces.
I’m glad you ended up picking something that’s less common => this has better chances to work if we ever need to connect to our VPN nodes that e.g. are behind NAT in home/office contexts (ARM dev boards come to mind). Great!
> * Shall we use another FQDN for systems that are connected through this VPN? I’m not sure about that right now. At the moment, I guess that if we configure Lizard to resolve ecours.t.b.o to its private VPN address, and ecours to also resolve lizard.t.b.o and puppet-git.t.b.o to their private VPN addresses, then we should be good.
If I got it right, the underlying (though unstated) problem here is: how do we make sure that the connections we really want to go through the VPN indeed go through it, instead of straight over the less safe Internet. Correct?
I don’t think that to enforce this kind of security we should rely purely on tweaking /etc/hosts to point to VPN IPs, because 1. it’s easy to get it wrong and have some future change overwrite this; 2. software may resolve DNS without honoring /etc/hosts. So I prefer that:
- we use names that don’t have another valid meaning on the Internet currently (e.g. *.lizard, even though we have no guarantee that it doesn’t become a valid TLD some day);
- we make sure that what we don’t want to access without the VPN, because it would feel too dangerous, is simply not made available outside of the VPN ⇒ let’s use monitoring to enforce that (I’ll let you update the blueprint + create tickets as needed). So far that’s only the Icinga2 and Puppet connections, right?
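As a concrete illustration of the second point, restricting e.g. the puppetmaster port to the VPN zone could look roughly like the sketch below. It assumes a shorewall::rule define with these parameter names, which may not match the module actually in use:

# Only the vpn zone may reach the puppetmaster port; everything
# else falls through to the restrictive zone policies.
shorewall::rule { 'vpn-to-fw-puppet':
  source          => 'vpn',
  destination     => '$FW',
  proto           => 'tcp',
  destinationport => '8140',
  order           => 200,
  action          => 'ACCEPT',
}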
Is there any other underlying problem you had in mind while asking this question?
#16 Updated by intrigeri 2016-03-01 20:26:13
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
Regarding using existing modules vs. NIH’ing around, I have a few things to say. I don’t expect this to change what you decided a few weeks ago: my understanding is that it was too late when the code was submitted already (which is another problem, but let’s not dive into it here and now; I totally can do the same myself). This is rather about future design/implementation choices.
First, we’re not doing a very good job at maintaining our existing Puppet codebase. We’re quite good at adding to it, including the occasional dirty workaround that we want to fix “later”, but in practice we rarely invest any time into refactoring, updating and cleaning up stuff unless we really have to (e.g. when we ported lots of stuff to Jessie). This explains in great part why I’m extremely doubtful when I see us write code from scratch instead of reusing existing one.
Regarding whether this or that existing piece of code is maintained or not. Tons of our own Puppet code has been unmodified for 10+ months, and that’s because it works fine, not because it’s unmaintained :) Whenever you wonder if existing code is maintained or used, before drawing any conclusion about whether we should write our own, or will become the new upstream, I suggest you ask its authors (and potential users) if indeed they still rely on it. We can talk to immerda people, they don’t bite. Regarding this specific module by immerda: the last large set of commits came in when the module was updated for RHEL 7. I doubt they’ll need to touch it at all until they upgrade to RHEL 8, which is not released yet, so the lack of activity is not surprising to me.
Specifically regarding Puppet code published by immerda: the great thing is that they write high-quality code, actively maintain the modules they use, and more importantly perhaps they are much more on top of Puppet things than us, e.g. they have started converting their stuff to be compatible with the “future” parser (Puppet 4.x) a while ago — we did not, which blocked at least one pull request I’ve sent to some other upstream Puppet module already; and converting our code to the new Puppet language will definitely imply some serious maintenance cost to all the custom code we are writing (call it technical debt or not). So in some cases, improving immerda modules to add Debian support can imply that we maintain the Debian part only, instead of the whole thing. I think it’s worth considering this option a bit closer next time.
Let me add that it’s not only about writing or maintaining code: the reviewing effort increases quite a bit when writing stuff from scratch. In this case, now I have to review a complete new Puppet VPN management design + implementation (and then we’ll have to go through the follow-up work + improvements + review + discussion), which would not be the case if we were based on e.g. immerda’s module: I could trust their design to some extent, not only because they are way better than any of us at Puppet (which is indeed the case), but also because it’s code that presumably has been working for years in the real world :)
Enough talk about the past and the future; I’ll now switch to the actual code review.
All in all it looks good, and the design sounds sane. Keep in mind that I know nothing about tinc, though.
Why do we store the VPN hosts’ private key online in Git? It seems to me that we can just create new pairs of keys when needed, or get them from our backups (=> backup config, maybe).
The $ip_address, $vpn_address and $vpn_subnet business is quite confusing => please document what these parameters mean.
In tails::vpn::instance, the handling of $external_ip_address is overly complicated: just use $::ipaddress as the default parameter value?
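In other words, something like this sketch (parameter and define names are illustrative):

define tails::vpn::instance (
  # Default to the node's primary address as reported by facter,
  # instead of computing it by hand:
  $external_ip_address = $::ipaddress,
) {
  # ...
}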
It seems that we never set $use_shorewall to true, and instead we have manually configured ecours-to-lizard-vpn-udp and ecours-to-lizard-vpn-tcp. I don’t get it. Can we make up our mind and settle on one of those? (I guess the least manual one might be nicer.)
Why is vpn-to-fw’s policy set to ACCEPT? We’re much stricter for the vmz zone, and I like it this way.
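For comparison, a stricter default could be expressed roughly as in the sketch below, assuming a shorewall::policy define with these parameter names:

# Drop vpn-to-fw traffic by default; individual rules then open
# only what is needed.
shorewall::policy { 'vpn-to-fw':
  sourcezone      => 'vpn',
  destinationzone => '$FW',
  policy          => 'DROP',
  order           => 100,
}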
Looks like rsa_key.pub should be 0600 instead of 0700, no?
Regarding the up/down scripts:
- Any particular reason to use the mostly deprecated ifconfig and route commands, instead of ip(8)?
- Maybe use set -e?
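Combining both suggestions, a tinc-up script managed from Puppet might look roughly like this sketch. It assumes $vpn_address is in scope, and relies on $INTERFACE, the environment variable tinc exports to its scripts:

file { "/etc/tinc/${name}/tinc-up":
  owner   => 'root',
  group   => 'root',
  mode    => '0755',
  content => "#!/bin/sh
set -e
# Use ip(8) rather than the deprecated ifconfig/route:
ip link set \"\$INTERFACE\" up
ip addr add ${vpn_address} dev \"\$INTERFACE\"
",
}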
#17 Updated by bertagaz 2016-03-10 18:51:04
- Target version changed from Tails_2.2 to Tails_2.3
#18 Updated by bertagaz 2016-03-17 11:53:25
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Ready for QA
intrigeri wrote:
> Regarding using existing modules vs. NIH’ing around, I have a few things to say. I don’t expect this to change what you decided a few weeks ago: my understanding is that it was too late when the code was submitted already (which is another problem, but let’s not dive into it here and now; I totally can do the same myself). This is rather about future design/implementation choices.
>
> First, we’re not doing a very good job at maintaining our existing Puppet codebase. We’re quite good at adding to it, including the occasional dirty workaround that we want to fix “later”, but in practice we rarely invest any time into refactoring, updating and cleaning up stuff unless we really have to (e.g. when we ported lots of stuff to Jessie). This explains in great part why I’m extremely doubtful when I see us write code from scratch instead of reusing existing one.
>
> Regarding whether this or that existing piece of code is maintained or not. Tons of our own Puppet code has been unmodified for 10+ months, and that’s because it works fine, not because it’s unmaintained :) Whenever you wonder if existing code is maintained or used, before drawing any conclusion about whether we should write our own, or will become the new upstream, I suggest you ask its authors (and potential users) if indeed they still rely on it. We can talk to immerda people, they don’t bite. Regarding this specific module by immerda: the last large set of commits came in when the module was updated for RHEL 7. I doubt they’ll need to touch it at all until they upgrade to RHEL 8, which is not released yet, so the lack of activity is not surprising to me.
>
> Specifically regarding Puppet code published by immerda: the great thing is that they write high-quality code, actively maintain the modules they use, and more importantly perhaps they are much more on top of Puppet things than us, e.g. they have started converting their stuff to be compatible with the “future” parser (Puppet 4.x) a while ago — we did not, which blocked at least one pull request I’ve sent to some other upstream Puppet module already; and converting our code to the new Puppet language will definitely imply some serious maintenance cost to all the custom code we are writing (call it technical debt or not). So in some cases, improving immerda modules to add Debian support can imply that we maintain the Debian part only, instead of the whole thing. I think it’s worth considering this option a bit closer next time.
>
> Let me add that it’s not only about writing or maintaining code: the reviewing effort increases quite a bit when writing stuff from scratch. In this case, now I have to review a complete new Puppet VPN management design + implementation (and then we’ll have to go through the follow-up work + improvements + review + discussion), which would not be the case if we were based on e.g. immerda’s module: I could trust their design to some extent, not only because they are way better than any of us at Puppet (which is indeed the case), but also because it’s code that presumably has been working for years in the real world :)
I admit this choice is questionable, and probably not the right one. I think it was mostly a matter of context: at that time I had a severe need for this VPN to be deployed within days. I chose the fastest path, writing these < 250 lines of code, rather than contacting immerda (with the communication overhead that implies) and getting into adapting their quite complex code. In the end, that’s not so much code, and it was a good exercise to get into puppet features I’m now relying on a lot in the puppet monitoring manifests.
In no way is this choice a stand against immerda’s puppet code quality. On the contrary: if you read both Tinc puppet codebases, you’ll find them close, as immerda’s code has inspired mine. Ours is simplified, without the extras immerda’s has, like automatic key generation for Tinc nodes. Given the level of the puppet code in the immerda module, I also felt a bit less at ease proposing patches to support our use case. :)
Anyway, a switch from one to the other shouldn’t be too hard, but proposing patches will probably take time. I’ve created Bug #11253 to track this, but I won’t have time to work on it right now.
> Why do we store the VPN hosts’ private key online in Git? It seems to me that we can just create new pairs of keys when needed, or get them from our backups (=> backup config, maybe).
So puppet won’t handle this file in your proposal?
> The $ip_address, $vpn_address and $vpn_subnet business is quite confusing => please document what these parameters mean.
Right, commit puppet-tails:7dce398
> In tails::vpn::instance, the handling of $external_ip_address is overly complicated: just use $::ipaddress as the default parameter value?
Done. Commit puppet-tails:216bc52
> It seems that we never set $use_shorewall to true, and instead we have manually configured ecours-to-lizard-vpn-udp and ecours-to-lizard-vpn-tcp. I don’t get it. Can we make up our mind and settle on one of those? (I guess the least manual one might be nicer.)
Was a leftover of the implementation step that I forgot during the deployment. Unneeded indeed! Done in commit puppet-tails:1d3c1fc
> Why is vpn-to-fw’s policy set to ACCEPT? We’re much stricter for the vmz zone, and I like it this way.
Good question. That was a bit bold. Fixed in commit puppet-lizard-manifests:1121e40 and followers.
> Looks like rsa_key.pub should be 0600 instead of 0700, no?
Hmm, but it is already, isn’t it?
> Regarding the up/down scripts:
>
> - Any particular reason to use the mostly deprecated ifconfig and route commands, instead of ip(8)?
I’m old school, remember? :)
Commit puppet-tails:0df6ee8
> - Maybe use set -e?
Commit puppet-tails:e8759e9
#19 Updated by intrigeri 2016-03-22 11:41:38
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
>> Why do we store the VPN hosts’ private key online in Git? It seems to me that we can just create new pairs of keys when needed, or get them from our backups (=> backup config, maybe).
> So puppet won’t handle this file in your proposal?
Right (except I’m not really proposing anything, I’m asking why it’s done this way and not differently).
>> The $ip_address, $vpn_address and $vpn_subnet business is quite confusing => please document what these parameters mean.
> Right, commit puppet-tails:7dce398
You probably rewrote history since I can’t find this commit. But anyway, I found the doc in tails::vpn::instance.
I’m still a bit confused: it looks like we’re conveying the network prefix size info twice, in vpn_address (since it’s CIDR) and in vpn_subnet’s netmask. Did I get it right? If there’s a cheap way to avoid info duplication (best), or to ensure that duplicated info is kept in sync (worse), then please use it. Otherwise, please clarify this in the doc, so users of this code can’t pass contradictory parameter values.
>> Looks like rsa_key.pub should be 0600 instead of 0700, no?
> Hmm, but it is already, isn’t it?
Looks like we’re not on the same page. I’m on that one:
$ ssh lizard.tails.boum.org sudo ls -l /etc/tinc/tailsvpn/rsa_key.pub
-rwx------ 1 root root 776 Feb 29 08:54 /etc/tinc/tailsvpn/rsa_key.pub
Everything else I’ve looked at seems great, yeah.
#20 Updated by bertagaz 2016-03-31 15:27:28
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Ready for QA
intrigeri wrote:
> >> Why do we store the VPN hosts’ private key online in Git? It seems to me that we can just create new pairs of keys when needed, or get them from our backups (=> backup config, maybe).
>
> > So puppet won’t handle this file in your proposal?
>
> Right (except I’m not really proposing anything, I’m asking why it’s done this way and not differently).
Well, we’re already spreading some of our secrets this way. I thought this was better integrated with puppet, compared to generating fresh keys when needed. Because then we’d have to find a way to spread the new public keys everywhere. Also storing them here gives us backups for free.
> >> The $ip_address, $vpn_address and $vpn_subnet business is quite confusing
>
> I’m still a bit confused: it looks like we’re conveying the network prefix size info twice, in vpn_address (since it’s CIDR) and in vpn_subnet’s netmask. Did I get it right? If there’s a cheap way to avoid info duplication (best), or to ensure that duplicated info is kept in sync (worse), then please use it. Otherwise, please clarify this in the doc, so users of this code can’t pass contradictory parameter values.
I’ve noticed that too, but didn’t really find a way to automate this part. Well, we could use some ruby code to parse the CIDR and turn it into a netmask (or the opposite) for the places where this format is needed, but I thought it was probably overkill to go down that path.
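For the record, the Ruby idea could look roughly like this sketch, using inline_template and Ruby’s IPAddr to derive the netmask from the CIDR value (not implemented, just an illustration):

# Turn $vpn_address's CIDR prefix into a dotted-quad netmask,
# e.g. '192.168.1.2/24' => '255.255.255.0':
$vpn_netmask = inline_template(
  "<%= require 'ipaddr';
       prefix = @vpn_address.split('/')[1].to_i;
       IPAddr.new('255.255.255.255').mask(prefix).to_s %>"
)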
But hopefully I’ve documented that a bit better in commit puppet-tails:059c195
> >> Looks like rsa_key.pub should be 0600 instead of 0700, no?
>
> > Hmm, but it is already, isn’t it?
>
> Looks like we’re not on the same page. I’m on that one:
Hmm, right. I should have noticed, though, that this file is in fact not handled by our puppet module at all, and is probably there as a leftover of the key generation process. The public key is already shipped in the hosts configuration files in Tinc, and that is enough for it to work. In fact I’ve deleted them on both hosts, and yeah, puppet doesn’t care about this file.
> Everything else I’ve looked at seems great, yeah.
So, RfQA again! :)
#21 Updated by intrigeri 2016-04-06 13:15:43
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Info Needed
> intrigeri wrote:
>> >> Why do we store the VPN hosts’ private key online in Git? It seems to me that we can just create new pairs of keys when needed, or get them from our backups (=> backup config, maybe).
>> > So puppet won’t handle this file in your proposal?
>> Right (except I’m not really proposing anything, I’m asking why it’s done this way and not differently).
> Well, we’re already spreading some of our secrets this way. I thought this was better integrated with puppet, compared to generating fresh keys when needed.
In general I would agree, and often we distribute secrets this way because it makes it easier to replicate some piece of infrastructure.
But here it’s a bit different: if I got it right, we need to manually set up those keys before we can even use Puppet on a new host, so also adding these files to Puppet seems to add work in the situations I can think of, instead of removing any. That’s why I was asking what the advantage of also adding these keys to Puppet was, so that we can check if it’s worth the (possibly minor) security risk. At this point of the discussion, I still fail to see the practical advantage we get out of taking that risk => please clarify.
> Because then we’d have to find a way to spread the new public keys everywhere.
I agree it’s good that the pubkeys are distributed with Puppet. I was asking about the private keys only.
> Also storing them here gives us backups for free.
We back up /etc on all hosts already, so that’s not a new thing.
>> >> The $ip_address, $vpn_address and $vpn_subnet business is quite confusing
>> I’m still a bit confused: it looks like we’re conveying the network prefix size info twice, in vpn_address (since it’s CIDR) and in vpn_subnet’s netmask. Did I get it right? If there’s a cheap way to avoid info duplication (best), or to ensure that duplicated info is kept in sync (worse), then please use it. Otherwise, please clarify this in the doc, so users of this code can’t pass contradictory parameter values.
> I’ve noticed that too, but didn’t really find a way to automate this part. Well, we could use some ruby code to parse the CIDR and turn it into a netmask (or the opposite) for the places where this format is needed, but I thought it was probably overkill to go down that path.
OK, I agree.
> But hopefully I’ve documented that a bit better in commit puppet-tails:059c195
Thanks! I’ve then cleaned up trailing white-space introduced by that commit.
>> >> Looks like rsa_key.pub should be 0600 instead of 0700, no?
>>
>> > Hmm, but it is already, isn’t it?
>>
>> Looks like we’re not on the same page. I’m on that one:
> Hmm, right. I should have noticed, though, that this file is in fact not handled by our puppet module at all, and is probably there as a leftover of the key generation process. The public key is already shipped in the hosts configuration files in Tinc, and that is enough for it to work. In fact I’ve deleted them on both hosts, and yeah, puppet doesn’t care about this file.
Now I wonder if this will reappear on every new host we add to our VPN. We’ll see.
#22 Updated by bertagaz 2016-04-20 07:43:06
- QA Check changed from Info Needed to Dev Needed
intrigeri wrote:
> In general I would agree, and often we distribute secrets this way because it makes it easier to replicate some piece of infrastructure.
>
> But here it’s a bit different: if I got it right, we need to manually set up those keys before we can even use Puppet on a new host, so also adding these files to Puppet seems to add work in the situations I can think of, instead of removing any. That’s why I was asking what the advantage of also adding these keys to Puppet was, so that we can check if it’s worth the (possibly minor) security risk. At this point of the discussion, I still fail to see the practical advantage we get out of taking that risk => please clarify.
Hmm, I think I got it now. Makes sense, as we may have to generate them manually. I didn’t catch the subtlety previously. I’ll remove them and the corresponding code.
> Now I wonder if this will reappear on every new host we add to our VPN. We’ll see.
Nope, I don’t think so.
#23 Updated by bertagaz 2016-04-21 06:51:47
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Ready for QA
bertagaz wrote:
> Hmm, I think I got it now. Makes sense, as we may have to generate them manually. I didn’t catch the subtlety previously. I’ll remove them and the corresponding code.
Done.
#24 Updated by intrigeri 2016-04-25 03:51:06
- Assignee changed from intrigeri to bertagaz
- QA Check changed from Ready for QA to Dev Needed
> Done.
sysadmins.git:VPN.mdwn needs to be updated, looks good otherwise!
#25 Updated by bertagaz 2016-04-25 04:02:27
- Assignee changed from bertagaz to intrigeri
- QA Check changed from Dev Needed to Ready for QA
intrigeri wrote:
> sysadmins.git:VPN.mdwn needs to be updated, looks good otherwise!
Right, good catch. Pushed a change fixing that.
#26 Updated by intrigeri 2016-04-25 04:16:40
- Status changed from In Progress to Resolved
- Assignee deleted (intrigeri)
- % Done changed from 70 to 100
- QA Check changed from Ready for QA to Pass
So we’re done here, congrats!