Adds connection-settings (for remote DB support) when creating DB resources.
Connection-settings accepts a hash of options that are used
when connecting to a remote DB (such as PGHOST, PGPORT, PGPASSWORD
and PGSSLKEY), plus a special option DBVERSION indicating the version
of the remote database.
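As a minimal sketch of the idea (assuming the option is exposed as a
`connect_settings` hash on `postgresql::server::db`; the exact parameter
and resource names may differ):

```puppet
# Sketch only: parameter and resource names are assumptions.
# Manage a database on a remote PostgreSQL server by passing the usual
# libpq connection variables plus the special DBVERSION hint.
postgresql::server::db { 'mydb':
  user             => 'myuser',
  password         => 'mypassword',
  connect_settings => {
    'PGHOST'     => 'db.example.com',
    'PGPORT'     => '5432',
    'PGPASSWORD' => 'supersecret',
    'DBVERSION'  => '9.3',
  },
}
```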
Including
- Puppet updates
- Documentation updates
- RSpec unit test updates
- RSpec acceptance test updates
- Some test coverage for connection-settings
- Working acceptance test...
Basic vagrant setup:
* Two boxes, server and client
* Runs puppet code on the server to set up a postgres server that allows md5-authenticated connections from all hosts, and creates a `puppet` db to test against (see the sketch after this list)
* Runs puppet code on the client so that a psql command can be run against the `puppet` db on the other server
* Does some fancy fact handling to get the IP address of the first server for the client to connect to
- Backwards compatible, with deprecation warnings around old parameters
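Roughly, the server-side manifest in that setup looks something like the
following (a sketch only; the real test code and parameter values differ):

```puppet
# Server box sketch: listen on all interfaces, allow md5-authenticated
# connections from anywhere, and create a `puppet` database to test against.
class { 'postgresql::server':
  listen_addresses        => '*',
  ip_mask_allow_all_users => '0.0.0.0/0',
}

postgresql::server::db { 'puppet':
  user     => 'puppet',
  password => 'puppet',
}
```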
Currently there is no resource for creating a recovery.conf.
This resource can create a recovery.conf for replication with all
currently supported parameters.
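As a hedged example of how the new resource might be declared (the
resource name `postgresql::server::recovery` and exact parameter names
are assumptions; the values are standard recovery.conf replication
settings):

```puppet
# Sketch only: declare a recovery.conf for a hot standby.
postgresql::server::recovery { 'recovery for standby':
  standby_mode     => 'on',
  primary_conninfo => 'host=primary.example.com port=5432 user=replicator password=secret',
  trigger_file     => '/var/lib/pgsql/failover.trigger',
}
```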
* Add param `validcon_path` in `postgresql::client` (defaults to the
  previous hard-coded value); see the sketch below this list.
* Add tests for this new parameter.
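For illustration, overriding the new parameter might look like this (the
path shown is purely illustrative, standing in for the previously
hard-coded value):

```puppet
# Sketch: install the connection-validation script to a custom location
# instead of the previous hard-coded default.
class { 'postgresql::client':
  validcon_path => '/opt/admin/bin/validate_postgresql_connection.sh',
}
```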
All tests run successfully on Scientific Linux 6.4:
```
$ bundle exec rake spec SPEC_OPTS='--format documentation'
[...]
Finished in 2 minutes 10.5 seconds (files took 1.4 seconds to load)
201 examples, 0 failures
```
Added $service_reload parameter to params.pp, in order to allow
a reload of the postgresql server on OpenBSD, which apparently doesn't
have a service binary.
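The idea, roughly (variable names and the exact reload commands are
assumptions, sketched against params.pp):

```puppet
# params.pp sketch: OpenBSD has no `service` binary, so carry an
# OS-specific reload command in $service_reload.
# Assumes $service_name is already set earlier in params.pp.
case $::osfamily {
  'OpenBSD': {
    $service_reload = "/etc/rc.d/${service_name} reload"
  }
  default: {
    $service_reload = "service ${service_name} reload"
  }
}
```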
This is likely to be a controversial change so I wanted to put some
explanation of our reasoning into the commit message. This gets
kind of complex so I'll start with the problem and then the reasoning.
Problem:
We rely heavily on the ability to uninstall and reinstall postgres
throughout our testing code, exercising features like "can I move from
the distribution packages to the upstream packages through the module".
Over time we've learnt that the uninstall code simply doesn't work a lot
of the time. It leaves traces of postgres behind or fails to remove
certain packages on Ubuntu, and generally leaves bits on your system
that you didn't expect.
When we then reinstall, things fail because it's not a true clean slate,
and this causes us enormous problems during testing. We've spent weeks
and months working on these tests and they simply don't hold up well
across the full range of PE platforms.
Reasoning:
Due to all these problems we've decided to take a stance on uninstalling
in general. We feel that in 2014 it's completely reasonable and normal
to have a good provisioning pipeline combined with your configuration
management and the "correct" way to uninstall a fully installed service
like postgresql is to simply reprovision the server without it in the
first place. As a general rule this is how I personally like to work,
and I think it is good practice.
WAIT A MINUTE:
We understand that there are environments and situations in which it's
not easy to do that. What if you accidentally deployed Postgres on
100,000 nodes? When this work is finished I'm going to take a look at
building some example 'profiles' to be found under examples/ within this
module that can uninstall postgres on popular platforms. These can be
modified and used in your specific case to uninstall postgresql. They
will be much more brute force, relying on deleting entire directories,
and will require you to do more work up front in specifying where things
are installed, but we think it'll prove to be a much cleaner mechanism for
this kind of thing rather than trying to weave it into the main module
logic itself.
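To make that concrete, a brute-force uninstall profile for a
Debian-family box might look roughly like this (package names and
directories are illustrative and would need adjusting per platform):

```puppet
# examples/ sketch: brute-force removal, deliberately kept outside the
# module's main logic. Adjust packages and paths for your platform.
class profiles::postgresql_uninstall {
  package { ['postgresql-9.3', 'postgresql-client-9.3', 'postgresql-common']:
    ensure => purged,
  }

  # Remove the data and config directories the packages leave behind.
  file { ['/var/lib/postgresql', '/etc/postgresql']:
    ensure  => absent,
    recurse => true,
    force   => true,
    require => Package['postgresql-common'],
  }
}
```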