Fixes #964
- Updated the `ZoneConnection` model to allow specifying the key algorithm.
- Added an `Algorithm` field to the protobuf file; it defaults to HMAC-MD5
- Updated JSON serialization to serialize and deserialize the algorithm
- Updated the Portal to allow the user to specify the algorithm when connecting to a zone or managing a zone
Supported algorithms are:
```
case object HMAC_MD5 extends Algorithm("HMAC-MD5.SIG-ALG.REG.INT")
case object HMAC_SHA1 extends Algorithm("hmac-sha1.")
case object HMAC_SHA224 extends Algorithm("hmac-sha224.")
case object HMAC_SHA256 extends Algorithm("hmac-sha256")
case object HMAC_SHA384 extends Algorithm("hmac-sha384.")
case object HMAC_SHA512 extends Algorithm("hmac-sha512.")
```
**Note: needs some tests**
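As a rough sketch, the algorithm list above could be modeled as an ADT with name-based lookup. The `fromName` helper and its HMAC-MD5 fallback are assumptions for illustration (mirroring the protobuf default noted above), not the actual VinylDNS API:

```scala
// Sketch only: models the supported algorithms as an ADT with a
// name-based lookup. `fromName` and its HMAC-MD5 fallback are
// hypothetical, mirroring the protobuf default described above.
sealed abstract class Algorithm(val name: String)

object Algorithm {
  case object HMAC_MD5    extends Algorithm("HMAC-MD5.SIG-ALG.REG.INT")
  case object HMAC_SHA1   extends Algorithm("hmac-sha1.")
  case object HMAC_SHA224 extends Algorithm("hmac-sha224.")
  case object HMAC_SHA256 extends Algorithm("hmac-sha256")
  case object HMAC_SHA384 extends Algorithm("hmac-sha384.")
  case object HMAC_SHA512 extends Algorithm("hmac-sha512.")

  val supported: List[Algorithm] =
    List(HMAC_MD5, HMAC_SHA1, HMAC_SHA224, HMAC_SHA256, HMAC_SHA384, HMAC_SHA512)

  // Resolve an algorithm from its wire name, falling back to the default.
  def fromName(n: String): Algorithm =
    supported.find(_.name.equalsIgnoreCase(n)).getOrElse(HMAC_MD5)
}
```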
Introduces the concept of a `Backend` into VinylDNS. This will allow support for any DNS backend in the future, for example AWS Route 53. This is consistent with the other "provider" mechanisms that dynamically load classes (Notifier, Repository, Queue, etc.).
The initial implementation builds on what we have today: when creating a zone, one can choose a `backendId` that is configured in `application.conf`. If no `backendId` is specified, we attempt to map the zone as we do today, preserving the exact same functionality.
We expand that by allowing one to map a `backendId` to a different provider (like aws).
After this PR:
1. If someone specifies a zone connection on a zone, it will work exactly as it does today, going through the `DnsBackend` to connect.
2. If someone specifies a `backendId` when setting up a zone, the naive mapping will map that zone to the `Backend` implementation configured with that `backendId`. For example, if you have configured a backend id `aws` that connects to Route 53, and you specify `aws` when connecting the zone, it will connect to it in Route 53. **Note: we still do not support zone creation, but this PR brings it much closer to reality.**
3. If someone specifies NEITHER, the `defaultBackendId` will be used, which could be on any one of the backend providers configured.
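The three rules above can be sketched as a tiny resolver. The types and names here are illustrative, not the real VinylDNS signatures:

```scala
// Illustrative only: decides which backend id a zone should use.
final case class Zone(name: String, backendId: Option[String])

final class NaiveResolver(configured: Set[String], defaultBackendId: String) {
  // Rule 2: an explicit, configured backendId wins.
  // Rule 3: otherwise fall back to defaultBackendId.
  // (Rule 1, an explicit zone connection, would bypass this lookup entirely.)
  def resolve(zone: Zone): String =
    zone.backendId.filter(configured.contains).getOrElse(defaultBackendId)
}
```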
To start, there is a new `vinyldns.core.domain.backend` package that contains the main classes for the system. In there you will find the following:
- `BackendProvider` - this is to be implemented by each provider. Adds a means of pre-loading zones, and providing connections to zones.
- `Backend` - provides connectivity to a particular backend instance, for example a particular DNS authoritative server. This is where the real work of interacting with the backend happens; for example, `DnsConnection` implements this to send DDNS messages to the DNS system. Consider this the main thing to implement.
- `BackendProviderLoader` - to be implemented by each provider; knows how to load its single `BackendProvider` instance, as well as possibly pre-loading configured `Backend`s or doing anything else it needs to get ready. It provides a dynamic hook via the `def load` method, which is called by the `BackendLoader` to load a specific `Backend`.
- `BackendResolver` - the main, default resolver. It holds all `BackendProvider` instances loaded via the `BackendLoader` and currently provides a naive lookup mechanism to find `Backend`s. It is really more of a router or resolver: in the future it could use more advanced techniques to find connections.
- `BackendConfigs` - used by the `BackendResolver` as the entry point into configuration for all backends
- `BackendProviderConfig` - a single backend provider configuration; specifies a `className` that should be the `BackendProviderLoader` implementation to be loaded, and a `settings` block that is passed to the `BackendProvider` to load itself. This is consistent with other providers.
- `BackendResponse` - uniform responses from all providers to the rest of the VinylDNS system
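A hedged sketch of the two core abstractions described above, plus a toy in-memory implementation for illustration; the actual VinylDNS signatures may differ:

```scala
// Sketch only: the core abstractions of the backend package.
trait Backend {
  def id: String
  def zoneExists(zoneName: String): Boolean
}

trait BackendProvider {
  def backends: List[Backend]
  // Naive "can anyone connect to this zone?" lookup used by the resolver.
  def connect(zoneName: String): Option[Backend] =
    backends.find(_.zoneExists(zoneName))
}

// Toy in-memory implementations, for illustration only.
final case class InMemoryBackend(id: String, zones: Set[String]) extends Backend {
  def zoneExists(zoneName: String): Boolean = zones.contains(zoneName)
}
final case class InMemoryProvider(backends: List[Backend]) extends BackendProvider
```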
**Workflow**
During initialization of the system:
1. The `BackendResolver` loads the `BackendConfigs` from the application configuration. This contains configuration for ALL backends
2. The `BackendResolver` utilizes the `BackendLoader` to dynamically load each backend individually. If any backend cannot be loaded, startup will fail.
3. The `BackendLoader` creates a new instance of the `className` for each `BackendProviderConfig`; this points to the `BackendProviderLoader` implementation, which takes care of loading the specific `BackendProvider` given its configuration.
4. The `BackendProviderLoader` does any initialization necessary to ensure it is ready. In the case of Route 53, it pre-loads and caches all hosted zones available for the configured AWS account; a single `Route53Backend` is set up right now. For `DnsBackend`, a connection (server, port, TSIG key) is set up for each DNS authoritative system to integrate with.
During runtime of the system:
1. When anything is needed, the `BackendResolver` is consulted to determine how to look up the `Backend` that is needed. Right now it does this by naively scanning all of its `BackendProvider` instances, asking "can anyone connect to this zone?". More intelligent discovery rules can be added in the future.
2. Once a `Backend` is obtained, any operation can be performed:
1. `ZoneConnectionValidator` uses `zoneExists` and `loadZone` to validate a zone is usable by VinylDNS
2. `RecordSetChangeHandler` uses `resolve` and `applyChange` to apply changes to the DNS backend
3. `ZoneSyncHandler` and `DnsZoneViewLoader` use `loadZone` in order to load records into VinylDNS
**What else is here**
- Provided an implementation of a backend provider for DNS via `DnsBackend`
- Updated all of VinylDNS to use `Backend`s instead of being hard-coded to DNS
- Provided an implementation of a backend provider for AWS Route 53 as an example to follow for other providers
**Example configuration**
```
vinyldns {
backend {
default-backend-id = "r53"
backend-providers = [
{
class-name = "vinyldns.route53.backend.Route53BackendProviderLoader"
settings = {
backends = [
{
id = "test"
access-key = "vinyldnsTest"
secret-key = "notNeededForSnsLocal"
service-endpoint = "http://127.0.0.1:19009"
signing-region = "us-east-1"
}
]
}
}
]
}
}
```
The BLOB type is just too small for the previous value. Reduce the
number of characters by two -- the purpose of the test is still being
fulfilled. A migration to MEDIUMBLOB or LONGBLOB will be required if we
need this to be bigger.
* Revert "support DeleteRecord in New DNS Change form (#791)"
This reverts commit cbaa13e647fd68f1db83968bc6ec52dc5cf7341d.
* Revert "[DeleteRecord] Remove multi-record config (#836)"
This reverts commit 807f6760d92ed3838fa8b8f0d816dafe8ce46bb7.
* Revert "add DeleteRecord info to the docs (#792)"
This reverts commit f19f293cf754a2c96d35c1a9fdb0fd1cf3bba2cb.
Major overhaul of func tests to allow them to run in parallel. Major changes include:
1. Consolidate all separate test fixtures into a single test fixture in the `shared_zone_test_context`
1. Add `xdist` to allow running tests in parallel
1. Add hooks in main `conftest.py` to set up the test fixture before workers run, and tear it down when workers are finished
1. After fixture is setup, save state in a local `tmp.out` so the workers will use that state instead of trying to recreate the fixture.
1. Add a `utils.generate_record_name` which generates a unique record name in order to avoid conflicts when running tests in parallel
1. Add a `pytest.mark.serial` marker for func tests that just cannot be run in parallel
1. Tests are now run in two phases: first we run the parallel tests, and if that is successful, we run the serial tests
1. Add a `--teardown` flag; this allows us to reuse the test fixture between the parallel and serial phases
Changes in this pull request:
- `GlobalACLs` - captures logic around testing a user's `AuthPrincipal` for access to a zone
- `AccessValidations` - modified the `getAccessLevel` to consult the `GlobalACLs` for a user to determine if the user has access. `AccessValidations` now also takes `GlobalACLs`
- `VinylDNSConfig` - load the `GlobalACLs` from the config file
- `Boot` - load two separate `AccessValidations` instances. One is used exclusively for batch changes and _will_ consult the configured global ACLs. The other, used by the normal record set interface, will not consult the global ACLs. This is a TODO for cleanup.
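A toy model of the `GlobalACLs` idea above; the field names and matching semantics here are assumptions for illustration, not the actual implementation:

```scala
// Sketch only: a global ACL grants access when the user belongs to one of
// the ACL's groups AND the record FQDN matches one of its regexes.
final case class GlobalAcl(groupIds: List[String], fqdnRegexList: List[String])

final class GlobalAcls(acls: List[GlobalAcl]) {
  def isAuthorized(userGroupIds: Set[String], fqdn: String): Boolean =
    acls.exists { acl =>
      acl.groupIds.exists(userGroupIds.contains) &&
        acl.fqdnRegexList.exists(regex => fqdn.matches(regex))
    }
}
```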
When a scheduled change is submitted, if there are no hard errors advance to PendingApproval status.
* `BatchChange` - changed the calculation of the batch change status; if it is pending approval and the scheduled time is set it will be `Scheduled`
* `MySqlBatchChangeRepository` - updated the deserialization to consider scheduled time, so when "getting" a batch change the status returned will appropriately be `Scheduled`
* `BatchChangeService` - updated the `buildResponse` method to consider scheduled time. If there are no fatal errors, manual review is enabled, and the change is scheduled, then go to `PendingApproval`
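The status calculation described above can be sketched as a small pure function. The enum and helper names mirror the text but are hypothetical, not the exact VinylDNS code:

```scala
// Sketch only: surface a pending-approval change with a scheduled time
// as Scheduled, leaving every other status untouched.
sealed trait Status
case object PendingApproval extends Status
case object Scheduled extends Status
case object Complete extends Status

def effectiveStatus(status: Status, scheduledTime: Option[Long]): Status =
  (status, scheduledTime) match {
    case (PendingApproval, Some(_)) => Scheduled
    case (s, _)                     => s
  }
```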
Add a scheduled change feature flag. If a user submits a batch change that is scheduled, and the feature flag is disabled, then raise an error.
* `BatchChangeValidations` - added a class member variable that holds the feature flag, and a function `validateScheduledChange` implementing the business rule: if the feature flag is disabled and a scheduled time is set, raise an error
* `Boot` - pass the config value into the `BatchChangeValidations` constructor
* `BatchChangeRoute` - added error handler for `ScheduleChangeDisabled` error
* `VinylDNSConfig` - add a feature flag
* `api/reference.conf` - ensure that the scheduled batch change enabled flag is set to `false` (off) by default
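The feature-flag rule above amounts to a one-branch validation; this sketch uses an illustrative error type and signature, not the real `BatchChangeValidations` API:

```scala
// Sketch only: reject a scheduled change when the feature flag is off.
final case class ScheduledChangesDisabled(message: String)

def validateScheduledChange(
    scheduledTime: Option[Long],
    scheduledChangesEnabled: Boolean
): Either[ScheduledChangesDisabled, Unit] =
  if (scheduledTime.isDefined && !scheduledChangesEnabled)
    Left(ScheduledChangesDisabled("Scheduled changes are disabled"))
  else
    Right(())
```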
Add scheduled time field to prepare for scheduled batch changes.
* `create_batch_change_test.py` - add a test that ensures posting and retrieving using the scheduled time field works as expected
* `BatchChangeProtocol.scala` - add scheduledTime field to `BatchChangeInput`
* `BatchChangeService.scala` - modify the `buildResponse` method; it does some logic to produce the resulting `BatchChange` entity, and we need to ensure that the `scheduledTime` field propagates through that logic
* `BatchChangeJsonProtocol.scala` - make sure that `BatchChangeInputSerializer` takes in `scheduledTime`, make sure that `BatchChangeSerializer` converts `scheduledTime` to json
* `BatchChange.scala` - add `scheduledTime` field
* `BatchChangeSummary.scala` - add `scheduledTime` field
* `V3.18__ScheduledChange.sql` - add a `scheduled_time` field to the `batch_change` table; add an index on `scheduled_time` as well to support querying by scheduled time
* `MySqlBatchChangeRepository` - make sure that save and get methods support `scheduledTime`