Major overhaul of func tests to allow them to run in parallel. Major changes include:
1. Consolidate all separate test fixtures into a single test fixture in the `shared_zone_test_context`
1. Add `pytest-xdist` to allow running tests in parallel
1. Add hooks in the main `conftest.py` to set up the test fixture before workers run, and tear it down when workers are finished
1. After the fixture is set up, save its state in a local `tmp.out` file so the workers use that state instead of trying to recreate the fixture.
1. Add a `utils.generate_record_name` which generates a unique record name in order to avoid conflicts when running tests in parallel
1. Add a `pytest.mark.serial` marker for func tests that cannot be run in parallel
1. Tests now run in two phases: first the parallel tests run, and if they pass, the serial tests run
1. Add a `--teardown` flag, which allows us to reuse the test fixture across the parallel and serial phases
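The uniqueness helper can be sketched roughly as follows (the real `utils.generate_record_name` may differ; the `prefix` parameter here is an assumption):

```python
import uuid


def generate_record_name(prefix="test"):
    """Generate a unique DNS record name so parallel test workers
    do not collide on the same record. Illustrative sketch only;
    the actual implementation in utils.py may differ."""
    return "{}-{}".format(prefix, uuid.uuid4().hex[:12])
```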
Follows the same pattern used for listing groups: set the default page size to 1000 when listing members.
- `service.groups.js` - modified the call to `getGroupMembers` to hard code the page size to 1000
- `service.groups.spec.js` - fixed unit tests
- `TestDataLoader` - added a sample group containing all 200 dummy users for simple testing.
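The page-size pattern, sketched in Python rather than the actual `service.groups.js` code (the `fetch_page` callback and constant name are illustrative):

```python
MAX_PAGE_SIZE = 1000  # illustrative constant mirroring the hard-coded page size


def get_all_group_members(fetch_page):
    """Fetch members in pages of MAX_PAGE_SIZE, following the same
    default-page-size pattern used for listing groups.
    fetch_page(start_from, max_items) returns (members, next_start),
    where next_start is None when there are no more pages."""
    members, start = [], None
    while True:
        page, start = fetch_page(start, MAX_PAGE_SIZE)
        members.extend(page)
        if start is None:
            return members
```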
Changes in this pull request:
- `GlobalACLs` - captures logic around testing a user's `AuthPrincipal` for access to a zone
- `AccessValidations` - modified the `getAccessLevel` to consult the `GlobalACLs` for a user to determine if the user has access. `AccessValidations` now also takes `GlobalACLs`
- `VinylDNSConfig` - load the `GlobalACLs` from the config file
- `Boot` - load two separate `AccessValidations`. One is used exclusively for batch changes and _will_ consult the configured global ACLs. The other, used by the normal record set interface, does not consult the global ACLs. Consolidating these is a TODO for cleanup
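A rough Python sketch of what a global-ACL check can look like (the real `GlobalACLs` is Scala and its matching rules may differ; pairing group IDs with zone-name regexes is an assumption here):

```python
import re


class GlobalAcls:
    """Illustrative global-ACL lookup: each entry pairs a set of
    group IDs with zone (FQDN) regexes that those groups may access."""

    def __init__(self, acls):
        # acls: list of (group_ids, fqdn_regex_list) tuples
        self.acls = [(set(gids), [re.compile(p) for p in pats])
                     for gids, pats in acls]

    def is_authorized(self, auth_group_ids, zone_name):
        """True if any ACL grants one of the user's groups access to a
        zone whose name matches one of that ACL's regexes."""
        gids = set(auth_group_ids)
        return any(
            gids & acl_groups and any(p.match(zone_name) for p in pats)
            for acl_groups, pats in self.acls
        )
```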
Add a check to make sure that a scheduled change is not approved before its scheduled time.
* `BatchChangeErrors` - added a `ScheduledChangeNotDue` error type that is raised when someone approves (i.e. processes) a scheduled batch change too soon.
* `BatchChangeValidations` - added a `validateScheduledApproval` function that raises `ScheduledChangeNotDue` if the scheduled date has not yet passed
* `BatchChangeRouting` - added `ScheduledChangeNotDue` to `handleErrors` to return a `Forbidden`
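The approval rule above, sketched in Python (the Scala `validateScheduledApproval` signature differs; this only illustrates the time comparison):

```python
from datetime import datetime, timezone


class ScheduledChangeNotDue(Exception):
    """Raised when a scheduled batch change is approved too soon."""


def validate_scheduled_approval(scheduled_time, now=None):
    """Approving (processing) a scheduled change is only allowed once
    its scheduled time has passed; unscheduled changes always pass."""
    now = now or datetime.now(timezone.utc)
    if scheduled_time is not None and scheduled_time > now:
        raise ScheduledChangeNotDue(
            "Cannot process a scheduled change before its scheduled time")
```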
When a scheduled change is submitted and there are no hard errors, it advances to `PendingApproval` status.
* `BatchChange` - changed the calculation of the batch change status; if it is pending approval and the scheduled time is set it will be `Scheduled`
* `MySqlBatchChangeRepository` - updated deserialization to consider the scheduled time, so that when getting a batch change the returned status is appropriately `Scheduled`
* `BatchChangeService` - updated the `buildResponse` method to consider the scheduled time. If there are no fatal errors, manual review is enabled, and the change is scheduled, the status goes to `PendingApproval`
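The status calculation can be sketched as follows (status names are illustrative strings, not the actual Scala types):

```python
def batch_change_status(approval_status, scheduled_time):
    """Illustrative mapping: a pending-approval batch change with a
    scheduled time surfaces as Scheduled; otherwise the approval
    status stands. The real logic lives on the Scala BatchChange."""
    if approval_status == "PendingApproval" and scheduled_time is not None:
        return "Scheduled"
    return approval_status
```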
Add a scheduled change feature flag. If a user submits a batch change that is scheduled, and the feature flag is disabled, then raise an error.
* `BatchChangeValidations` - added a class member variable that holds the feature flag. Added a function `validateScheduledChange` that implements the business rule: if the feature flag is disabled and a scheduled time is set, raise an error
* `Boot` - pass the config value into the `BatchChangeValidations` constructor
* `BatchChangeRoute` - added error handler for `ScheduleChangeDisabled` error
* `VinylDNSConfig` - add a feature flag
* `api/reference.conf` - ensure that the scheduled batch change feature flag defaults to `false` (off)
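The feature-flag rule, sketched in Python (the exception and parameter names are illustrative, not the Scala ones):

```python
class ScheduledChangesDisabled(Exception):
    """Illustrative counterpart of the ScheduleChangeDisabled error."""


def validate_scheduled_change(scheduled_time, scheduled_changes_enabled):
    """Business rule: a scheduled time may only be submitted while the
    feature flag is on; otherwise raise an error."""
    if scheduled_time is not None and not scheduled_changes_enabled:
        raise ScheduledChangesDisabled(
            "Cannot submit a scheduled change while the feature is disabled")
```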
Add scheduled time field to prepare for scheduled batch changes.
* `create_batch_change_test.py` - add a test that ensures posting and retrieving using the scheduled time field works as expected
* `BatchChangeProtocol.scala` - add scheduledTime field to `BatchChangeInput`
* `BatchChangeService.scala` - modify the `buildResponse` method; it does some logic to produce the resulting `BatchChange` entity, and we need to ensure that the `scheduledTime` field propagates through that logic
* `BatchChangeJsonProtocol.scala` - make sure that `BatchChangeInputSerializer` takes in `scheduledTime`, make sure that `BatchChangeSerializer` converts `scheduledTime` to json
* `BatchChange.scala` - add `scheduledTime` field
* `BatchChangeSummary.scala` - add `scheduledTime` field
* `V3.18__ScheduledChange.sql` - add a `scheduled_time` field to the `batch_change` table; add an index on `scheduled_time` as well to support querying by scheduled time
* `MySqlBatchChangeRepository` - make sure that save and get methods support `scheduledTime`
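One way to picture the optional `scheduledTime` field in serialization (a hedged Python sketch, not the actual Scala `BatchChangeSerializer`; the field set here is illustrative):

```python
import json


def serialize_batch_change(batch_change):
    """Include scheduledTime in the JSON body only when it is set,
    emitting it as an ISO-8601 timestamp; omit it otherwise."""
    body = {"comments": batch_change.get("comments")}
    if batch_change.get("scheduledTime") is not None:
        body["scheduledTime"] = batch_change["scheduledTime"].isoformat()
    return json.dumps(body)
```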