- Simplify build config
- Add TTY check to Makefiles for running Docker containers
- Update `fs2` to latest patch
- Update `sbt-assembly` plugin
- Update portal to remove chatty console logging
- Update portal scripts to add license header
- Update prepare-portal/Gruntfile to combine js and css where applicable
- Remove unused gentelella files from final portal artifact
- Add support for shared zones to quickstart/docker images
- Consolidate built artifacts in `artifacts/` to make eventual release easier
- Fix broken links
- Fix formatting
- Add Makefile for running via docker
- Move README.md from `modules/docs/src/main/mdoc` to `modules/docs` to be consistent with `modules/portal`
- Move away from using multiple images for "quickstart" and instead use a single "integration" image which provides all of the dependencies
- Update `docker-up-vinyldns.sh` to support the new `integration` image
- Update `remove-vinyl-containers.sh` to clean up more thoroughly
- Update `verify.sh` to more reliably run `sbt` targets
- Update `build/docker/api/application.conf` to allow for overrides and default to the `vinyldns-integration` image
- Update `build/docker/portal/application.conf` to allow overrides and use `vinyldns-integration` image
- Update `build/docker/portal/Dockerfile` to use `vinyldns/build:base-build-portal` to reduce need to download dependencies over and over
- Update `api/assembly` sbt target to output to `assembly` rather than some deeply nested folder in `**/target`
- Update documentation to reflect changes
- Move `docker/` directory to `quickstart/` to reduce confusion with the `build/docker` directory
- Move `bin/` to `utils/` since the files are not binaries
- Add `.dockerignore` to root
- Remove old, unused scripts in `bin/`
- Remove old images from release: `test` and `test-bind` are no longer necessary; test images are in a different repo now
- Remove Docker image creation from sbt build config - actual `Dockerfile` files are easier to deal with
- Update scripts in `bin/` to utilize new Docker images
- Update documentation for changes
- Update all Docker Compose and configuration to use exposed ports on the `integration` image (19001, 19002, etc) both inside the container and outside to make testing more consistent irrespective of method
- Update FlywayDB dependency to v8 to fix a weird logging bug that showed up during integration testing. See: https://github.com/flyway/flyway/issues/2270
- Add `test/api/integration` Docker container definition to be used for any integration testing
- Move `modules/api/functional_test` to `test/api/functional` to centralize the "integration-type" external tests and testing utilities
- Move functional testing and integration image to the `test/` folder off of the root to reduce confusion with `bin/` and `docker/`
- Update `dnsjava` library
- Add support for H2 database
- Update functional tests to support parallel runs
- Remove the ability to specify the number of processes for functional tests; it is now always 4
- Add `Makefile` and `Dockerfile` in `functional_test` to make it easier to run tests without spinning up multiple containers
We used to rely on `tut` for docs; however, it has been deprecated in favor of `mdoc`.
Moved to an `mdoc` folder structure and updated all of the links (what a pain).
Introduces the concept of a `Backend` into VinylDNS. This will allow support for any DNS backend in the future, including AWS Route 53, for example. This is consistent with the other "provider" mechanisms for dynamic loading of classes (Notifier, Repository, Queue, etc.)
The initial implementation builds on what we have already: when creating a zone, one can choose a `backendId` that is configured in `application.conf`. If no `backendId` is specified, we attempt to map the zone as we do today, preserving the exact same functionality.
We expand on that by allowing one to map a `backendId` to a different provider (like AWS).
After this PR:
1. If someone specifies a zone connection on a zone, it will work exactly like it does today, namely go through the `DnsBackend` to connect.
2. If someone specifies a `backendId` when setting up a zone, the naive mapping will take place to map that zone to the `Backend` implementation that is configured with that `backendId`. For example, if you have configured a backend id `aws` that connects to Route 53, and you specify `aws` when connecting the zone, it will connect to it in Route 53. **Note: we still do not support zone creation, but it is much closer to reality with this PR.**
3. If someone specifies NEITHER, the `defaultBackendId` will be used, which could be on any one of the backend providers configured.
To start, there is a new `vinyldns.core.domain.backend` package that contains the main classes for the system. In there you will find the following:
- `BackendProvider` - implemented by each provider; provides a means of pre-loading zones and handing out connections to zones.
- `Backend` - provides connectivity to a particular backend instance, for example a particular DNS authoritative server. This is where the real work of interacting with the backend happens; for example, `DnsConnection` implements this to send DDNS messages to the DNS system. Consider this the "main" thing to implement.
- `BackendProviderLoader` - implemented by each provider; knows how to load its single `BackendProvider` instance, as well as possibly pre-loading configured `Backend`s or doing anything else it needs to get ready. It provides a dynamic hook via the `def load` method, which is called by the `BackendLoader` to load a specific `Backend`.
- `BackendResolver` - the main, default resolver. It holds all `BackendProvider` instances loaded via the `BackendLoader` and currently provides a naive lookup mechanism to find `Backend`s. It is really more of a router or resolver, as in the future it could use more advanced techniques for finding connections.
- `BackendConfigs` - used by the `BackendResolver` as the entry point into configuration for all backends.
- `BackendProviderConfig` - a single backend provider configuration; specifies a `className` that should be the `BackendProviderLoader` implementation to load, and a `settings` block that is passed to the provider so it can load itself. This is consistent with other providers.
- `BackendResponse` - uniform responses across all providers to the rest of the VinylDNS system.
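To make the shapes concrete, here is a minimal sketch of how these pieces might fit together; the domain types are stand-ins and the signatures are illustrative, not the exact VinylDNS API:
```scala
import cats.effect.IO

// Stand-ins for the VinylDNS domain types referenced below (illustrative only)
final case class Zone(name: String, backendId: Option[String])
final case class RecordSetChange(zone: Zone, payload: String)
sealed trait BackendResponse
object BackendResponse { case object NoError extends BackendResponse }

// Hypothetical shape of the core traits described above
trait Backend {
  def id: String
  def zoneExists(zone: Zone): IO[Boolean]
  def loadZone(zone: Zone): IO[List[String]]
  def applyChange(change: RecordSetChange): IO[BackendResponse]
}

trait BackendProvider {
  // Return a Backend if this provider is configured to serve the zone
  def connect(zone: Zone): Option[Backend]
}
```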
**Workflow**
During initialization of the system:
1. The `BackendResolver` loads the `BackendConfigs` from the application configuration. This contains configuration for ALL backends
2. The `BackendResolver` uses the `BackendLoader` to dynamically load each backend individually. If any backend cannot be loaded, startup fails
3. The `BackendLoader` creates a new instance of the `className` for each `BackendProviderConfig`; this points to the `BackendProviderLoader` implementation, which takes care of loading the specific `BackendProvider` given the configuration
4. The `BackendProviderLoader` does any initialization necessary to ensure it is ready. In the case of Route 53, it pre-loads and caches all hosted zones available to the configured AWS account; a single `Route53Backend` is set up right now. For `DnsBackend`, a connection (server, port, TSIG key) is set up for each DNS authoritative system to integrate with.
During runtime of the system:
1. When anything is needed, the `BackendResolver` is consulted to determine how to look up the required `Backend`. Right now this is done by naively scanning all of its `BackendProvider` instances, asking each "can you connect to this zone?". More intelligent discovery rules can be added in the future
2. Once a `Backend` is obtained, any operation can be performed:
1. `ZoneConnectionValidator` uses `zoneExists` and `loadZone` to validate a zone is usable by VinylDNS
2. `RecordSetChangeHandler` uses `resolve` and `applyChange` to apply changes to the DNS backend
3. `ZoneSyncHandler` and `DnsZoneViewLoader` use `loadZone` in order to load records into VinylDNS
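Putting the runtime flow together, a naive resolver over the traits sketched above might look like this (hypothetical; the real resolver lives in `vinyldns.core.domain.backend` and may differ):
```scala
import cats.effect.IO

// Naive resolution: scan every provider, asking "can you connect to this zone?",
// and fall back to the default backend when none claims it
class BackendResolver(providers: List[BackendProvider], default: Backend) {
  def resolve(zone: Zone): Backend =
    providers.view.flatMap(_.connect(zone)).headOption.getOrElse(default)
}

// e.g. what a change handler would do once a Backend is obtained
def handleChange(resolver: BackendResolver, change: RecordSetChange): IO[BackendResponse] =
  IO(resolver.resolve(change.zone)).flatMap(_.applyChange(change))
```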
**What else is here**
- Provided a backend provider implementation for DNS (`DnsBackend`)
- Updated all of VinylDNS to use `Backend`s instead of being hard-coded to DNS
- Provided a backend provider implementation for AWS Route 53 as an example for other providers to follow
**Example configuration**
```
vinyldns {
  backend {
    default-backend-id = "r53"
    backend-providers = [
      {
        class-name = "vinyldns.route53.backend.Route53BackendProviderLoader"
        settings = {
          backends = [
            {
              id = "test"
              access-key = "vinyldnsTest"
              secret-key = "notNeededForSnsLocal"
              service-endpoint = "http://127.0.0.1:19009"
              signing-region = "us-east-1"
            }
          ]
        }
      }
    ]
  }
}
```
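For reference, the `class-name` above is what gets reflectively instantiated at startup. A minimal sketch of that dynamic step, building on the traits sketched earlier (the trait shape here is hypothetical):
```scala
import cats.effect.IO
import com.typesafe.config.Config

// Hypothetical counterpart to BackendProviderLoader: each provider ships an
// implementation that knows how to build its BackendProvider from `settings`
trait BackendProviderLoader {
  def load(settings: Config): IO[BackendProvider]
}

// Reflectively instantiate the loader named by `class-name` in the config
def loadProviderLoader(className: String): BackendProviderLoader =
  Class
    .forName(className)
    .getDeclaredConstructor()
    .newInstance()
    .asInstanceOf[BackendProviderLoader]
```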
The `EmailNotifierIntegrationSpec` does not work in GitHub Actions for some reason, likely due to sending an email.
1. Added a `SkipCI` tag that enables us to skip certain tests in CI by default.
2. Updated `build.sbt` to exclude `SkipCI`-tagged tests by default.
3. Added `taggedAs(SkipCI)` to the email integration test.
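For reference, ScalaTest's standard tagging mechanism is all that is needed here; a minimal sketch of the pieces described above:
```scala
import org.scalatest.Tag

// The tag object that tests reference via taggedAs(SkipCI)
object SkipCI extends Tag("SkipCI")
```
And the corresponding exclusion in `build.sbt`:
```scala
// Exclude SkipCI-tagged tests by default when running `test`
Test / testOptions += Tests.Argument(TestFrameworks.ScalaTest, "-l", "SkipCI")
```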
- Add request tracing header for calls from `portal->API`
- Update request logging to include `trace.id`
- Tone down the monitor logging to prevent log pollution
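For illustration, attaching such a header on a portal-to-API call via Play WS might look like this (the header name and id generation here are hypothetical):
```scala
import java.util.UUID
import play.api.libs.ws.{WSClient, WSResponse}
import scala.concurrent.Future

// Hypothetical: tag each outgoing portal->API request with a trace id
// so API request logs can be correlated back to the originating portal call
def tracedGet(ws: WSClient, url: String): Future[WSResponse] =
  ws.url(url)
    .addHttpHeaders("X-Trace-Id" -> UUID.randomUUID().toString)
    .get()
```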
Closes #976
* Updating dependencies
Updated almost all dependencies to current. There were some issues with
akka-http 10.1.11, so I stayed with 10.1.10 for the time being.
Func tests passed locally, and a manual review of the UI looked good.
Significant changes are:
- `pureconfig` - this update had breaking syntax changes, so I had to update everywhere
we use pureconfig. Functionally it is the same, just different syntax (see the sketch after this list)
- `scalatest` - this was a big change, as scalatest has refactored out things
like Mockito and scalacheck. Many imports changed.
- `Java11` - formally moved everything to java 11. This required some new
dependencies like `javax.activation` and `java.xml.bind`
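To illustrate the pureconfig change (the config class here is hypothetical; the mechanism is pureconfig's move from `loadConfig` to `ConfigSource`):
```scala
import pureconfig._
import pureconfig.generic.auto._

final case class ApiSettings(host: String, port: Int)

// Before the update:  pureconfig.loadConfig[ApiSettings](config)
// After the update, loading goes through ConfigSource:
val settings: ApiSettings = ConfigSource.default.loadOrThrow[ApiSettings]
```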
* Updating travis to JDK 11
* Finishing JDK 11 update
In order to update to JDK 11, I needed to modify several Docker-related things.
Removed a timeout test that was causing issues, as timeout tests are not well
suited to running in Travis.
Updated release process:
- `bin/release.sh` - added checks so we can only release from `master`, and only from the upstream repository
- `build.sbt` - removed sbt publishing of docker images, we will now use `build/docker-release.sh` for that release
- `build/release.sh` -- renamed --> `build/docker-release.sh`
- `build/docker-release.sh` - added a version override to make it simple to force a version
A few specific build optimizations:
1. Consolidated `dockerComposeUp` to use a single `root/docker/docker-compose.yml` instead of each module having its own docker-compose files. This eliminates additional waits for docker containers to start up and stop, and reduces memory consumption during the build
2. Cleaned up `VinylDNSSpec` - I noticed that this spec was taking 3 minutes to run! I discovered that the way we were mocking the `WSClient` was largely to blame. I was able to get the tests to run in **16 SECONDS** using a library called `mock-ws` (see the sketch below). This is where we see most of the savings.
3. Added back `dynamodb-local` instead of running it in `localstack`. Integration tests for dynamodb were very slow in localstack. This added an additional 20-30 second improvement.
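For reference, the `mock-ws` approach replaces a hand-mocked `WSClient` with canned routes; a minimal sketch in the style of the mock-ws README (the endpoint and payload are illustrative):
```scala
import mockws.MockWS
import mockws.MockWSHelpers._
import play.api.mvc.Results.Ok
import play.api.test.Helpers.{GET, stubControllerComponents}

val Action = stubControllerComponents().actionBuilder

// Calls made through `ws` are answered in-process, no HTTP server needed
val ws = MockWS {
  case (GET, "/api/zones") => Action(Ok("""{"zones":[]}"""))
}
// `ws` satisfies the WSClient interface, so it can be handed to code under test
```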
After doing several tests locally running the following command...
```
> SECONDS=0; sbt verify; echo "DURATION = $SECONDS SECONDS"
```
Current master took 535 seconds to run; with these optimizations it took **211 SECONDS** - that is a 60% improvement.
The initial Travis builds reported a run time of 13 minutes as opposed to 19 minutes; this would save some 6 minutes off of Travis build times (or 30% improvement).
Major overhaul of func tests to allow them to run in parallel. Changes include:
1. Consolidate all separate test fixtures into a single test fixture in the `shared_zone_test_context`
1. Add `xdist` to allow running tests in parallel
1. Add hooks in main `conftest.py` to setup the test fixture before workers run, and tear it down when workers are finished
1. After the fixture is set up, save its state in a local `tmp.out` so the workers use that state instead of trying to recreate the fixture.
1. Add a `utils.generate_record_name` which generates a unique record name in order to avoid conflicts when running tests in parallel
1. Add a `pytest.mark.serial` for func tests that just cannot be run in parallel
1. Tests are now run in two phases: first the parallel-safe tests, then, if those pass, the serial tests
1. Add a `--teardown` flag; this allows us to reuse the test fixture between the parallel and serial phases
* Fix docker releases
There was an issue starting the Docker containers due to how sbt native
packager works. We were assuming a "daemon" user to run the containers under;
at some point this changed to "1001:0". As a result, there were not sufficient
privileges to start the containers, because the "daemon" user was invalid or did
not have access to the scripts created by sbt native packager.
* `build.sbt` - update the user to "1001:0" for our custom install. Cleaned up the hardcoded references in the script extras to `/opt/docker` to use the variable `app_home` instead.
* `plugins.sbt` - updated to the latest sbt native packager
* Add MySqlRecordSetRepository
* Updated docker for MySQL to use `general_log` for fun SQL debug times
* Made sure to use `rewriteBatchedStatements` to achieve new heights for bulk inserts (see the sketch after this list)
* `MySqlDataStoreProvider` support for the record set repo
* Externalize mysql.
* Updates based on feedback (pauljamescleary).
* Update publish settings for mysql.
* WIP
* Updates based on feedback (rebstar6).
* Update reference to MySQL.
* Use config file for API integration tests.
* Fixed scalikejdbc version.
* Add back application.conf
* Be more specific with MySQL settings.
* Update test config for MySQL module.
* Updates based on feedback (rebstar6).
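On the `rewriteBatchedStatements` point: that flag on the MySQL JDBC URL lets the driver collapse batched inserts into multi-row statements. A sketch with scalikejdbc (connection details and table are illustrative):
```scala
import scalikejdbc._

// Hypothetical connection setup; the JDBC URL flag is what makes
// batched inserts fast against MySQL
ConnectionPool.singleton(
  "jdbc:mysql://localhost:3306/vinyldns?rewriteBatchedStatements=true",
  "user",
  "pass"
)

val rows: Seq[Seq[Any]] = Seq(Seq("rs1", "zone1"), Seq("rs2", "zone1"))

DB.localTx { implicit session =>
  // With the flag on, the driver sends one multi-row INSERT instead of N round trips
  sql"INSERT INTO recordset (name, zone_id) VALUES (?, ?)".batch(rows: _*).apply()
}
```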
Using the `all` command in sbt allows us to run certain tasks in parallel.
Not everything can be done in parallel, so we have to use judgment here;
some things like dockerCompose cannot be parallelized.
With these adjustments, local `;validate;verify` went from 12 minutes to 9
minutes, a savings of 3 full minutes!
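For example, a command alias that fans tasks out with `all` might look like this in `build.sbt` (alias name and task list illustrative):
```scala
// `all` lets sbt schedule the listed tasks concurrently instead of sequentially
addCommandAlias("validate", "; all compile test:compile it:compile")
```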
**build.sbt**
Most changes are here.
Allow parallelExecution by default, so that test suites run in parallel
(the Play framework turns this off by default). Some of the changes made
in here around the `InMemoryBatchChangeRepository` were necessary when I
flipped this on.
We cannot run in parallel by default for IntegrationTest, because several
of the api integration tests use the same zone repo and wind up stomping
on each other.
Added a `killDocker` task that is much faster to run than `dockerComposeStop`
Enabled parallelExecution in IntegrationTest in dynamodb. The integration tests
do not conflict with each other here.
Changed the command aliases to use parallel `all`
**InMemoryBatchChangeRepository**
This used to be a singleton, which prevented running unit tests in parallel.
Made it a class, and updated unit tests to use the class instead.
**logback-test.xml**
We were still logging in odd ways in places. Removed this to turn logging off
by default for tests.
**.jvmopts**
Kept running out of metaspace. Increased the memory available to help slow
that down.
* Fixing portal build
The dynamodb sub module is not automatically packaged with the
portal universal distribution.
Need to modify sbt to make sure it is.
* Setting organization on dynamodb
Since this will be published to sonatype, it has to be `io.vinyldns`
* Quiet build output
* set the traceLevel in the build to -1, which means no exceptions
will be output when running
* set the akka.logLevel in a few files to `OFF` so we don't
log anything during tests
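e.g., a sketch of the `traceLevel` setting described above, in `build.sbt`:
```scala
// -1 suppresses stack traces in build output
traceLevel in ThisBuild := -1
```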
* Quiet portal exceptions
Replace the repos in the portal with dynamodb and core
* Remove all data stores from the portal
* Use the user and user change repository from core and dynamodb
* Remove the UserAccount type, use core User instead
* Remove the UserChangeLog types, use core UserChange instead
* Clean up duplication in VinylDNS
* Moved `Module` to `modules.VinylDNSModule`. The reason is that
you cannot disable the "default" module for unit tests.
* Use mock configuration for VinylDNSSpec and FrontendControllerSpec.
The mock app configuration is what allows us to run without dynamodb
* Added a TestApplicationData trait to cut down on duplication
* IO startup for dynamodb stores (rather than unsafe throws)
* Update unit and integration tests in the dynamodb module
* Update the api module where it depends on dynamodb
* bin/release.sh script to check for required env variables, run tests, then run `sbt release`
* MAINTAINERS.md that describes steps needed to release
* implemented sbt release to run our release to docker and sonatype
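A sketch of what wiring `sbt-release` typically looks like (steps drawn from the plugin's standard vocabulary; the exact pipeline here is illustrative, not necessarily ours):
```scala
import ReleaseTransformations._

// build.sbt: an illustrative release pipeline
releaseProcess := Seq[ReleaseStep](
  checkSnapshotDependencies,
  inquireVersions,
  runClean,
  runTest,
  setReleaseVersion,
  commitReleaseVersion,
  tagRelease,
  publishArtifacts,
  setNextVersion,
  commitNextVersion,
  pushChanges
)
```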