We used to rely on `tut` for docs; however, it is deprecated and is being replaced by `mdoc`.
Moved to an `mdoc` folder structure and updated all of the links (what a pain).
This is a rather big change. There are a few significant issues with the way that the API config is presently loaded:
1. We use effectively global variables throughout the system, which is a bad practice in general
2. We have inconsistent loading of configuration values, some used at boot up, some used elsewhere
In addition, we get sporadic build failures due to how these "global config" values are loaded; the failures depend on timing and parallelism and are impossible to reproduce.
This PR addresses these issues:
1. Create a `VinylDNSConfig` that loads all configuration in one place
2. Create custom `ConfigReader` implementations that read config values (ideally we would have used pureconfig from the start to automatically read sane config values but here we are)
3. Segment config into different case classes. The groupings are not totally arbitrary, but I did my best at logical groupings of settings
4. Inject configuration elements (either via class constructors or function arguments) at the appropriate time.
Functionally, nothing has changed, other than putting some standards around config loading; a rough sketch of the pattern follows.
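As an illustration (the case class, config keys, and loader below are hypothetical, not the actual VinylDNS settings):
```
import cats.effect.IO
import com.typesafe.config.{Config, ConfigFactory}

// Hypothetical settings group; the real groupings cover queues, backends, batch changes, etc.
final case class HttpConfig(port: Int, maxContentLength: Int)

object HttpConfig {
  // a small, explicit reader for this group of settings
  def load(config: Config): IO[HttpConfig] =
    IO {
      HttpConfig(
        port = config.getInt("port"),
        maxContentLength = config.getInt("max-content-length")
      )
    }
}

object VinylDNSConfig {
  // everything is loaded here, once, and then passed in explicitly
  def load(config: Config = ConfigFactory.load()): IO[HttpConfig] =
    HttpConfig.load(config.getConfig("vinyldns.http"))
}
```
The point is simply that each settings group is read eagerly in one place and then injected, rather than read lazily from global config at runtime.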
May fix #1010
Introduces the concept of a `Backend` into VinylDNS. This will allow support for any DNS backend in the future, for example AWS Route 53. This is consistent with the other "provider" mechanisms for dynamic loading of classes (Notifier, Repository, Queue, etc.)
The initial implementation builds on what we already have: when creating a zone, one can choose a `backendId` that is configured in `application.conf`. If no `backendId` is specified, we attempt to map the zone as we do today, so the functionality is exactly the same.
We expand on that by allowing one to map a `backendId` to a different provider (like AWS).
After this PR:
1. If someone specifies a zone connection on a zone, it will work exactly like it does today, namely going through the `DnsBackend` to connect.
2. If someone specifies a `backendId` when setting up a zone, the naive mapping will take place to map that zone to the `Backend` implementation that is configured with that `backendId`. For example, if you have configured a backend id `aws` that connects to Route 53, and you specify `aws` when connecting the zone, it will connect to it in Route 53. **Note: we still do not support zone creation, but that is much, much closer to reality with this PR**
3. If someone specifies NEITHER, the `defaultBackendId` will be used, which could be on any one of the backend providers configured.
To start, there is a new `vinyldns.core.domain.backend` package that contains the main classes for the system. In there you will find the following (a rough sketch of how the pieces fit together appears after this list):
- `BackendProvider` - this is to be implemented by each provider. Adds a means of pre-loading zones, and providing connections to zones.
- `Backend` - provides connectivity to a particular backend instance, for example a particular DNS authoritative server. This is where the real work of interacting with the backend happens; for example, `DnsConnection` implements this to send DDNS messages to the DNS system. Consider this the main thing to implement, where the rubber meets the road.
- `BackendProviderLoader` - to be implemented by each provider; knows how to load its single `BackendProvider` instance, as well as possibly pre-loading configured `Backends` or doing anything else it needs to do to get ready. It provides a dynamic hook via the `def load` method that is called by the `BackendLoader` to load a specific `Backend`
- `BackendResolver` - the main, default resolver. It holds all `BackendProvider` instances loaded via the `BackendLoader` and currently provides a naive lookup mechanism to find `Backend`s. Really, this is more of a router or resolver; in the future it could use more advanced techniques for finding connections.
- `BackendConfigs` - used by the `BackendRegistry` as the entrypoint into configuration for all backends
- `BackendProviderConfig` - a single backend provider configuration; specifies a `className` that should be the `BackendProviderLoader` implementation to be loaded, and `settings` that are passed to the `BackendProvider` so it can load itself. This is consistent with other providers.
- `BackendResponse` - uniform responses across all providers to the rest of the VinylDNS system
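A rough sketch of how these pieces might fit together, with deliberately simplified signatures (the core domain imports are assumed and the real traits carry more methods):
```
import cats.effect.IO
import com.typesafe.config.Config
import vinyldns.core.domain.record.{RecordSet, RecordSetChange} // assumed core types
import vinyldns.core.domain.zone.Zone                           // assumed core type

// a single backend provider configuration entry, as described above
final case class BackendProviderConfig(className: String, settings: Config)

sealed trait BackendResponse // uniform response type across providers

trait Backend {
  def id: String
  def zoneExists(zone: Zone): IO[Boolean]
  def loadZone(zone: Zone): IO[List[RecordSet]]
  def applyChange(change: RecordSetChange): IO[BackendResponse]
}

trait BackendProvider {
  def ids: List[String]                    // backend ids this provider serves
  def connect(zone: Zone): Option[Backend] // "can anyone connect to this zone?"
}

trait BackendProviderLoader {
  def load(config: BackendProviderConfig): IO[BackendProvider]
}

// naive scan across all loaded providers, as described above
class BackendResolver(providers: List[BackendProvider]) {
  def resolve(zone: Zone): Option[Backend] =
    providers.flatMap(_.connect(zone)).headOption
}
```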
**Workflow**
During initialization of the system:
1. The `BackendResolver` loads the `BackendConfigs` from the application configuration. This contains configuration for ALL backends
2. The `BackendResolver` utilizes the `BackendLoader` to dynamically load each backend individually. If any backend cannot be loaded, it will fail.
3. The `BackendLoader` creates a new instance of the `className` for each `BackendConfig`; this points to the `BackendProviderLoader` implementation, which takes care of loading the specific `BackendProvider` given the configuration
4. The `BackendProviderLoader` does any initialization necessary to ensure it is ready. In the case of `Route53`, it will pre-load and cache all hosted zones that are available for the configured AWS account; a single `Route53Backend` is set up right now. For `DnsBackend`, a connection (server, port, TSIG key) is set up for each DNS authoritative system to integrate with.
During runtime of the system:
1. When anything is needed, the `BackendResolver` is consulted to determine how to look up the `Backend` that is needed. Right now this is done by naively scanning all of the `BackendProvider` instances it holds, asking "can anyone connect to this zone?". More intelligent discovery rules can be added in the future (see the sketch after this list)
2. Once a `Backend` is obtained, any operation can be performed:
1. `ZoneConnectionValidator` uses `zoneExists` and `loadZone` to validate a zone is usable by VinylDNS
2. `RecordSetChangeHandler` uses `resolve` and `applyChange` to apply changes to the DNS backend
3. `ZoneSyncHandler` and `DnsZoneViewLoader` use `loadZone` in order to load records into VinylDNS
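Continuing the simplified sketch above, a caller such as `RecordSetChangeHandler` would do something along these lines (names and error handling are illustrative only):
```
import cats.effect.IO

object BackendUsageExample {
  // types here are the simplified shapes from the earlier sketch
  def applyToDns(resolver: BackendResolver, change: RecordSetChange): IO[BackendResponse] =
    resolver.resolve(change.zone) match {
      case Some(backend) => backend.applyChange(change)
      case None =>
        IO.raiseError(new IllegalStateException(s"No backend found for zone ${change.zone.name}"))
    }
}
```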
**What else is here**
- Provided an implementation of a backend provider for DNS via `Backend`
- Updated all of VinylDNS to use `Backends` instead of being hard-coded to DNS
- Provided an implementation of a backend provider for AWS Route 53 as an example to follow for other providers
**Example configuration**
```
vinyldns {
  backend {
    default-backend-id = "r53"
    backend-providers = [
      {
        class-name = "vinyldns.route53.backend.Route53BackendProviderLoader"
        settings = {
          backends = [
            {
              id = "test"
              access-key = "vinyldnsTest"
              secret-key = "notNeededForSnsLocal"
              service-endpoint = "http://127.0.0.1:19009"
              signing-region = "us-east-1"
            }
          ]
        }
      }
    ]
  }
}
```
* Updating dependencies
Updated almost all dependencies to current versions. There were some issues with
akka-http 10.1.11, so I stayed with 10.1.10 for the time being.
Functional tests passed locally, and a manual review of the UI looks good.
Significant changes are:
- `pureconfig` - this update had breaking syntax changes, so I had to update everywhere
we use pureconfig. Functionally it is the same, just different syntax (illustrated after this list)
- `scalatest` - this was a big change, as scalatest has factored out integrations such as
Mockito and ScalaCheck. Many imports changed.
- `Java11` - formally moved everything to Java 11. This required some new
dependencies like `javax.activation` and `java.xml.bind`
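As an illustration of the kind of syntax change involved (the settings class and config path here are made up), recent pureconfig releases moved from `loadConfig` to `ConfigSource`:
```
import pureconfig.ConfigSource
import pureconfig.error.ConfigReaderFailures
import pureconfig.generic.auto._

object PureconfigExample {
  // hypothetical settings class, purely for illustration
  final case class QueueConfig(className: String, maxRetries: Int)

  // before the upgrade (older pureconfig):
  //   pureconfig.loadConfig[QueueConfig]("vinyldns.queue")
  // after the upgrade, loading goes through ConfigSource:
  val queueConfig: Either[ConfigReaderFailures, QueueConfig] =
    ConfigSource.default.at("vinyldns.queue").load[QueueConfig]
}
```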
* Updating travis to JDK 11
* Finishing JDK 11 update
In order to update to JDK 11, I needed to modify several Docker-related things.
Removed a timeout test that was causing issues, as timeout tests here are not well suited
to running in Travis.
Updated release process:
- `bin/release.sh` - added checks so we can only release from master, and can only release from upstream
- `build.sbt` - removed sbt publishing of docker images, we will now use `build/docker-release.sh` for that release
- `build/release.sh` - renamed to `build/docker-release.sh`
- `build/docker-release.sh` - added a version override to make it simple to force a version
A few specific build optimizations:
1. Consolidated `dockerComposeUp` to only use a single `root/docker/docker-compose.yml` instead of each module having its own docker-compose files. This eliminates additional waits for docker containers to startup and stop, as well as reduces memory consumption during the build
2. Cleaned up `VinylDNSSpec` - I noticed that this spec was taking 3 minutes to run! I discovered that the way we were mocking the `WSClient` was largely to blame. I was able to get the tests to run in **16 SECONDS** using a library called `mock-ws`. This is where we see most of the savings.
3. Added back `dynamodb-local` instead of running it in `localstack`. Integration tests for dynamodb were very slow in localstack. This added an additional 20-30 second improvement.
After doing several tests locally running the following command...
```
> SECONDS=0; sbt verify; echo "DURATION = $SECONDS SECONDS"
```
Current master took 535 seconds to run; with these optimizations it took **211 SECONDS** - that is a 60% improvement.
The initial Travis builds reported a run time of 13 minutes as opposed to 19 minutes; this would save some 6 minutes off of Travis build times (or 30% improvement).
* Fix docker releases
There was an issue starting the Docker containers due to how sbt native packager works.
The issue was that we were assuming a "daemon" user to run the containers under.
At some point this changed to "1001:0". As a result, there were not sufficient
privileges to start the containers because the "daemon" user was invalid or did
not have access to the scripts created by sbt native packager.
* `build.sbt` - update the user to "1001:0" for our custom install. Cleaned up the hardcoded `/opt/docker` references in the script extras to use the variable `app_home` instead.
* `plugins.sbt` - updated to the latest sbt native packager
* Add task and task handler.
* Update tests.
* Updates.
* Updates based on feedback (rebstar6).
* Update tests.
* Updates based on feedback (rebstar6).
* Add log for sync error.
* Change handleError to handleErrorWith.
* WIP
* WIP
* Use new TaskScheduler
* Fixing unit test
* Cleanup errant change
Creates a more general task scheduler. The existing user sync process had some half-generic pieces and other pieces that were tightly coupled to the user sync process.
This is the first step at making a general purpose task scheduler. This has been proven out in the implementation of the user sync process in #718
1. `TaskRepository` - renamed `pollingInterval` to `taskTimeout` as the value is similar to `visibilityTimeout` in SQS
2. `Task` - is an interface that needs to be implemented by future tasks. `name` is the unique name of the task; `timeout` is how long to wait to consider the last claim expired; `runEvery` is how often to attempt to run the task; `run()` is the function that actually executes the task itself.
3. `TaskScheduler` - this is the logic of scheduling. It embodies the logic of a) saving the task, b) claiming the task, c) running the task, and d) releasing the task. It uses `IO.bracket` to make sure the finalizer `releaseTask` is called no matter what the result of running the task is. It uses `fs2.Stream.awakeEvery` for polling. The expectation is that the caller will acquire the stream and do a `Stream.compile.drain.start` to kick it off; it can be cancelled using the `Fiber` returned from `Stream.compile.drain.start` (a sketch follows).
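A minimal sketch of the idea, assuming simplified `Task` and `TaskRepository` shapes (the real interfaces differ):
```
import cats.effect.{IO, Timer}
import fs2.Stream
import scala.concurrent.duration.FiniteDuration

trait Task {
  def name: String             // unique name of the task
  def timeout: FiniteDuration  // how long before a claim is considered expired
  def runEvery: FiniteDuration // how often to attempt to run
  def run(): IO[Unit]          // the work itself
}

trait TaskRepository {
  def claimTask(name: String, taskTimeout: FiniteDuration): IO[Boolean]
  def releaseTask(name: String): IO[Unit]
}

object TaskScheduler {
  // poll on a fixed interval; on each tick try to claim the task, run it if the
  // claim succeeded, and always release via the bracket finalizer
  def schedule(task: Task, repo: TaskRepository)(implicit timer: Timer[IO]): Stream[IO, Unit] =
    Stream.awakeEvery[IO](task.runEvery).evalMap { _ =>
      repo.claimTask(task.name, task.timeout).bracket {
        case true  => task.run() // we hold the claim, do the work
        case false => IO.unit    // someone else holds the claim, skip this tick
      } { claimed => if (claimed) repo.releaseTask(task.name) else IO.unit }
    }
}

// the caller kicks it off (with a ContextShift in scope) via something like:
//   TaskScheduler.schedule(task, repo).compile.drain.start  // IO[Fiber[IO, Unit]]
// and cancels it through the returned Fiber
```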
* Add email notifier
Provide email on batch change to the requesting user
* Test email notifier
Add unit tests for email notifier
* Address EmailNotifier comments
Add integration test for Email Notifier
Log unparseable emails
Add detail to email
Adding updates to handle large zones (> 500,000 records).
1. `APIMetrics` allows configuration-driven metrics collection. The metrics we need here are for large zones, so we have a flag to enable logging of memory usage. If `log-enabled=true` in the settings, start up a logging reporter that will write memory usage to the log file every `log-seconds` seconds (a sketch follows this list).
1. `CommandHandler` - increase the visibility timeout to 1 hour. In testing with a large zone of 600,000 records, the initial zone sync process took 36 minutes. Going to 1 hour should give us the ability to handle zones a little larger than 600,000 DNS records
1. `ZoneConnectionValidator` - increasing the timeout to 60 seconds from 6 seconds, as doing a zone transfer of large zones can take 10-20 seconds
1. `DNSZoneViewLoader` - adding logging around how many raw records are loaded so we can marry raw counts to memory usage
1. `core.Instrumented` - I put the `MemoryGaugeSet` into the `core` project as I thought it would be useful for the portal as well as the API.
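For context, a log-based memory reporter wired this way looks roughly like the following. This illustration uses Dropwizard's built-in `MemoryUsageGaugeSet` rather than the actual VinylDNS `MemoryGaugeSet`, and the logger name is arbitrary:
```
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{MetricRegistry, Slf4jReporter}
import com.codahale.metrics.jvm.MemoryUsageGaugeSet
import org.slf4j.LoggerFactory

object MemoryLoggingExample {
  // register JVM memory gauges and report them to the log every `logSeconds` seconds
  def start(registry: MetricRegistry, logSeconds: Long): Unit = {
    registry.registerAll(new MemoryUsageGaugeSet())
    Slf4jReporter
      .forRegistry(registry)
      .outputTo(LoggerFactory.getLogger("vinyldns.api.metrics"))
      .build()
      .start(logSeconds, TimeUnit.SECONDS)
  }
}
```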
* Add MySqlRecordSetRepository
* Updated docker for mysql to use `general_log` for fun SQL debug times
* Made sure to use `rewriteBatchedStatements` to achieve new heights for bulk inserts
* `MySqlDataStoreProvider` support for the record set repo
Needed to add implicit `ContextShift` whenever we use `par` features in the codebase.
Needed to add implicit `Timer` whenever we need to use scheduling or races.
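For example (cats-effect 2 style; the values below are only to show where the implicits are required):
```
import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

object ImplicitsExample {
  implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
  implicit val timer: Timer[IO]     = IO.timer(ExecutionContext.global)

  // `par` features such as parSequence need the ContextShift in scope
  val inParallel: IO[List[Int]] = List(IO(1), IO(2), IO(3)).parSequence

  // scheduling and races need the Timer (race also uses the ContextShift)
  val raced: IO[Either[Unit, Int]] = IO.race(IO.sleep(1.second), IO(42))
}
```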
* Add JDK 9+ support
* Update sbt-assembly to support JDK 11
* Update aws-sdk to support JDK 11
* Deduplicate jaxb module-info.
* Add jaxb-core, which jaxb-impl depends on
Create a MySQL Message Queue implementation.
* Created a `MySqlMessageQueue` in the mysql sub module
* Created a `MySqlMessage` that implements `CommandMessage` from core
* Created a `MessageType` enum to determine which type of command is on the message
* Created a `MySqlMessageQueueIntegrationSpec` which exercises 100% code coverage on the queue, including hidden behavior in the SQL
* Externalize mysql.
* Updates based on feedback (pauljamescleary).
* Update publish settings for mysql.
* WIP
* Updates based on feedback (rebstar6).
* Update reference to MySQL.
* Use config file for API integration tests.
* Fixed scalikejdbc version.
* Add back application.conf
* Be more specific with MySQL settings.
* Update test config for MySQL module.
* Updates based on feedback (rebstar6).
The root cause for the authentication error is that the portal
was not decrypting the user secret key before signing requests.
This is solved via the following:
1. Update VinylDNS controller to decrypt user secret when needed
1. Make sure that the `encrypt-user-secrets` feature flag is `on`
in the API reference.conf. This is why, in local testing, we
did not hit the same issue that we saw in the development environment:
because the flag was false, test users' secrets were not encrypted.
* `portal application.conf` - set the crypto to match the API
* `Dependencies.scala` - eliminate some duplication of dependencies
* `api reference.conf` - set the encrypt-user-secrets flag to true
* `TestApplicationData.scala` - modify the mock play app to have a
CryptoAlgebra binding
* `VinylDNS` - add secret decryption in getUserCreds and processCSV (sketched below)
* `VinylDNSModule` - add binding for CryptoAlgebra for dependency
injection.
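A rough sketch of the decryption step; the types and method names below are assumptions for illustration, not the actual controller code:
```
import vinyldns.core.crypto.CryptoAlgebra // assumed location of the crypto abstraction

object CredsExample {
  final case class UserCreds(accessKey: String, secretKey: String) // illustrative only

  def getUserCreds(accessKey: String, encryptedSecret: String, crypto: CryptoAlgebra): UserCreds =
    // previously the encrypted value flowed straight into request signing;
    // decrypting it first is what resolves the authentication error
    UserCreds(accessKey, crypto.decrypt(encryptedSecret))
}
```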
Replace the repos in the portal with dynamodb and core
* Remove all data stores from the portal
* Use the user and user change repository from core and dynamodb
* Remove the UserAccount type, use core User instead
* Remove the UserChangeLog types, use core UserChange instead
* Clean up duplication in VinylDNS
* Moved `Module` to `modules.VinylDNSModule`. The reason is that
you cannot disable the "default" module for unit tests.
* Use mock configuration for VinylDNSSpec and FrontendControllerSpec.
The mock app configuration is what allows us to run without dynamodb
* Added a TestApplicationData trait to cut down on duplication
* IO startup for dynamodb stores (rather than unsafe throws)
* Update unit and integration tests in the dynamodb module
* update the api module where it depends on dynamodb
* bin/release.sh script to check for required env variables, run tests, then run `sbt release`
* MAINTAINERS.md that describes steps needed to release
* implemented sbt release to run our release to docker and sonatype
* config file updates for mysql loading
* dynamic loading for mysql
* IT test changes for dynamic load
* rebase fixes
* move settings to own file
* conf cleanup
* missing headers
* cleanup, some testing
* pureconfig cats load
* error message fix
The sbt protobuf plugin we were using forced developers to install
protoc version 2.6.1 on their local machines. That made it difficult
to onboard new developers.
sbt-protoc is an alternative plugin. It has a lot of features we
are presently not using. The biggest feature it brings is that it does
not require developers to install protoc.
* build.sbt - use the new protoc plugin (sketched after this list)
* VinylDNSProto.proto - add `syntax = "proto2"` to ensure it compiles to a
compatible protobuf version
* plugins.sbt - remove the old protobuf plugin
* protoc.sbt - add the new protoc plugin
* Removing ProtocPlugin as it is an auto-plugin
* Remove unneeded protoc lib dependency
* Update docs to remove protoc requirement
* Remove protobuf install from travis
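For reference, the sbt-protoc wiring looks roughly like this; the plugin version is a placeholder and the exact settings may differ from what landed here:
```
// project/protoc.sbt - placeholder version, check the plugin docs for the current one
addSbtPlugin("com.thesamet" % "sbt-protoc" % "0.99.25")

// build.sbt - generate Java sources from the .proto files; sbt-protoc fetches
// protoc itself, so developers no longer need a local install
PB.targets in Compile := Seq(
  PB.gens.java -> (sourceManaged in Compile).value
)
```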
* Modified JsonValidations to use cats instead of scalaz
* Modified the Json protocols accordingly
* Replaced scalaz ValidationNel with cats ValidatedNel
* Replaced all Disjunctions with Either (see the sketch below)
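The shape of that migration, with made-up example functions:
```
import cats.data.ValidatedNel
import cats.syntax.validated._

object ValidationExample {
  // scalaz: ValidationNel[String, Int] built with .successNel / .failureNel
  // cats:   ValidatedNel[String, Int] built with .validNel / .invalidNel
  def positive(n: Int): ValidatedNel[String, Int] =
    if (n > 0) n.validNel else s"$n must be positive".invalidNel

  // scalaz disjunction (\/) becomes the standard library Either
  def parsePort(s: String): Either[String, Int] =
    try Right(s.toInt)
    catch { case _: NumberFormatException => Left(s"$s is not a valid port") }
}
```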