* Updating dependencies
Updated almost all dependencies to current. There were some issues with
akka-http 10.1.11, so I stayed with 10.1.10 for the time being.
Functional tests passed locally, and a manual review of the UI looks good.
Significant changes are:
- `pureconfig` - this update had breaking syntax changes, so I had to update everywhere
we use pureconfig. Functionally it is the same, just different syntax.
- `scalatest` - this was a big change, as scalatest has refactored support for things
like Mockito and ScalaCheck into separate modules. Many imports changed.
- `Java11` - formally moved everything to Java 11. This required some new
dependencies like `javax.activation` and `java.xml.bind`
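For reference, the breaking pureconfig syntax looked roughly like this (a hedged sketch assuming the 0.12.x API; the config class and `vinyldns.api` namespace are hypothetical examples):

```scala
import pureconfig._
import pureconfig.generic.auto._

final case class ApiConfig(host: String, port: Int)

// Old (pre-0.12) syntax, since removed:
//   val cfg = pureconfig.loadConfig[ApiConfig]("vinyldns.api")

// New syntax: loading goes through ConfigSource
val cfg: Either[pureconfig.error.ConfigReaderFailures, ApiConfig] =
  ConfigSource.default.at("vinyldns.api").load[ApiConfig]
```

The behavior is unchanged; only the entry point and import for derivation moved.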
* Updating travis to JDK 11
* Finishing JDK 11 update
In order to update to JDK 11, I needed to modify several Docker configurations.
Removed a timeout test that was causing issues, as timing-sensitive tests are
unreliable when run in Travis.
A few specific build optimizations:
1. Consolidated `dockerComposeUp` to use a single `root/docker/docker-compose.yml` instead of each module having its own docker-compose file. This eliminates additional waits for docker containers to start up and shut down, and reduces memory consumption during the build
2. Cleaned up `VinylDNSSpec` - I noticed that this spec was taking 3 minutes to run! I discovered that the way we were mocking the `WSClient` was largely to blame. I was able to get the tests to run in **16 SECONDS** using a library called `mock-ws`. This is where we see most of the savings.
3. Added back `dynamodb-local` instead of running it in `localstack`. Integration tests for dynamodb were very slow in localstack. This added an additional 20-30 second improvement.
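The `mock-ws` approach in item 2 can be sketched as follows (hedged: the route and payload are hypothetical, and this assumes the `mockws` library with Play's test helpers on the classpath):

```scala
import mockws.MockWS
import mockws.MockWSHelpers._
import play.api.mvc.Results._
import play.api.test.Helpers._

// Instead of hand-mocking WSClient method by method, MockWS pattern-matches
// on (method, url) and serves a canned Play Action, so code under test
// uses it exactly like a real WSClient.
val ws = MockWS {
  case (GET, "/api/zones") => Action(Ok("""{"zones": []}"""))
}
```

A controller constructed with `ws` then exercises its real request-building code against the canned responses, without the per-call mock setup that was dominating the spec's runtime.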
After running several tests locally with the following command...
```
> SECONDS=0; sbt verify; echo "DURATION = $SECONDS SECONDS"
```
Current master took 535 seconds to run; with these optimizations it took **211 SECONDS** - that is a 60% improvement.
The initial Travis builds reported a run time of 13 minutes as opposed to 19 minutes; this would save some 6 minutes off of Travis build times (or 30% improvement).
* Add task and task handler.
* Update tests.
* Updates.
* Updates based on feedback (rebstar6).
* Update tests.
* Updates based on feedback (rebstar6).
* Add log for sync error.
* Change handleError to handleErrorWith.
* WIP
* WIP
* Use new TaskScheduler
* Fixing unit test
* Cleanup errant change
Creates a more general task scheduler. The existing user sync process had some half-generic pieces, and other pieces that were tightly coupled to the user sync process.
This is the first step toward a general-purpose task scheduler. It has been proven out in the implementation of the user sync process in #718
1. `TaskRepository` - renamed `pollingInterval` to `taskTimeout` as the value is similar to `visibilityTimeout` in SQS
2. `Task` - is an interface that needs to be implemented by future tasks. `name` is the unique name of the task; `timeout` is how long to wait to consider the last claim expired; `runEvery` is how often to attempt to run the task; `run()` is the function that actually executes the task itself.
3. `TaskScheduler` - this is the logic of scheduling. It embodies the logic of a) saving the task, b) claiming the task, c) running the task, and d) releasing the task. It uses `IO.bracket` to make sure the finalizer `releaseTask` is called no matter what the result of running the task is. It uses `fs2.Stream.awakeEvery` for polling. The expectation is that the caller will acquire the stream and do a `Stream.compile.drain.start` to kick it off running. It can be cancelled using the `Fiber` returned from `Stream.compile.drain.start`
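A minimal sketch of that scheduling loop (hedged: `claimTask` and `releaseTask` are hypothetical simplifications of the real repository operations, assuming cats-effect 2 and fs2):

```scala
import cats.effect.{IO, Timer}
import fs2.Stream
import scala.concurrent.duration.FiniteDuration

trait Task {
  def name: String
  def timeout: FiniteDuration  // how long before the last claim is considered expired
  def runEvery: FiniteDuration // polling interval
  def run(): IO[Unit]
}

object TaskScheduler {
  // claimTask / releaseTask stand in for the real repository operations
  def schedule(task: Task, claimTask: IO[Boolean], releaseTask: IO[Unit])(
      implicit timer: Timer[IO]): Stream[IO, Unit] =
    Stream.awakeEvery[IO](task.runEvery).evalMap { _ =>
      claimTask.flatMap {
        case true =>
          // bracket guarantees releaseTask runs no matter how run() ends
          IO.unit.bracket(_ => task.run())(_ => releaseTask)
        case false => IO.unit // another node currently holds the claim
      }
    }
}
```

The caller would then run `TaskScheduler.schedule(...).compile.drain.start` and keep the returned `Fiber` to cancel the polling loop.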
* Add email notifier
Provide email on batch change to the requesting user
* Test email notifier
Add unit tests for email notifier
* Address EmailNotifier comments
Add integration test for Email Notifier
Log unparseable emails
Add detail to email
Adding updates to handle large zones (> 500,000 records).
1. `APIMetrics` allows configuration-driven metrics collection. The metrics we need here are for large zones, so we have a flag to enable logging of memory usage. If `log-enabled=true` in the settings, start up a logging reporter that will write memory usage to the log file every `log-seconds` seconds.
1. `CommandHandler` - increase the visibility timeout to 1 hour. In testing with a large zone of 600,000 records, the initial zone sync process took 36 minutes. Going to 1 hour should give us the ability to handle zones a little larger than 600,000 DNS records
1. `ZoneConnectionValidator` - increasing the timeout to 60 seconds from 6 seconds, as doing a zone transfer of large zones can take 10-20 seconds
1. `DNSZoneViewLoader` - adding logging around how many raw records are loaded so we can marry raw counts to memory usage
1. `core.Instrumented` - I put the `MemoryGaugeSet` into the `core` project as I thought it would be useful for the portal as well as the API.
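The memory-usage reporter in item 1 can be wired roughly like this (a hedged sketch using Dropwizard Metrics; the logger name and interval value are assumptions standing in for the configured settings):

```scala
import java.util.concurrent.TimeUnit
import com.codahale.metrics.{MetricRegistry, Slf4jReporter}
import com.codahale.metrics.jvm.MemoryUsageGaugeSet
import org.slf4j.LoggerFactory

val registry = new MetricRegistry()
// registers heap / non-heap / pool usage gauges in one call
registry.registerAll(new MemoryUsageGaugeSet())

val logSeconds = 60L // stands in for the `log-seconds` setting
val reporter = Slf4jReporter
  .forRegistry(registry)
  .outputTo(LoggerFactory.getLogger("vinyldns.metrics"))
  .build()

// only started when log-enabled=true in configuration
reporter.start(logSeconds, TimeUnit.SECONDS)
```

This lets the raw record counts logged by `DNSZoneViewLoader` be lined up against periodic memory snapshots in the same log file.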
* Add MySqlRecordSetRepository
* Updated docker for mysql to use general_log for fun sql debug times
* Made sure to use `rewriteBatchedStatements` to achieve new heights for bulk inserts
* `MySqlDataStoreProvider` support for the record set repo
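For context, `rewriteBatchedStatements` is a MySQL Connector/J flag set on the JDBC URL; when enabled, the driver rewrites a batch of single-row INSERTs into a handful of multi-row INSERTs. A sketch (host and database name are hypothetical):

```scala
// Without the flag, a 1,000-element batch issues 1,000 round trips;
// with it, Connector/J collapses them into multi-row INSERT statements.
val jdbcUrl =
  "jdbc:mysql://localhost:3306/vinyldns?rewriteBatchedStatements=true"
```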
Needed to add implicit `ContextShift` whenever we use `par` features in the codebase.
Needed to add implicit `Timer` whenever we need to use scheduling or races.
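The two implicits can be illustrated with a minimal sketch (assuming cats-effect 2, where `ContextShift` and `Timer` are explicit implicits; in cats-effect 3 both were folded into the runtime):

```scala
import cats.effect.{ContextShift, IO, Timer}
import cats.implicits._
import scala.concurrent.ExecutionContext
import scala.concurrent.duration._

// par* operations (parSequence, parTraverse, parMapN) need ContextShift[IO]
implicit val cs: ContextShift[IO] = IO.contextShift(ExecutionContext.global)
// sleeping, timeouts, and races need Timer[IO]
implicit val timer: Timer[IO] = IO.timer(ExecutionContext.global)

val inParallel: IO[List[Int]] = List(IO(1), IO(2), IO(3)).parSequence
val raced: IO[Either[Unit, Int]] = IO.race(IO.sleep(1.second), IO.pure(42))
```

Without the implicits in scope, uses of `par*` or `IO.race` fail to compile, which is why they had to be threaded through wherever those features appear.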
* Add JDK 9+ support
* Update sbt-assembly to support JDK 11
* Update aws-sdk to support JDK 11
* Deduplicate jaxb module-info.
* Add jaxb-core, which jaxb-impl depends on
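The `module-info` deduplication is the standard sbt-assembly merge-strategy fix; a build.sbt sketch (hedged, the exact strategy in the build may differ):

```scala
// build.sbt: on JDK 9+, the jaxb jars each ship a module-info.class, which
// sbt-assembly flags as a conflict; discarding it is safe for a fat jar.
assemblyMergeStrategy in assembly := {
  case "module-info.class" => MergeStrategy.discard
  case other =>
    val oldStrategy = (assemblyMergeStrategy in assembly).value
    oldStrategy(other)
}
```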
* Externalize mysql.
* Updates based on feedback (pauljamescleary).
* Update publish settings for mysql.
* WIP
* Updates based on feedback (rebstar6).
* Update reference to MySQL.
* Use config file for API integration tests.
* Fixed scalikejdbc version.
* Add back application.conf
* Be more specific with MySQL settings.
* Update test config for MySQL module.
* Updates based on feedback (rebstar6).
The root cause for the authentication error is that the portal
was not decrypting the user secret key before signing requests.
This is solved via the following:
1. Update VinylDNS controller to decrypt user secret when needed
1. Make sure that the `encrypt-user-secrets` feature flag is `on`
in the API reference.conf. This was why local testing did not hit the
same issue we saw in the development environment: because the flag was
false, test users' secrets were not encrypted.
* `portal application.conf` - set the crypto to match the API
* `Dependencies.scala` - eliminate some duplication of dependencies
* `api reference.conf` - set the encrypt-user-secrets flag to true
* `TestApplicationData.scala` - modify the mock play app to have a
CryptoAlgebra binding
* `VinylDNS` - add secret decryption in getUserCreds and processCSV
* `VinylDNSModule` - add binding for CryptoAlgebra for dependency
injection.
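The shape of the fix can be sketched as follows (all types here are hypothetical simplifications for illustration; the real `CryptoAlgebra` and credential types live in the vinyldns codebase):

```scala
// Hypothetical shapes for illustration only
trait CryptoAlgebra {
  def decrypt(value: String): String
}

final case class User(accessKey: String, secretKey: String)
final case class Credentials(accessKey: String, secret: String)

// The bug: the stored (encrypted) secret was used to sign requests.
// The fix: decrypt it before handing it to the signer.
def getUserCreds(user: User, crypto: CryptoAlgebra): Credentials =
  Credentials(user.accessKey, crypto.decrypt(user.secretKey))
```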
Replace the repos in the portal with dynamodb and core
* Remove all data stores from the portal
* Use the user and user change repository from core and dynamodb
* Remove the UserAccount type, use core User instead
* Remove the UserChangeLog types, use core UserChange instead
* Clean up duplication in VinylDNS
* Moved `Module` to `modules.VinylDNSModule`. The reason is that
you cannot disable the "default" module for unit tests.
* Use mock configuration for VinylDNSSpec and FrontendControllerSpec.
The mock app configuration is what allows us to run without dynamodb
* Added a TestApplicationData trait to cut down on duplication
* IO startup for dynamodb stores (rather than unsafe throws)
* Update unit and integration tests in the dynamodb module
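Moving the bindings out of the default `Module` enables the mock test wiring described above; a sketch using Play's `GuiceApplicationBuilder` (the local `VinylDNSModule` class is a stand-in for `modules.VinylDNSModule`):

```scala
import play.api.inject.guice.GuiceApplicationBuilder
import play.api.inject.Module
import play.api.{Configuration, Environment}

// stand-in for modules.VinylDNSModule
class VinylDNSModule extends Module {
  def bindings(env: Environment, conf: Configuration) = Seq.empty
}

// Possible only because the module is named, not Play's default `Module`,
// which is always loaded and cannot be disabled in tests.
val testApp = new GuiceApplicationBuilder()
  .disable[VinylDNSModule]
  .build()
```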
* Update the api module where it depends on dynamodb
* config file updates for mysql loading
* dynamic loading for mysql
* IT test changes for dynamic load
* rebase fixes
* move settings to own file
* conf cleanup
* missing headers
* cleanup, some testing
* pureconfig cats load
* error message fix
* Modified JsonValidations to use cats instead of scalaz
* Modified the Json protocols accordingly
* Replaced scalaz ValidationNel with cats ValidatedNel
* Replaced all Disjunctions with Either
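The scalaz-to-cats translation follows a mechanical pattern; a sketch of the two main substitutions (the example validations are hypothetical):

```scala
import cats.data.ValidatedNel
import cats.implicits._

// scalaz ValidationNel[E, A]  ->  cats ValidatedNel[E, A]
def validateName(s: String): ValidatedNel[String, String] =
  if (s.nonEmpty) s.validNel else "name must not be empty".invalidNel

// scalaz disjunction (E \/ A)  ->  plain Either[E, A]
def parsePort(s: String): Either[String, Int] =
  Either.catchNonFatal(s.toInt).leftMap(_ => s"invalid port: $s")
```

`ValidatedNel` accumulates errors as the scalaz type did, while the disjunctions map onto the standard library's right-biased `Either` with cats syntax filling in helpers like `leftMap`.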