mirror of https://github.com/VinylDNS/vinyldns synced 2025-08-22 10:10:12 +00:00
- Fix issues with `SSHFP` record type
- Add `sbt.sh` helper script
- Update configuration file (`application.conf`) for consistency
- Update documentation
Emerle, Ryan 2021-12-03 12:16:21 -05:00
parent e1743e5342
commit c3d4e16da4
No known key found for this signature in database
GPG Key ID: C0D34C592AED41CE
36 changed files with 312 additions and 282 deletions


@@ -22,7 +22,7 @@ jobs:
         fetch-depth: 0
     - name: Build and Test
-      run: cd build/ && ./assemble_api_jar.sh && ./run_all_tests.sh
+      run: cd build/ && ./assemble_api.sh && ./run_all_tests.sh
       shell: bash
     - name: Codecov


@@ -206,60 +206,78 @@ You should now be able to see the zone in the portal at localhost:9001 when logg
 ### Unit Tests
-1. First, start up your Scala build tool: `sbt`. Running *clean* immediately after starting is recommended.
+1. First, start up your Scala build tool: `build/sbt.sh` (or `sbt` if running outside of Docker).
-1. (Optionally) Go to the project you want to work on, for example `project api` for the API; `project portal` for the
+2. (Optionally) Go to the project you want to work on, for example `project api` for the API; `project portal` for the
    portal.
-1. Run _all_ unit tests by just running `test`.
+3. Run _all_ unit tests by just running `test`.
-1. Run an individual unit test by running `testOnly *MySpec`.
+4. Run a single unit test suite by running `testOnly *MySpec`.
+5. Run a single unit test by filtering on the test name with the `-z` argument: `testOnly *MySpec -- -z "some text from test"`.
+   - [More information on commandline arguments](https://www.scalatest.org/user_guide/using_the_runner)
-1. If you are working on a unit test and production code at the same time, use `~` (e.g., `~testOnly *MySpec`) to
+6. If you are working on a unit test and production code at the same time, use `~` (e.g., `~testOnly *MySpec`) to
    automatically background compile for you!
 ### Integration Tests
-Integration tests are used to test integration with _real_ dependent services. We use Docker to spin up those backend
-services for integration test development.
+Integration tests are used to test integration with dependent services. We use Docker to spin up those backend services
+for integration test development.
-1. Type `dockerComposeUp` to start up dependent background services
+1. Type `quickstart/quickstart-vinyldns.sh --reset --deps-only` to start up dependent background services
+1. Run sbt (`build/sbt.sh` or `sbt` locally)
 1. Go to the target module in sbt, example: `project api`
 1. Run all integration tests by typing `it:test`.
 1. Run an individual integration test by typing `it:testOnly *MyIntegrationSpec`
 1. You can background compile as well if working on a single spec by using `~it:testOnly *MyIntegrationSpec`
-1. You must stop (`dockerComposeStop`) and start (`dockerComposeUp`) the dependent services from the root
-   project (`project root`) before you rerun the tests.
+1. You must restart the dependent services (`quickstart/quickstart-vinyldns.sh --reset --deps-only`) before you rerun
+   the tests.
 1. For the mysql module, you may need to wait up to 30 seconds after starting the services before running the tests for
    setup to complete.
 #### Running both
-You can run all unit and integration tests for the api and portal by running `sbt verify`
+You can run all unit and integration tests for the api and portal by running `build/verify.sh`
 ### Functional Tests
 When adding new features, you will often need to write new functional tests that black box / regression test the API.
-- The API functional tests are written in Python and live under `test/api/functional`.
+- The API functional tests are written in Python and live under `modules/api/src/test/functional`.
-- The Portal functional tests are written in JavaScript and live under `test/portal/functional`.
+- The Portal functional tests are written in JavaScript and live under `modules/portal/test`.
 #### Running Functional Tests
-To run functional tests you can simply execute the following command:
+To run functional tests you can simply execute the following commands:
 ```
+build/func-test-api.sh
+build/func-test-portal.sh
+```
+These commands will run the API functional tests and portal functional tests respectively.
+##### API Functional Tests
+To run functional tests you can simply execute `build/func-test-api.sh`, but if you'd like finer-grained control, you
+can work with the `Makefile` in `test/api/functional`:
+```
+cd test/api/functional
 make build && make run
 ```
-During iterative test development, you can use `make run-local` which will mount the current functional tests in the
-container, allowing for easier test development.
+During iterative test development, you can use `make run-local` which will bind-mount the current functional tests in
+the container, allowing for easier test development.
 Additionally, you can pass `--interactive` to `make run` or `make run-local` to drop to a shell inside the container.
 From there you can run tests with the `/functional_test/run.sh` command. This allows for finer-grained control over the
 test execution process as well as easier inspection of logs.
-##### API Functional Tests
 You can run a specific test by name by running `make run -- -k <name of test function>`. Any arguments after
 `make run --` will be passed to the test runner [`test/api/functional/run.sh`](test/api/functional/run.sh).
+Finally, you can execute `make run-deps-bg` to start all of the dependencies for the functional tests, but not run the
+tests. This is useful if, for example, you want to use an interactive debugger on your local machine, but host all of
+the VinylDNS API dependencies in Docker.
 #### Setup
 We use [pytest](https://docs.pytest.org/en/latest/) for python tests. It is helpful that you browse the documentation so
@@ -269,15 +287,16 @@ We also use [PyHamcrest](https://pyhamcrest.readthedocs.io/en/release-1.8/) for
 tests. Please browse that documentation as well so that you are familiar with the different matchers for PyHamcrest.
 There aren't a lot, so it should be quick.
-In the `test/api/functional` directory are a few important files for you to be familiar with:
+In the `modules/api/src/test/functional` directory are a few important files for you to be familiar with:
-* `vinyl_client.py` - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
+* `vinyl_python.py` - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
   as building and executing the requests, and giving you back valid responses. For all new API endpoints, there should
   be a corresponding function in the vinyl_client
 * `utils.py` - provides general use functions that can be used anywhere in your tests. Feel free to contribute new
   functions here when you see repetition in the code
-In the `test/api/functional/tests` directory, we have directories / modules for different areas of the application.
+In the `modules/api/src/test/functional/tests` directory, we have directories / modules for different areas of the
+application.
 * `batch` - for managing batch updates
 * `internal` - for internal endpoints (not intended for public consumption)
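A new test module under these directories typically follows the pytest shape sketched below. This is a minimal illustration only: `make_sshfp_record` is a hypothetical helper for this sketch, not a function from `vinyl_python.py` or `utils.py`.

```python
# Illustrative sketch of a functional-test payload builder in the pytest
# style used by the suite; the helper and payload names are stand-ins.
def make_sshfp_record(name, algorithm, fp_type, fingerprint):
    # Mirrors the SSHFP record payload shape exercised by the functional tests.
    return {
        "name": name,
        "type": "SSHFP",
        "records": [
            {"algorithm": algorithm, "type": fp_type, "fingerprint": fingerprint}
        ],
    }


def test_sshfp_record_shape():
    record = make_sshfp_record(
        "sshfp-test", 1, 1, "123456789ABCDEF67890123456789ABCDEF67890"
    )
    assert record["type"] == "SSHFP"
    assert len(record["records"]) == 1
    assert record["records"][0]["algorithm"] == 1
```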


@@ -3,11 +3,12 @@
 This folder contains scripts for building VinylDNS and it's related artifacts.
 | Path | Description |
-| --- | --- |
+|-----------------------|-----------------------------------------------------------------------------------------|
 | `assemble_api_jar.sh` | Builds the VinylDNS API jar file. You can find the resulting `jar` file in `assembly/`. |
 | `deep_clean.sh` | Removes all of the build artifacts and all `target/` directories recursively. |
 | `func-test-api.sh` | Runs the functional tests for the API |
 | `func-test-portal.sh` | Runs the functional tests for the Portal |
 | `publish_docs.sh` | Publishes the documentation site |
 | `run_all_tests.sh` | Runs all of the tests: unit, integration, and functional |
+| `sbt.sh` | Runs `sbt` in a Docker container with the current project bind-mounted in `/build` |
 | `verify.sh` | Runs all of the unit and integration tests |


@@ -12,16 +12,16 @@ DIR=$(
 usage() {
   echo "USAGE: assemble_api.sh [options]"
-  echo -e "\t-n, --no-clean do no perform a clean before assembling the jar"
+  echo -e "\t-n, --no-cache do not use cache when building the artifact"
   echo -e "\t-u, --update update the underlying docker image"
 }
-SKIP_CLEAN=0
+NO_CACHE=0
 UPDATE_DOCKER=0
 while [[ $# -gt 0 ]]; do
   case "$1" in
-  --no-clean | -n)
-    SKIP_CLEAN=1
+  --no-cache | -n)
+    NO_CACHE=1
     shift
     ;;
   --update | -u)
@@ -35,15 +35,15 @@ while [[ $# -gt 0 ]]; do
   esac
 done
-if ! [[ $SKIP_CLEAN -eq 1 ]]; then
-  "${DIR}/deep_clean.sh"
-  rm "${DIR}/../artifacts/vinyldns-api.jar" &> /dev/null || true
+if [[ $NO_CACHE -eq 1 ]]; then
+  rm -rf "${DIR}/../artifacts/vinyldns-api.jar" &> /dev/null || true
+  docker rmi vinyldns:api-artifact &> /dev/null || true
 fi
 if [[ $UPDATE_DOCKER -eq 1 ]]; then
-  echo "Pulling latest version of 'vinyldns/build:base-test-integration'"
-  docker pull vinyldns/build:base-test-integration
+  echo "Pulling latest version of 'vinyldns/build:base-build'"
+  docker pull vinyldns/build:base-build
 fi
 echo "Building VinylDNS API artifact"
-docker run -i --rm -e RUN_SERVICES=none -v "${DIR}/..:/build" vinyldns/build:base-test-integration -- sbt 'api/assembly'
+make -C "${DIR}/docker/api" artifact


@@ -12,16 +12,16 @@ DIR=$(
 usage() {
   echo "USAGE: assemble_portal.sh [options]"
-  echo -e "\t-n, --no-clean do no perform a clean before assembling the jar"
+  echo -e "\t-n, --no-cache do not use cache when building the artifact"
   echo -e "\t-u, --update update the underlying docker image"
 }
-SKIP_CLEAN=0
+NO_CACHE=0
 UPDATE_DOCKER=0
 while [[ $# -gt 0 ]]; do
   case "$1" in
   --no-clean | -n)
-    SKIP_CLEAN=1
+    NO_CACHE=1
     shift
     ;;
   --update | -u)
@@ -35,16 +35,15 @@ while [[ $# -gt 0 ]]; do
   esac
 done
-if ! [[ $SKIP_CLEAN -eq 1 ]]; then
-  "${DIR}/deep_clean.sh"
-  rm "${DIR}/../artifacts/vinyldns-portal.zip" &> /dev/null || true
-  rm -rf "${DIR}/../artifacts/scripts" &> /dev/null || true
+if [[ $NO_CACHE -eq 1 ]]; then
+  rm -rf "${DIR}/../artifacts/vinyldns-portal.zip" &> /dev/null || true
+  docker rmi vinyldns:portal-artifact &> /dev/null || true
 fi
 if [[ $UPDATE_DOCKER -eq 1 ]]; then
-  echo "Pulling latest version of 'vinyldns/build:base-test-integration'"
-  docker pull vinyldns/build:base-test-integration
+  echo "Pulling latest version of 'vinyldns/build:base-build'"
+  docker pull vinyldns/build:base-build
 fi
 echo "Building VinylDNS Portal artifact"
-docker run -i --rm -e RUN_SERVICES=none -v "${DIR}/..:/build" vinyldns/build:base-test-integration -- sbt 'portal/dist'
+make -C "${DIR}/docker/portal" artifact


@@ -10,7 +10,6 @@ DIR=$(
 echo "Performing deep clean"
 find "${DIR}/.." -type d -name target -o -name assembly | while read -r p; do if [ -d "$p" ]; then
-  echo -n "Removing $p.."
-  rm -r "$p" || (echo -e "\e[93mError deleting $p, you may need to be root\e[0m"; exit 1)
-  echo "done."
+  echo -n "Removing $(realpath --relative-to="$DIR" "$p").." && \
+    { { rm -rf "$p" &> /dev/null && echo "done."; } || { echo -e "\e[93mERROR\e[0m: you may need to be root"; exit 1; } }
 fi; done


@@ -3,13 +3,13 @@ FROM vinyldns/build:base-build as base-build
 COPY . /build/
 WORKDIR /build
-## Run the build if we don't already have a vinyldns.jar
+## Run the build if we don't already have a vinyldns-api.jar
 RUN mkdir -p /opt/vinyldns/conf && \
     if [ -f artifacts/vinyldns-api.jar ]; then cp artifacts/vinyldns-api.jar /opt/vinyldns/; fi && \
-    if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \
+    if [ ! -f /opt/vinyldns/vinyldns-api.jar ]; then \
       env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \
       sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \
-      && cp assembly/vinyldns.jar /opt/vinyldns/; \
+      && cp artifacts/vinyldns-api.jar /opt/vinyldns/; \
     fi
 FROM adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine
@@ -36,4 +36,4 @@ ENTRYPOINT ["/bin/bash", "-c", "java ${JVM_OPTS} -Dconfig.file=/opt/vinyldns/con
   -Dlogback.configurationFile=/opt/vinyldns/conf/logback.xml \
   -Dvinyldns.version=$(cat /opt/vinyldns/version) \
   -cp /opt/vinyldns/lib_extra/* \
-  -jar /opt/vinyldns/vinyldns.jar" ]
+  -jar /opt/vinyldns/vinyldns-api.jar" ]


@@ -31,6 +31,12 @@ endif
 all: build run
+artifact:
+	@set -euo pipefail
+	cd ../../..
+	docker build $(BUILD_ARGS) --target base-build --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t "vinyldns:api-artifact" -f "$(ROOT_DIR)/Dockerfile" .
+	docker run -it --rm -v "$$(pwd)/:/output" vinyldns:api-artifact /bin/bash -c "cp /build/artifacts/*.jar /output/artifacts"
 build:
 	@set -euo pipefail
 	cd ../../..


@@ -168,6 +168,7 @@ vinyldns {
     secret = ${?CRYPTO_SECRET}
   }
+  data-stores = ["mysql"]
   mysql {
     settings {
       # JDBC Settings, these are all values in scalikejdbc-config, not our own
@@ -186,6 +187,28 @@ vinyldns {
       password = ""
       password = ${?JDBC_PASSWORD}
     }
+    # TODO: Remove the need for these useless configuration blocks
+    repositories {
+      zone {
+      }
+      batch-change {
+      }
+      user {
+      }
+      record-set {
+      }
+      zone-change {
+      }
+      record-change {
+      }
+      group {
+      }
+      group-change {
+      }
+      membership {
+      }
+    }
   }
   backends = []


@@ -35,6 +35,12 @@ all: build run
 all: build run
+artifact:
+	@set -euo pipefail
+	cd ../../..
+	docker build $(BUILD_ARGS) --target base-build --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t "vinyldns:portal-artifact" -f "$(ROOT_DIR)/Dockerfile" .
+	docker run -it --rm -v "$$(pwd)/:/output" vinyldns:portal-artifact /bin/bash -c "cp /build/artifacts/*.zip /output/artifacts/"
 build:
 	@set -euo pipefail
 	cd ../../..

build/sbt.sh (new file, +7 lines)

@@ -0,0 +1,7 @@
+#!/usr/bin/env bash
+set -euo pipefail
+DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
+cd "$DIR/../test/api/integration"
+make build DOCKER_PARAMS="--build-arg SKIP_API_BUILD=true" && make run-local WITH_ARGS="sbt" DOCKER_PARAMS="-e RUN_SERVICES=none"


@@ -228,5 +228,7 @@ object Boot extends App {
       case Left(startupFailure) =>
         logger.error(s"VINYLDNS SERVER UNABLE TO START $startupFailure")
         startupFailure.printStackTrace()
+        // It doesn't do us much good to keep the application running if it failed to start.
+        sys.exit(1)
     }
   }
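The hunk above adds a fail-fast exit so the process terminates when startup fails instead of lingering in a half-started state. The pattern, sketched generically in Python rather than the Scala source:

```python
import sys


def boot(start):
    # Hypothetical startup wrapper mirroring Boot's Either handling: on
    # failure, log the error and return a non-zero exit code rather than
    # leaving a half-started server process running.
    try:
        start()
        return 0
    except RuntimeError as failure:
        print(f"VINYLDNS SERVER UNABLE TO START {failure}", file=sys.stderr)
        return 1


if __name__ == "__main__":
    sys.exit(boot(lambda: None))
```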


@@ -17,8 +17,8 @@
 package vinyldns.api.backend.dns
 import java.net.InetAddress
 import cats.syntax.either._
+import org.apache.commons.codec.binary.Hex
 import org.joda.time.DateTime
 import org.xbill.DNS
 import scodec.bits.ByteVector
@@ -302,7 +302,7 @@ trait DnsConversions {
   def fromSSHFPRecord(r: DNS.SSHFPRecord, zoneName: DNS.Name, zoneId: String): RecordSet =
     fromDnsRecord(r, zoneName, zoneId) { data =>
-      List(SSHFPData(data.getAlgorithm, data.getDigestType, new String(data.getFingerPrint)))
+      List(SSHFPData(data.getAlgorithm, data.getDigestType, Hex.encodeHexString(data.getFingerPrint).toUpperCase))
     }
   def fromTXTRecord(r: DNS.TXTRecord, zoneName: DNS.Name, zoneId: String): RecordSet =
@@ -390,7 +390,7 @@ trait DnsConversions {
       )
     case SSHFPData(algorithm, typ, fingerprint) =>
-      new DNS.SSHFPRecord(recordName, DNS.DClass.IN, ttl, algorithm, typ, fingerprint.getBytes)
+      new DNS.SSHFPRecord(recordName, DNS.DClass.IN, ttl, algorithm, typ, Hex.decodeHex(fingerprint.toCharArray()))
     case SPFData(text) =>
       new DNS.SPFRecord(recordName, DNS.DClass.IN, ttl, text)
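This fix stores SSHFP fingerprints as uppercase hex text instead of interpreting the raw digest bytes as a string. The round-trip can be sketched as follows (a Python stand-in for the commons-codec `Hex` calls; the key blob is made up for illustration):

```python
import binascii
import hashlib


def fingerprint_to_hex(raw: bytes) -> str:
    # Analogue of Hex.encodeHexString(data.getFingerPrint).toUpperCase:
    # render the raw fingerprint digest as an uppercase hex string.
    return binascii.hexlify(raw).decode("ascii").upper()


def fingerprint_to_bytes(hex_str: str) -> bytes:
    # Analogue of Hex.decodeHex(fingerprint.toCharArray()): recover the
    # raw digest bytes for the wire-format SSHFP record.
    return binascii.unhexlify(hex_str)


digest = hashlib.sha1(b"hypothetical ssh public key blob").digest()
assert fingerprint_to_bytes(fingerprint_to_hex(digest)) == digest
assert len(fingerprint_to_hex(digest)) == 40  # a SHA-1 digest is 20 bytes
```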


@@ -102,8 +102,8 @@ class TestData:
     "records": [
         {
             "algorithm": 1,
-            "type": 2,
-            "fingerprint": "fp"
+            "type": 1,
+            "fingerprint": "123456789ABCDEF67890123456789ABCDEF67890"
         }
     ]
 }


@@ -309,7 +309,7 @@ class CommandHandlerSpec
   // verify our interactions
   verify(mq, atLeastOnce()).receive(count)
-  verify(mockRecordChangeProcessor)
+  verify(mockRecordChangeProcessor, atLeastOnce())
     .apply(any[DnsBackend], mockito.Matchers.eq(pendingCreateAAAA))
   verify(mq).remove(cmd)
 }


@@ -172,7 +172,7 @@ class DnsConversionsSpec
     RecordSetStatus.Active,
     DateTime.now,
     None,
-    List(SSHFPData(1, 2, "fingerprint"))
+    List(SSHFPData(2, 1, "123456789ABCDEF67890123456789ABCDEF67890"))
   )
 private val testTXT = RecordSet(
   testZone.id,


@@ -73,7 +73,7 @@ class ZoneConnectionValidatorSpec
     List(new Regex("some.test.ns.")),
     10000
   ) {
-    override val opTimeout: FiniteDuration = 10.milliseconds
+    override val opTimeout: FiniteDuration = 60.seconds
     override def loadDns(zone: Zone): IO[ZoneView] = testLoadDns(zone)
     override def isValidBackendId(backendId: Option[String]): Either[Throwable, Unit] =
       Right(())


@@ -10,6 +10,7 @@ release.version
 private
 .bloop
 .metals
+run.sh
 public/js/*
 !public/js/custom.js


@@ -4,7 +4,7 @@ Supplies a UI for and offers authentication into VinylDNS.
 # Running Unit Tests
-First, startup sbt: `sbt`.
+First, startup sbt: `build/sbt.sh`.
 Next, you can run all tests by simply running `test`, or you can run an individual test by running `test-only *MySpec`


@@ -14,9 +14,19 @@ module.exports = function(config) {
     // list of files / patterns to load in the browser
     files: [
-      '*.js',
+      'js/jquery.min.js',
+      'js/bootstrap.min.js',
+      'js/angular.min.js',
+      'js/moment.min.js',
+      'js/ui.js',
+      'test_frameworks/*.js',
+      'js/vinyldns.js',
+      'lib/services/**/*.spec.js',
+      'lib/controllers/**/*.spec.js',
+      'lib/directives/**/*.spec.js',
+      'lib/*.js',
       //fixtures
-      {pattern: 'mocks/*.json', watched: true, served: true, included: false}
+      {pattern: 'mocks/*.json', watched: true, served: true, included: false},
     ],
     // list of files / patterns to exclude
@@ -32,7 +42,7 @@ module.exports = function(config) {
     plugins: [
       'karma-jasmine',
       'karma-chrome-launcher',
-      'karma-mocha-reporter'
+      'karma-mocha-reporter',
     ],
     // reporter types:
@@ -59,12 +69,12 @@ module.exports = function(config) {
     customLaunchers: {
       ChromeHeadlessNoSandbox: {
         base: 'ChromeHeadless',
-        flags: ['--no-sandbox']
-      }
+        flags: ['--no-sandbox'],
+      },
     },
     // Continuous Integration mode
     // if true, it capture browsers, run tests and exit
-    singleRun: true
+    singleRun: true,
   });
 };


@@ -101,16 +101,16 @@ describe('Service: recordsService', function () {
   "name": 'recordName',
   "type": 'SSHFP',
   "ttl": '300',
-  "sshfpItems": [{algorithm: '1', type: '1', fingerprint: 'foo'},
-    {algorithm: '2', type: '1', fingerprint: 'bar'}]
+  "sshfpItems": [{algorithm: '1', type: '1', fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: '2', type: '1', fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'}]
 };
 expectedRecord = {
   "id": 'recordId',
   "name": 'recordName',
   "type": 'SSHFP',
   "ttl": 300,
-  "records": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}]
+  "records": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'}]
 };
 var actualRecord = this.recordsService.toVinylRecord(sentRecord);
@@ -123,8 +123,8 @@
   "name": 'recordName',
   "type": 'SSHFP',
   "ttl": 300,
-  "records": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}]
+  "records": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}]
 };
 displayRecord = {
@@ -133,8 +133,8 @@
   "type": 'SSHFP',
   "ttl": 300,
   "records": undefined,
-  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}],
+  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}],
   "onlyFour": true,
   "isDotted": false,
   "canBeEdited": true
@@ -150,8 +150,8 @@ describe('Service: recordsService', function () {
   "name": 'recordName.with.dot',
   "type": 'SSHFP',
   "ttl": 300,
-  "records": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}]
+  "records": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}]
 };
 displayRecord = {
@@ -160,8 +160,8 @@
   "type": 'SSHFP',
   "ttl": 300,
   "records": undefined,
-  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}],
+  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}],
   "onlyFour": true,
   "isDotted": true,
   "canBeEdited": true
@@ -177,8 +177,8 @@ describe('Service: recordsService', function () {
   "name": 'apex.with.dot',
   "type": 'SSHFP',
   "ttl": 300,
-  "records": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}]
+  "records": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}]
 };
 displayRecord = {
@@ -187,8 +187,8 @@
   "type": 'SSHFP',
   "ttl": 300,
   "records": undefined,
-  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}],
+  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}],
   "onlyFour": true,
   "isDotted": false,
   "canBeEdited": true
@@ -229,8 +229,8 @@ describe('Service: recordsService', function () {
   "name": 'apex.with.dot',
   "type": 'SSHFP',
   "ttl": 300,
-  "records": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}]
+  "records": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}]
 };
 displayRecord = {
@@ -239,8 +239,8 @@
   "type": 'SSHFP',
   "ttl": 300,
   "records": undefined,
-  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: 'foo'},
-    {algorithm: 2, type: 1, fingerprint: 'bar'}],
+  "sshfpItems": [{algorithm: 1, type: 1, fingerprint: '123456789ABCDEF67890123456789ABCDEF67890'},
+    {algorithm: 2, type: 1, fingerprint: 'F23456789ABCDEF67890123456789ABCDEF67890'}],
   "onlyFour": true,
   "isDotted": false,
   "canBeEdited": true


@@ -1,39 +0,0 @@
-#!/bin/bash
-function check_for() {
-  which $1 >/dev/null 2>&1
-  EXIT_CODE=$?
-  if [ ${EXIT_CODE} != 0 ]
-  then
-    echo "$1 is not installed"
-    exit ${EXIT_CODE}
-  fi
-}
-check_for python
-check_for npm
-# if the program exits before this has been captured then there must have been an error
-EXIT_CODE=1
-cd $(dirname $0)
-# javascript code generate
-bower install
-grunt default
-TEST_SUITES=('sbt clean coverage test'
-  'grunt unit'
-)
-for TEST in "${TEST_SUITES[@]}"
-do
-  echo "##### Running test: [$TEST]"
-  $TEST
-  EXIT_CODE=$?
-  echo "##### Test [$TEST] ended with status [$EXIT_CODE]"
-  if [ ${EXIT_CODE} != 0 ]
-  then
-    exit ${EXIT_CODE}
-  fi
-done


@@ -573,17 +573,6 @@ class VinylDNSSpec extends Specification with Mockito with TestApplicationData w
     }
   }
-  ".oidcCallback" should {
-    "redirect to set session view" in new WithApplication(app) {
-      val response = vinyldnsPortal
-        .oidcCallback("id")
-        .apply(FakeRequest("GET", "id?query=q"))
-      status(response) mustEqual 200
-      contentAsString(response) must contain("/public/lib/oidc-finish.js")
-    }
-  }
   ".newGroup" should {
     tag("slow")
     "return the group description on create - status ok (200)" in new WithApplication(app) {


@@ -22,10 +22,11 @@ From a shell in the `quickstart/` directory, simply run:
 ```shell script
 ./quickstart-vinyldns.sh
 ```
 The `quickstart-vinyldns.sh` script takes a number of optional arguments:
 | Flag | Description |
-|:---|:---|
+|:---|:-----------------------------------------------------------------------------|
 | `-a, --api-only` | do not start up the VinylDNS Portal" |
 | `-b, --build` | force a rebuild of the Docker images with the local code" |
 | `-c, --clean` | stops all VinylDNS containers" |


@@ -1,12 +1,15 @@
 version: "3.5"
 services:
+  # LDAP container hosting example users
   ldap:
     container_name: "vinyldns-ldap"
     image: vinyldns/build:openldap
     ports:
       - "19004:19004"
+  # Integration image hosting r53, sns, sqs, bind, and mysql
   integration:
     container_name: "vinyldns-api-integration"
     hostname: &integration_hostname "vinyldns-integration"
@@ -25,6 +28,7 @@ services:
       - "19001-19003:19001-19003/tcp"
       - "19001:19001/udp"
+  # The VinylDNS API
   api:
     container_name: "vinyldns-api"
     image: "vinyldns/api:${VINYLDNS_IMAGE_VERSION}"
@@ -35,7 +39,7 @@ services:
       VINYLDNS_VERSION: "${VINYLDNS_IMAGE_VERSION}"
       DOCKER_FILE_PATH: "../build/docker/api"
     volumes:
-      - ../build/docker/api/application.conf:/opt/vinyldns/conf/vinyldns.conf
+      - ../build/docker/api/application.conf:/opt/vinyldns/conf/application.conf
     env_file:
       .env
     ports:
@@ -43,6 +47,7 @@ services:
     depends_on:
       - integration
+  # The VinylDNS portal
   portal:
     container_name: "vinyldns-portal"
     image: "vinyldns/portal:${VINYLDNS_IMAGE_VERSION}"
@@ -62,6 +67,7 @@ services:
       - api
       - ldap
+# Custom network so that we don't interfere with the host system
 networks:
   default:
     name: "vinyldns_net"
View File
@@ -1,16 +1,16 @@
 # Build VinylDNS API if the JAR doesn't already exist
 FROM vinyldns/build:base-build as base-build
 ARG DOCKERFILE_PATH="test/api/functional"
-COPY "${DOCKERFILE_PATH}/vinyldns.*" /opt/vinyldns/
+COPY "${DOCKERFILE_PATH}/application.conf" /opt/vinyldns/conf/
 COPY . /build/
 WORKDIR /build

-## Run the build if we don't already have a vinyldns.jar
-RUN if [ -f assembly/vinyldns.jar ]; then cp assembly/vinyldns.jar /opt/vinyldns; fi && \
-    if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \
+## Run the build if we don't already have a vinyldns-api.jar
+RUN if [ -f artifacts/vinyldns-api.jar ]; then cp artifacts/vinyldns-api.jar /opt/vinyldns; fi && \
+    if [ ! -f /opt/vinyldns/vinyldns-api.jar ]; then \
         env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \
         sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \
-        && cp modules/api/target/scala-2.12/vinyldns.jar /opt/vinyldns/; \
+        && cp artifacts/vinyldns-api.jar /opt/vinyldns/; \
     fi

 # Build the testing image, copying data from `vinyldns-api`
View File
@@ -2,7 +2,6 @@ SHELL=bash
 IMAGE_NAME=vinyldns-api-test
 ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
 RELATIVE_ROOT_DIR:=$(shell realpath --relative-to=../../.. $(ROOT_DIR))
-VINYLDNS_JAR_PATH?=modules/api/target/scala-2.12/vinyldns.jar

 # Check that the required version of make is being used
 REQ_MAKE_VER:=3.82
@@ -28,7 +27,7 @@ endif
 .ONESHELL:

-.PHONY: all build run run-local
+.PHONY: all build run run-local run-deps-bg clean-containers

 all: build run

@@ -42,7 +41,19 @@ run:
 	USE_TTY="" && test -t 1 && USE_TTY="-t"
 	docker run -i $${USE_TTY} --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp $(IMAGE_NAME) $(ARG_SEPARATOR) $(WITH_ARGS)

+# Runs the dependencies for the functional test in the background
+# This is useful when running the tests on your host machine against the API in a container
+run-deps-bg:
+	@set -euo pipefail
+	docker stop $(IMAGE_NAME) &> /dev/null || true
+	USE_TTY="" && test -t 1 && USE_TTY="-t"
+	docker run -d $${USE_TTY} --name $(IMAGE_NAME) --rm $(DOCKER_PARAMS) --entrypoint "/initialize.sh" -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp $(IMAGE_NAME) all tail-logs
+
 run-local:
 	@set -euo pipefail
 	USE_TTY="" && test -t 1 && USE_TTY="-t"
 	docker run -i $${USE_TTY} --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp -v "$(ROOT_DIR)/../../../modules/api/src/test/functional:/functional_test" $(IMAGE_NAME) -- $(WITH_ARGS)
+
+clean-containers:
+	@set -euo pipefail
+	"$(ROOT_DIR)/../../../utils/clean-vinyldns-containers.sh"
View File
@@ -3,16 +3,17 @@ ARG VINYLDNS_BASE_VERSION=latest

 # Build VinylDNS API if the JAR doesn't already exist
 FROM vinyldns/build:base-build as base-build
 ARG DOCKERFILE_PATH="test/api/integration"
-COPY "${DOCKERFILE_PATH}/vinyldns.*" /opt/vinyldns/
+COPY "${DOCKERFILE_PATH}/application.conf" /opt/vinyldns/conf/
 COPY . /build/
 WORKDIR /build

-## Run the build if we don't already have a vinyldns.jar
-RUN if [ -f assembly/vinyldns.jar ]; then cp assembly/vinyldns.jar /opt/vinyldns; fi && \
-    if [ ! -f /opt/vinyldns/vinyldns.jar ]; then \
+## Run the build if we don't already have a vinyldns-api.jar
+ARG SKIP_API_BUILD="false"
+RUN if [ -f artifacts/vinyldns-api.jar ]; then cp artifacts/vinyldns-api.jar /opt/vinyldns; fi && \
+    if [ ! -f /opt/vinyldns/vinyldns-api.jar ] && [ "$SKIP_API_BUILD" == "false" ]; then \
         env SBT_OPTS="-XX:+UseConcMarkSweepGC -Xmx4G -Xms1G" \
         sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=fase ";project api;coverageOff;assembly" \
-        && cp assembly/vinyldns.jar /opt/vinyldns/; \
+        && cp artifacts/vinyldns-api.jar /opt/vinyldns/; \
     fi

 # Build the testing image, copying data from `base-build`
View File
@@ -27,14 +27,14 @@ endif
 .ONESHELL:

-.PHONY: all build run run-local
+.PHONY: all build run run-local run-bg stop-bg clean-containers

 all: build run

 build:
 	@set -euo pipefail
 	cd ../../..
-	docker build -t $(IMAGE_NAME) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" .
+	docker build -t $(IMAGE_NAME) $(DOCKER_PARAMS) --build-arg DOCKERFILE_PATH="$(RELATIVE_ROOT_DIR)" -f "$(ROOT_DIR)/Dockerfile" .

 run:
 	@set -euo pipefail
@@ -54,4 +54,8 @@ stop-bg:
 run-local:
 	@set -euo pipefail
 	USE_TTY="" && test -t 1 && USE_TTY="-t"
-	docker run -i $${USE_TTY} --rm $(DOCKER_PARAMS) -p 9000:9000 -p 19001-19003:19001-19003 -p 19001:19001/udp -v "$(ROOT_DIR)/../../..:/build" $(IMAGE_NAME) -- $(WITH_ARGS)
+	docker run -i $${USE_TTY} --rm $(DOCKER_PARAMS) -v "$(ROOT_DIR)/../../..:/build" $(IMAGE_NAME) -- $(WITH_ARGS)
+
+clean-containers:
+	@set -euo pipefail
+	"$(ROOT_DIR)/../../../utils/clean-vinyldns-containers.sh"
View File
@@ -18,16 +18,13 @@ ifeq ($(EXTRACT_ARGS),true)
   # use the rest as arguments for "run"
   WITH_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
 endif

-ifneq ($(WITH_ARGS),)
-	ARG_SEPARATOR=--
-endif

 %:
 	@:

 .ONESHELL:

-.PHONY: all build run run-local
+.PHONY: all build run run-local clean-containers

 all: build run

@@ -39,9 +36,13 @@ build:
 run:
 	@set -euo pipefail
 	USE_TTY="" && test -t 1 && USE_TTY="-t"
-	docker run -i $${USE_TTY} --rm $(IMAGE_NAME) -- $(WITH_ARGS)
+	docker run -i $${USE_TTY} --rm $(IMAGE_NAME) $(WITH_ARGS)

 run-local:
 	@set -euo pipefail
 	USE_TTY="" && test -t 1 && USE_TTY="-t"
-	docker run -i $${USE_TTY} --rm -v "$$(pwd)/../../../modules/portal:/functional_test" $(IMAGE_NAME) $(ARG_SEPARATOR) $(WITH_ARGS)
+	docker run -i $${USE_TTY} --rm -v "$$(pwd)/../../../modules/portal:/functional_test" -v "$$(pwd)/run.sh:/functional_test/run.sh" $(IMAGE_NAME) $(WITH_ARGS)
+
+clean-containers:
+	@set -euo pipefail
+	"$(ROOT_DIR)/../../../utils/clean-vinyldns-containers.sh"
View File
@@ -3,11 +3,13 @@
 set -eo pipefail

 ROOT_DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
 cd "${ROOT_DIR}"

 if [ "$1" == "--interactive" ]; then
     shift
     bash
 else
-    grunt unit "$@"
+    # Attempt to just run grunt - this should work most of the time
+    # We may need to update dependencies if our local functional tests dependencies
+    # differ from those of the 'base-test-portal' docker image
+    grunt unit "$@" || { echo "Attempting to recover.." && npm install -f --no-audit --no-fund && grunt unit "$@"; }
 fi
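The retry-after-recovery idiom introduced in this run script can be sketched generically; the function names below are illustrative, not part of the repository:

```shell
#!/usr/bin/env bash
# Retry-after-recovery: run a command; if it fails, run a recovery
# step and try exactly once more before giving up.
attempts=0
flaky() {                 # hypothetical command that fails on its first call
  attempts=$((attempts + 1))
  [ "$attempts" -gt 1 ]
}
flaky || { echo "Attempting to recover.."; flaky; }
echo "succeeded after ${attempts} attempt(s)"
```

Because `||` only fires on a non-zero exit status, the recovery branch (here a plain `echo`; in the real script, `npm install`) runs only when the first invocation fails.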
View File
@@ -9,7 +9,7 @@ RUN mkdir -p /vinyldns/python && \
     protoc --proto_path=/vinyldns --python_out=/vinyldns/python /vinyldns/VinylDNSProto.proto

-FROM python:3.7-alpine
+FROM vinyldns/build:base-build
 ARG DOCKERFILE_PATH
 WORKDIR /app
 RUN pip install mysql-connector-python==8.0.27
View File
@@ -1,8 +1,9 @@
 #!/usr/bin/env python
-import mysql.connector
-import VinylDNSProto_pb2
-import sys
 import os
+import sys
+
+import VinylDNSProto_pb2
+import mysql.connector

 # arguments
 if len(sys.argv) != 3:
View File
@@ -1,32 +1,26 @@
 #!/usr/bin/env bash

 usage () {
-    echo -e "Description: Updates a user in VinylDNS to a support user, or removes the user as a support user.\n"
     echo -e "Usage: update-support-user.sh [OPTIONS] <username> <enableSupport>\n"
+    echo -e "Description: Updates a user in VinylDNS to a support user, or removes the user as a support user.\n"
     echo -e "Required Parameters:"
     echo -e "username\tThe VinylDNS user for which to change the support flag"
     echo -e "enableSupport\t'true' to set the user as a support user; 'false' to remove support privileges\n"
     echo -e "OPTIONS:"
-    echo -e "Must define as an environment variables the following (or pass them in on the command line)\n"
-    echo -e "DB_USER (user name for accessing the VinylDNS database)"
-    echo -e "DB_PASS (user password for accessing the VinylDNS database)"
-    echo -e "DB_HOST (host name for the mysql server of the VinylDNS database)"
-    echo -e "DB_NAME (name of the VinylDNS database, defaults to vinyldns)"
-    echo -e "DB_PORT (port of the VinylDNS database, defaults to 19002)\n"
-    echo -e " -u|--user \tDatabase user name for accessing the VinylDNS database"
-    echo -e " -p|--password\tDatabase user password for accessing the VinylDNS database"
-    echo -e " -h|--host\tDatabase host name for the mysql server"
-    echo -e " -n|--name\tName of the VinylDNS database, defaults to vinyldns"
-    echo -e " -c|--port\tPort of the VinylDNS database, defaults to 19002"
+    echo -e " -u|--user \tDatabase user name for accessing the VinylDNS database (DB_USER - default=root)"
+    echo -e " -p|--password\tDatabase user password for accessing the VinylDNS database (DB_PASS - default=pass)"
+    echo -e " -h|--host\tDatabase host name for the mysql server (DB_HOST - default=vinyldns-integration)"
+    echo -e " -n|--name\tName of the VinylDNS database, (DB_NAME - default=vinyldns)"
+    echo -e " -c|--port\tPort of the VinylDNS database, (DB_PORT - default=19002)"
 }

 DIR=$( cd "$(dirname "$0")" || exit ; pwd -P )
 VINYL_ROOT=$DIR/..
 WORK_DIR=${VINYL_ROOT}/docker

-DB_USER=$DB_USER
-DB_PASS=$DB_PASS
-DB_HOST=$DB_HOST
+DB_USER=${DB_USER:-root}
+DB_PASS=${DB_PASS:-pass}
+DB_HOST=${DB_HOST:-vinyldns-integration}
 DB_NAME=${DB_NAME:-vinyldns}
 DB_PORT=${DB_PORT:-19002}

@@ -46,50 +40,36 @@ VINYL_USER="$1"
 MAKE_SUPPORT="$2"

 ERROR=
-if [[ -z "$DB_USER" ]]
-then
-    echo "No DB_USER environment variable found"
+if [ -z "$DB_USER" ]; then
     ERROR="1"
 fi

-if [[ -z "$DB_PASS" ]]
-then
-    echo "No DB_PASS environment variable found"
+if [ -z "$DB_PASS" ]; then
     ERROR="1"
 fi

-if [[ -z "$DB_HOST" ]]
-then
-    echo "No DB_HOST environment variable found"
+if [ -z "$DB_HOST" ]; then
     ERROR="1"
 fi

-if [[ -z "$DB_NAME" ]]
-then
-    echo "No DB_NAME environment variable found"
+if [ -z "$DB_NAME" ]; then
     ERROR="1"
 fi

-if [[ -z "$VINYL_USER" ]]
-then
-    echo "Parameter 'username' not specified"
+if [ -z "$VINYL_USER" ]; then
     ERROR="1"
 fi

-if [[ -z "$MAKE_SUPPORT" ]]
-then
-    echo "Parameter 'enableSupport' not specified"
+if [ -z "$MAKE_SUPPORT" ]; then
     ERROR="1"
 fi

-if [[ -n "$ERROR" ]]
-then
+if [ -n "$ERROR" ]; then
     usage
     exit 1
 fi

-# Copy the proto definition to the Docker context and build
+# Build and run the Docker container
 cd admin
 make build
 make run DOCKER_PARAMS="-e \"DB_USER=$DB_USER\" -e \"DB_PASS=$DB_PASS\" -e \"DB_HOST=$DB_HOST\" -e \"DB_NAME=$DB_NAME\" -e \"DB_PORT=$DB_PORT\"" WITH_ARGS="\"$VINYL_USER\" \"$MAKE_SUPPORT\""
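The new defaults in this script rely on shell parameter expansion; a minimal sketch of the `${VAR:-default}` pattern (variable values here are illustrative):

```shell
#!/usr/bin/env bash
# ${VAR:-default} expands to "default" only when VAR is unset or empty;
# otherwise the existing value of VAR is used.
unset DB_USER
DB_PORT=19002

DB_USER=${DB_USER:-root}   # DB_USER is unset, so the default "root" applies
DB_PORT=${DB_PORT:-3306}   # DB_PORT is already set, so 19002 is kept

echo "user=${DB_USER} port=${DB_PORT}"
```

Running this prints `user=root port=19002`, which is why the rewritten script no longer needs the per-variable "No DB_* environment variable found" checks: every variable is guaranteed a value.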