Mirror of https://github.com/VinylDNS/vinyldns, synced 2025-08-22 10:10:12 +00:00

Commit 9a6da3d5b4: Merge branch 'master' into aravindhr/create-transaction

@@ -2,8 +2,9 @@ coverage:
   status:
     project:
       default:
-        threshold: 0.1
+        threshold: 1
+        informational: true
     patch:
       default:
-        # minimum coverage at 0, should never fail, but will still report
         target: 0
+        informational: true

.dockerignore (new file, 17 lines)
@@ -0,0 +1,17 @@
**/.venv*
**/.virtualenv
**/target
**/docs
**/out
**/.log
**/.idea/
**/.bsp
**/*cache*
**/.git
**/Dockerfile
**/*.dockerignore
**/.github
**/_template
img/
**/.env
modules/portal/node_modules/

.github/ISSUE_TEMPLATE/bug_report.md (vendored, 30 changed lines)
@@ -1,28 +1,16 @@
 ---
-name: Bug report
+name: Bug Report
 about: Create a report to help us improve
+title: ''
+labels: status/needs-label
+assignees: ''
+
 ---
 
 **Describe the bug**
-A clear and concise description of what the bug is.
-
-**VinylDNS Version**
-
-**To Reproduce**
-Steps to reproduce the behavior:
-1. Go to '...'
-2. Click on '....'
-3. Scroll down to '....'
-4. See error
-
-**Expected behavior**
-A clear and concise description of what you expected to happen.
-
-**Screenshots**
-If applicable, add screenshots to help explain your problem.
-
-**Stack trace or error log output**
-
-**Additional context**
-Add any other context about the problem here.
+Please provide as much detail as you can. Here are some important details:
+
+1. A description of the bug (expected behavior vs actual behavior)
+2. The VinylDNS version which contains the bug
+3. Any steps to reproduce (if we can't reproduce it, we can't fix it!)
+4. Any other helpful information (stack trace, log messages, screenshots, etc)

.github/ISSUE_TEMPLATE/feature_request.md (vendored, 15 changed lines)
@@ -1,17 +1,10 @@
 ---
 name: Feature request
 about: Suggest an idea for this project
+title: ''
+labels: status/needs-label
+assignees: ''
+
 ---
 
-**Is your feature request related to a problem? Please describe.**
-A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
-
-**Describe the solution you'd like**
-A clear and concise description of what you want to happen.
-
-**Describe alternatives you've considered**
-A clear and concise description of any alternative solutions or features you've considered.
-
-**Additional context**
-Add any other context or screenshots about the feature request here.
+**Describe what you'd like to see added or improved in VinylDNS**

.github/ISSUE_TEMPLATE/maintenance-request.md (vendored, deleted, 11 lines)
@@ -1,11 +0,0 @@
----
-name: Maintenance request
-about: Suggest an upgrade, refactoring, code move, new library
-
----
-
-**Motivation**
-What is the reason to perform the maintenance. What benefits will come about
-
-**Scope of change**
-What part(s) of the system are likely to change. For example, REST endpoints, repositories, core, functional tests, etc.

.github/workflows/ci.yml (vendored, deleted, 143 lines)
@@ -1,143 +0,0 @@
# Much copied from sbt-github-actions, modified to support running e2e tests
name: Continuous Integration

on:
  pull_request:
    branches: ['*']
  push:
    branches: ['master']

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  build:
    name: Build and Test
    if: "!contains(github.event.head_commit.message, 'ci skip')"
    strategy:
      matrix:
        os: [ubuntu-latest]
        scala: [2.12.10]
        java: [adopt@1.11]
    runs-on: ${{ matrix.os }}
    steps:
      - name: Checkout current branch (full)
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Setup Java and Scala
        uses: olafurpg/setup-scala@v10
        env:
          ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'
        with:
          java-version: ${{ matrix.java }}

      - name: Cache ivy2
        uses: actions/cache@v1
        with:
          path: ~/.ivy2/cache
          key: ${{ runner.os }}-sbt-ivy-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - name: Cache coursier (generic)
        uses: actions/cache@v1
        with:
          path: ~/.coursier/cache/v1
          key: ${{ runner.os }}-generic-sbt-coursier-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - name: Cache coursier (linux)
        if: contains(runner.os, 'linux')
        uses: actions/cache@v1
        with:
          path: ~/.cache/coursier/v1
          key: ${{ runner.os }}-sbt-coursier-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - name: Cache coursier (macOS)
        if: contains(runner.os, 'macos')
        uses: actions/cache@v1
        with:
          path: ~/Library/Caches/Coursier/v1
          key: ${{ runner.os }}-sbt-coursier-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - name: Cache coursier (windows)
        if: contains(runner.os, 'windows')
        uses: actions/cache@v1
        with:
          path: ~/AppData/Local/Coursier/Cache/v1
          key: ${{ runner.os }}-sbt-coursier-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - name: Cache sbt
        uses: actions/cache@v1
        with:
          path: ~/.sbt
          key: ${{ runner.os }}-sbt-cache-${{ hashFiles('**/*.sbt') }}-${{ hashFiles('project/build.properties') }}

      - run: sbt ++${{ matrix.scala }} validate verify

      - name: Codecov
        uses: codecov/codecov-action@v1
        with:
          fail_ci_if_error: true # optional (default = false)

  func:
    name: Func Test
    if: "!contains(github.event.head_commit.message, 'ci skip')"
    strategy:
      matrix:
        os: [ubuntu-latest]
        scala: [2.12.10]
        java: [adopt@1.11]
    runs-on: ${{ matrix.os }}
    steps:
      [the same Checkout, Setup Java and Scala, and Cache ivy2/coursier/sbt steps as the build job above]
      - name: Func tests
        run: ./bin/func-test-portal.sh && ./bin/func-test-api-travis.sh

.github/workflows/clean.yml (vendored, deleted, 55 lines)
@@ -1,55 +0,0 @@
# This file was automatically generated by sbt-github-actions using the
# githubWorkflowGenerate task. Kept it here

name: Clean

on: push

jobs:
  delete-artifacts:
    name: Delete Artifacts
    runs-on: ubuntu-latest
    env:
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Delete artifacts
        run: |
          # Customize those three lines with your repository and credentials:
          REPO=${GITHUB_API_URL}/repos/${{ github.repository }}

          # A shortcut to call GitHub API.
          ghapi() { curl --silent --location --user _:$GITHUB_TOKEN "$@"; }

          # A temporary file which receives HTTP response headers.
          TMPFILE=/tmp/tmp.$$

          # An associative array, key: artifact name, value: number of artifacts of that name.
          declare -A ARTCOUNT

          # Process all artifacts on this repository, loop on returned "pages".
          URL=$REPO/actions/artifacts
          while [[ -n "$URL" ]]; do

            # Get current page, get response headers in a temporary file.
            JSON=$(ghapi --dump-header $TMPFILE "$URL")

            # Get URL of next page. Will be empty if we are at the last page.
            URL=$(grep '^Link:' "$TMPFILE" | tr ',' '\n' | grep 'rel="next"' | head -1 | sed -e 's/.*<//' -e 's/>.*//')
            rm -f $TMPFILE

            # Number of artifacts on this page:
            COUNT=$(( $(jq <<<$JSON -r '.artifacts | length') ))

            # Loop on all artifacts on this page.
            for ((i=0; $i < $COUNT; i++)); do

              # Get name of artifact and count instances of this name.
              name=$(jq <<<$JSON -r ".artifacts[$i].name?")
              ARTCOUNT[$name]=$(( $(( ${ARTCOUNT[$name]} )) + 1))

              id=$(jq <<<$JSON -r ".artifacts[$i].id?")
              size=$(( $(jq <<<$JSON -r ".artifacts[$i].size_in_bytes?") ))
              printf "Deleting '%s' #%d, %'d bytes\n" $name ${ARTCOUNT[$name]} $size
              ghapi -X DELETE $REPO/actions/artifacts/$id
            done
          done
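
The trickiest part of the deleted clean.yml script is pagination: the GitHub API points at the next page of artifacts through the HTTP `Link` response header, and the script extracts the `rel="next"` URL with a tr/grep/sed pipeline. Below is a small standalone sketch of that extraction; the header value is a hypothetical example, not captured API output.

```bash
#!/usr/bin/env bash
# Illustration of the Link-header parsing used by the deleted clean.yml script.
# The header value below is a made-up example of the format GitHub returns.
HEADER='Link: <https://api.github.com/repositories/1/actions/artifacts?page=2>; rel="next", <https://api.github.com/repositories/1/actions/artifacts?page=9>; rel="last"'

# Same pipeline as in the workflow: split entries on commas, keep the rel="next" entry,
# then strip everything outside the angle brackets so only the URL remains.
NEXT_URL=$(echo "$HEADER" | tr ',' '\n' | grep 'rel="next"' | head -1 | sed -e 's/.*<//' -e 's/>.*//')

echo "$NEXT_URL"   # prints https://api.github.com/repositories/1/actions/artifacts?page=2
```

When the last page is reached the header has no `rel="next"` entry, the variable comes back empty, and the `while [[ -n "$URL" ]]` loop in the script stops.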

.github/workflows/codecov_review.yml (vendored, deleted, 12 lines)
@@ -1,12 +0,0 @@
name: Codecov Review

on:
  workflow_dispatch:

jobs:
  review:
    runs-on: ubuntu-latest

    steps:
      - name: Output Environment
        run: env

.github/workflows/publish-site.yml (vendored, 80 changed lines)
@@ -1,85 +1,27 @@
-# Generates the microsite on push to master
-# Relies on the SBT_MICROSITES_PUBLISH_TOKEN secret to be setup
-# as a Github secret
 name: Microsite
+concurrency:
+  cancel-in-progress: true
+  group: "publish-site"
+
+defaults:
+  run:
+    shell: bash
 
 on:
-  push:
-    branches:
-      - master
+  workflow_dispatch:
+  branches: [ 'master', 'main' ]
 
 jobs:
   site:
     name: Publish Site
-    strategy:
-      matrix:
-        os: [ubuntu-latest]
-        scala: [2.12.10]
-        java: [adopt@1.11]
-    runs-on: ${{ matrix.os }}
+    runs-on: ubuntu-latest
     steps:
       - name: Checkout current branch (full)
         uses: actions/checkout@v2
         with:
           fetch-depth: 0
 
-      - name: Setup Java and Scala
-        uses: olafurpg/setup-scala@v10
-        env:
-          ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'
-        with:
-          java-version: ${{ matrix.java }}
-      [removed: the same Cache ivy2/coursier/sbt steps as in the deleted ci.yml above]
-      - name: Set up Ruby
-        uses: actions/setup-ruby@v1
-        with:
-          ruby-version: 2.6
-      - name: Install dependencies
-        run: >
-          sudo apt install libxslt-dev &&
-          gem install sass jekyll:4.0.0
-      - run: sbt ++${{ matrix.scala }} ";project docs; publishMicrosite";
+      - run: "build/publish_docs.sh"
         env:
           SBT_MICROSITES_PUBLISH_TOKEN: ${{ secrets.VINYLDNS_MICROSITE }}
           ACTIONS_ALLOW_UNSECURE_COMMANDS: 'true'

.github/workflows/release-vnext.yml (vendored, new executable file, 90 lines)
@@ -0,0 +1,90 @@
name: VinylDNS Release vNext
concurrency:
  cancel-in-progress: true
  group: "release-vnext"

defaults:
  run:
    shell: bash

on:
  push:
    branches: [ 'master','main' ]
  workflow_dispatch:

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:

  verify:
    name: Verify Release
    if: "!contains(github.event.head_commit.message, 'ci skip')"
    runs-on: ubuntu-latest

    steps:
      - name: Checkout current branch
        if: github.event_name != 'push' # We only need to verify if this is manually triggered
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Build and Test
        if: github.event_name != 'push' # We only need to verify if this is manually triggered
        run: cd build/ && ./assemble_api.sh && ./run_all_tests.sh

  docker-release-api:
    name: Release API vNext Image
    needs: [ verify ]
    runs-on: ubuntu-latest

    steps:
      - name: Checkout current branch (full)
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Import Content Trust Key
        run: docker trust key load <(echo "${SIGNING_KEY}") --name vinyldns_svc
        env:
          SIGNING_KEY: ${{ secrets.SIGNING_KEY }}
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}

      - name: Publish API Docker Image
        run: make -C build/docker/api publish-vnext
        env:
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}

  docker-release-portal:
    name: Release Portal vNext Image
    needs: [ verify ]
    runs-on: ubuntu-latest

    steps:
      [the same Checkout, Login to Docker Hub, and Import Content Trust Key steps as docker-release-api]
      - name: Publish Portal Docker Image
        run: make -C build/docker/portal publish-vnext
        env:
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}
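
Both docker-release jobs follow the same pattern: log in to Docker Hub, load the Docker Content Trust signing key from a repository secret, then hand the actual build and push off to a make target. A rough local equivalent is sketched below; the key file path and passphrase are placeholders, since the real values only exist as repository secrets.

```bash
# Sketch of what the vNext release jobs do, run by hand. Placeholder values only;
# in the workflow the key material and passphrase come from repository secrets.
export SIGNING_KEY="$(cat /path/to/signing-key.pem)"                   # hypothetical local key file
export DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE="example-passphrase" # placeholder

docker login                                                    # interactive here; the workflow uses docker/login-action
docker trust key load <(echo "${SIGNING_KEY}") --name vinyldns_svc

make -C build/docker/api publish-vnext      # API image
make -C build/docker/portal publish-vnext   # Portal image
```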

.github/workflows/release.yml (vendored, new file, 144 lines)
@@ -0,0 +1,144 @@
name: VinylDNS Official Release
concurrency:
  cancel-in-progress: true
  group: "release"

defaults:
  run:
    shell: bash

on:
  workflow_dispatch:
    inputs:
      verify-first:
        description: 'Verify First?'
        required: true
        default: 'true'
      create-gh-release:
        description: 'Create a GitHub Release?'
        required: true
        default: 'true'
      publish-images:
        description: 'Publish Docker Images?'
        required: true
        default: 'true'

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  verify:
    name: Verify Release
    runs-on: ubuntu-latest

    steps:
      - name: Checkout current branch
        if: github.event.inputs.verify-first == 'true'
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Run Tests
        id: verify
        if: github.event.inputs.verify-first == 'true'
        run: cd build/ && ./assemble_api.sh && ./run_all_tests.sh

  create-gh-release:
    name: Create GitHub Release
    needs: verify
    runs-on: ubuntu-latest
    if: github.event.inputs.create-gh-release == 'true'
    permissions:
      contents: write

    steps:
      - name: Checkout current branch
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Build Artifacts
        id: build
        run: cd build/ && ./assemble_api.sh && ./assemble_portal.sh

      - name: Get Version
        id: get-version
        run: echo "::set-output name=vinyldns_version::$(awk -F'"' '{print $2}' ./version.sbt)"

      - name: Create GitHub Release
        id: create_release
        uses: softprops/action-gh-release@1e07f4398721186383de40550babbdf2b84acfc5 # v0.1.14
        with:
          tag_name: v${{ steps.get-version.outputs.vinyldns_version }}
          generate_release_notes: true
          files: artifacts/*

  docker-release-api:
    name: Release API Docker Image
    needs: [ verify, create-gh-release ]
    runs-on: ubuntu-latest
    if: github.event.inputs.publish-images == 'true'

    steps:
      - name: Get Version
        id: get-version
        run: echo "::set-output name=vinyldns_version::$(curl -s https://api.github.com/repos/vinyldns/vinyldns/releases | jq -rc '.[0].tag_name')"

      - name: Checkout current branch (full)
        uses: actions/checkout@v2
        with:
          ref: ${{ steps.get-version.outputs.vinyldns_version }}
          fetch-depth: 0

      - name: Login to Docker Hub
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_USER }}
          password: ${{ secrets.DOCKER_TOKEN }}

      - name: Import Content Trust Key
        run: docker trust key load <(echo "${SIGNING_KEY}") --name vinyldns_svc
        env:
          SIGNING_KEY: ${{ secrets.SIGNING_KEY }}
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}

      # This will publish the latest release
      - name: Publish API Docker Image
        run: make -C build/docker/api publish
        env:
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}

  docker-release-portal:
    name: Release Portal Docker Image
    needs: [ verify, create-gh-release ]
    runs-on: ubuntu-latest
    if: github.event.inputs.publish-images == 'true'

    steps:
      [the same Get Version, Checkout, Login to Docker Hub, and Import Content Trust Key steps as docker-release-api]
      # This will publish the latest release
      - name: Publish Portal Docker Image
        run: make -C build/docker/portal publish
        env:
          DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE: ${{ secrets.DOCKER_CONTENT_TRUST_REPOSITORY_PASSPHRASE }}
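
The release workflow derives the version string in two different ways: the create-gh-release job reads it out of `version.sbt` with awk, while the docker jobs ask the GitHub releases API for the most recently published tag. Both commands can be run locally as shown below; the `version.sbt` content in the comment is a hypothetical example of the usual single-assignment form, not copied from the repository.

```bash
# From a repository checkout: version.sbt typically holds one assignment such as
#   ThisBuild / version := "0.10.4"   (hypothetical example)
# awk splits the line on double quotes and prints the second field, i.e. the bare version.
VERSION=$(awk -F'"' '{print $2}' ./version.sbt)
echo "$VERSION"

# From the GitHub API: take the tag name of the most recent release, as the docker jobs do.
LATEST_TAG=$(curl -s https://api.github.com/repos/vinyldns/vinyldns/releases | jq -rc '.[0].tag_name')
echo "$LATEST_TAG"
```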

.github/workflows/verify.yml (vendored, new file, 35 lines)
@@ -0,0 +1,35 @@
name: Verify and Test

defaults:
  run:
    shell: bash

on:
  pull_request:
    branches: [ '*' ]
  push:
    branches: [ 'master','main' ]

env:
  GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

jobs:
  build:
    name: Run Tests
    runs-on: ubuntu-latest
    if: "!contains(github.event.head_commit.message, 'ci skip')"

    steps:
      - name: Checkout current branch (full)
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Build and Test
        run: cd build/ && ./assemble_api.sh && ./run_all_tests.sh

      - name: Codecov
        uses: codecov/codecov-action@v2
        with:
          token: ${{ secrets.CODECOV_TOKEN }}
          fail_ci_if_error: false
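
The new verify workflow is intentionally thin: all of the real work lives in the repository's own build scripts, so the same checks can be reproduced locally, and any commit whose message contains the text `ci skip` bypasses the job. A brief sketch of both, assuming a checked-out working copy:

```bash
# Run the same build and test pipeline the workflow runs (from the repository root).
cd build/ && ./assemble_api.sh && ./run_all_tests.sh

# Skip CI for a trivial change by including 'ci skip' in the commit message.
git commit -m "Fix typo in README (ci skip)"
```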

.gitignore (vendored, 6 changed lines)
@@ -31,4 +31,8 @@ tmp.out
 .vscode
 project/metals.sbt
 .bsp
-docker/data
+quickstart/data
+**/.virtualenv
+**/.venv*
+**/*cache*
+**/artifacts/

AUTHORS.md (29 changed lines)
@@ -1,23 +1,12 @@
 # Authors
 
 This project would not be possible without the generous contributions of many people. Thank you! If you have contributed
 in any way, but do not see your name here, please open a PR to add yourself (in alphabetical order by last name)!
 
-## Maintainers
-- Paul Cleary
-- Ryan Emerle
-- Nima Eskandary
-
-## Tool Maintainers
-- Mike Ball: vinyldns-cli, vinyldns-terraform
-- Nathan Pierce: vinyldns-ruby
-
-## DNS SMEs
-- Joe Crowe
-- David Back
-- Hong Ye
-
 ## Contributors
 
+- David Back
+- Mike Ball
 - Tommy Barker
 - Robert Barrimond
 - Charles Bitter
@@ -25,11 +14,14 @@
 - Maulon Byron
 - Shirlette Chambers
 - Varsha Chandrashekar
+- Paul Cleary
 - Peter Cline
 - Kemar Cockburn
 - Luke Cori
+- Joe Crowe
 - Jearvon Dharrie
 - Andrew Dunn
+- Ryan Emerle
 - David Grizzanti
 - Alejandro Guirao
 - Daniel Jin
@@ -37,16 +29,19 @@
 - Krista Khare
 - Patrick Lee
 - Sheree Liu
+- Michael Ly
 - Deepak Mohanakrishnan
 - Jon Moore
 - Palash Nigam
 - Joshulyne Park
+- Nathan Pierce
 - Michael Pilquist
 - Sriram Ramakrishnan
 - Khalid Reid
 - Timo Schmid
 - Trent Schmidt
 - Ghafar Shah
+- Rebecca Star
 - Jess Stodola
 - Juan Valencia
 - Anastasia Vishnyakova
@@ -54,3 +49,5 @@
 - Fei Wan
 - Andrew Wang
 - Peter Willis
+- Britney Wright
+- Hong Ye

CODE_OF_CONDUCT.md (long paragraphs re-wrapped to roughly 120-character lines; shown unwrapped below)
@@ -1,8 +1,11 @@
-# VinylDNS Code of Conduct
+# Contributor Covenant Code of Conduct
 
 ## Our Pledge
 
 In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
 
 ## Our Standards
@@ -24,23 +27,36 @@
 ## Our Responsibilities
 
 Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
 
 Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
 
 ## Scope
 
 This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
 
 ## Enforcement
 
-Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at vinyldns-core@googlegroups.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at [Comcast_Open_Source_Services@comcast.com](mailto:Comcast_Open_Source_Services@comcast.com). All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
 
 Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
 
 ## Attribution
 
 This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
 
 [homepage]: http://contributor-covenant.org
 
 [version]: http://contributor-covenant.org/version/1/4/

CONTRIBUTING.md (174 changed lines; most of the file was re-wrapped to roughly 120-character lines, with the substantive changes shown below)
@@ -1,7 +1,9 @@
 # Contributing to VinylDNS
 
 The following are a set of guidelines for contributing to VinylDNS and its associated repositories.
 
 ## Table of Contents
 
 * [Code of Conduct](#code-of-conduct)
 * [Issues](#issues)
 * [Working on an Issue](#working-on-an-issue)
@@ -18,112 +20,139 @@
 * [Contributor License Agreement](#contributor-license-agreement)
 * [Modifying your Pull Request](#modifying-your-pull-requests)
 * [Pull Request Approval](#pull-request-approval)
-* [Release Management](#release-management)
 
 ## Code of Conduct
 
-This project and everyone participating in it are governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By
-participating, you agree to this Code. Please report any violations to the code of conduct to vinyldns-core@googlegroups.com.
+This project and everyone participating in it are governed by the
+[VinylDNS Code Of Conduct](https://github.com/vinyldns/vinyldns/blob/master/CODE_OF_CONDUCT.md). By participating, you
+agree to this Code.
 
 ## Issues
 
-Work on VinylDNS is tracked by [Github Issues](https://guides.github.com/features/issues/). To contribute to VinylDNS,
+Work on VinylDNS is tracked by [GitHub Issues](https://guides.github.com/features/issues/). To contribute to VinylDNS,
 you can join the discussion on an issue, submit a Pull Request to resolve the issue, or make an issue of your own.
 VinylDNS issues are generally labeled as bug reports, feature requests, or maintenance requests.
 
 ### Working on an Issue
 
 If you would like to contribute to VinylDNS, you can look through `good first issue` and `help wanted` issues. We keep a
 list of these issues around to encourage participation in building the platform. In the issue list, you can chose
 "Labels" and choose a specific label to narrow down the issues to review.
 
 - **Beginner issues**: only require a few lines of code to complete, rather isolated to one or two files. A good way to
   get through changing and testing your code, and meet everyone!
 - **Help wanted issues**: these are more involved than beginner issues, are items that tend to come near the top of our
   backlog but not necessarily in the current development stream.
 
 Besides those issues, you can sort the issue list by number of comments to find one that may be of interest. You do
 _not_ have to limit yourself to _only_ `good first issue` or `help wanted` issues.
 
 When resolving an issue, you generally will do so by making a [Pull Request](#pull-requests), and adding a link to the
 issue.
 
 Before choosing an issue, see if anyone is assigned or has indicated they are working on it (either in comment or via
 Pull Request). If that is the case, then instead of making a Pull Request of your own, you can help out by reviewing
 their Pull Request.
 
 ### Submitting an Issue
 
 When submitting an issue you will notice there are three issue templates to choose from. Before making any issue, please
 go search the issue list (open and closed issues) and check to see if a similar issue has been made. If so, we ask that
 you do not duplicate an issue, but feel free to comment on the existing issue with additional details.
 
 - **Bug report**: If you find a bug in the project you can report it with this template and the VinylDNS team will take
   a look at it. Please be as detailed as possible as it will help us recreate the bug and figure out what exactly is
   going on. If you are unsure whether what you found is a bug, we encourage you to first pop in
-  our [dev gitter](https://gitter.im/vinyldns/vinyldns), and we can help determine if what you're
+  our [discussion board](https://github.com/vinyldns/vinyldns/discussions), and we can help determine if what you're
   seeing is unexpected behavior, and if it is we will direct to make the bug report.
 - **Feature request**: Use this template if you have something you wish to be added to the project. Please be detailed
   when describing why you are requesting the feature, what you want it to do, and alternative solutions you have
   considered.
-- **Maintenance request**: This template is for suggesting upgrades to the existing code base. This could include
-  code refactoring, new libraries, additional testing, among other things. Please be detailed when describing the
-  reason for the maintenance, and what benefits will come out of it. Please describe the scope of the change, and
-  what parts of the system will be impacted.
 
 ### Discussion Process
 
 Some issues may require discussion with the community before proceeding to implementation. This can happen if the issue
 is a larger change, for example a big refactoring or new feature. The VinylDNS maintainers may label an issue for
 **Discussion** in order to solicit more detail before proceeding. If the issue is straightforward and/or well documented,
 it can be implemented immediately by the submitter. If the submitter is unable to make the changes required to address
 the issue, the VinylDNS maintainers will prioritize the work in our backlog.
 
 ## Pull Requests
 
 Contributions to VinylDNS are generally made
 via [Github Pull Requests](https://help.github.com/articles/about-pull-requests/). Most Pull Requests are related to
 an [issue](#issues), and will have a link to the issue in the Pull Request.
 
 ### General Flow
 
 We follow the standard *GitHub Flow* for taking code contributions. The following is the process typically followed:
 
 1. Create a fork of the repository that you want to contribute code to
 1. Clone your forked repository to your local machine
 1. In your local machine, add a remote to the "main" repository, we call this "upstream" by running
    `git remote add upstream https://github.com/vinyldns/vinyldns.git`. Note: you can also use `ssh` instead of `https`
 1. Create a local branch for your work `git checkout -b your-user-name/user-branch-name`. Add whatever your GitHub user
    name is before whatever you want your branch to be.
 1. Begin working on your local branch
 1. Be sure to add necessary unit, integration, and functional tests, see
    the [Testing](https://github.com/vinyldns/vinyldns/blob/master/DEVELOPER_GUIDE.md#testing) section of the Developer Guide.
 1. Make sure you run all builds before posting a Pull Request! It's faster to run everything locally rather than waiting
    for the build server to complete its job.
    See [DEVELOPER_GUIDE.md](https://github.com/vinyldns/vinyldns/blob/master/DEVELOPER_GUIDE.md) for information on
    local development.
 1. When you are ready to contribute your code, run `git push origin your-user-name/user-branch-name` to push your
    changes to your _own fork_
 1. Go to the [VinylDNS main repository](https://github.com/vinyldns/vinyldns.git) (or whatever repo you are contributing to)
    and you will see your change waiting and a link to "Create a Pull Request". Click the link to create a Pull Request.
 1. Be as detailed as possible in the description of your Pull Request. Describe what you changed, why you changed it,
    and give a detailed list of changes and impacted files. If your Pull Request is related to an existing issue, be sure
    to link the issue in the Pull Request itself, in addition to the Pull Request description.
 1. You will receive comments on your Pull Request. Use the Pull Request as a dialog on your changes.
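
Strung together, the General Flow above amounts to a handful of git commands. The sketch below condenses them, using placeholder user and branch names taken from the guide's own examples.

```bash
# One-time setup: fork vinyldns/vinyldns on GitHub, then clone your fork and add the upstream remote.
git clone https://github.com/your-user-name/vinyldns.git   # placeholder fork URL
cd vinyldns
git remote add upstream https://github.com/vinyldns/vinyldns.git

# Per change: branch, commit, and push to your own fork, then open the Pull Request on GitHub.
git checkout -b your-user-name/user-branch-name
# ...edit, add unit/integration/functional tests, and run the local builds...
git push origin your-user-name/user-branch-name
```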
|
|
||||||
### Pull Request Requirements
|
### Pull Request Requirements
|
||||||
|
|
||||||
#### Commit Messages
|
#### Commit Messages
|
||||||
|
|
||||||
* Limit the first line to 72 characters or fewer.
|
* Limit the first line to 72 characters or fewer.
|
||||||
* Use the present tense ("Add validation" not "Added validation").
|
* Use the present tense ("Add validation" not "Added validation").
|
||||||
* Use the imperative mood ("Move database call" not "Moves database call").
|
* Use the imperative mood ("Move database call" not "Moves database call").
|
||||||
* Reference issues and other pull requests liberally after the first line. Use [GitHub Auto Linking](https://help.github.com/articles/autolinked-references-and-urls/)
|
* Reference issues and other pull requests liberally after the first line.
|
||||||
to link your Pull Request to other issues.
|
Use [GitHub Auto Linking](https://help.github.com/articles/autolinked-references-and-urls/)
|
||||||
|
to link your Pull Request to other issues.
|
||||||
* Use markdown syntax as much as you want
|
* Use markdown syntax as much as you want
|
||||||
|
|
||||||
#### Testing
|
#### Testing
|
||||||
When making changes to the VinylDNS codebase, be sure to add necessary unit, integration, and functional tests.
|
|
||||||
For specifics on our tests, see the [Testing](DEVELOPER_GUIDE.md#testing) section of the Developer Guide.
|
When making changes to the VinylDNS codebase, be sure to add necessary unit, integration, and functional tests. For
|
||||||
|
specifics on our tests, see the [Testing](https://github.com/vinyldns/vinyldns/blob/master/DEVELOPER_GUIDE.md#testing)
|
||||||
|
section of the Developer Guide.
|
||||||
|
|
||||||
#### Documentation Edits
|
#### Documentation Edits
|
||||||
Documentation for the VinylDNS project lives in files such as this one in the root of the project directory, as well
|
|
||||||
as in `modules/docs/src/main/tut` for the docs you see on [vinyldns.io](https://vinyldns.io). Many changes, such as those that impact
|
Documentation for the VinylDNS project lives in files such as this one in the root of the project directory, as well as
|
||||||
an API endpoint, config, portal usage, etc, will also need corresponding documentation edited to prevent it from going stale. The VinylDNS [gh-pages branch README](https://github.com/vinyldns/vinyldns/tree/gh-pages#vinyldns-documentation-site) has information on how to run and edit the documentation page.
|
in `modules/docs/src/main/mdoc` for the docs you see on [vinyldns.io](https://vinyldns.io). Many changes, such as those
|
||||||
|
that impact an API endpoint, config, portal usage, etc, will also need corresponding documentation edited to prevent it
|
||||||
|
from going stale. The
|
||||||
|
VinylDNS [gh-pages branch README](https://github.com/vinyldns/vinyldns/tree/gh-pages#vinyldns-documentation-site) has
|
||||||
|
information on how to run and edit the documentation page.
|
||||||
|
|
||||||
#### Style Guides
|
#### Style Guides
|
||||||
* For Scala code we use [Scalastyle](https://www.scalastyle.org/). The configs are `scalastyle-config.xml` and
|
|
||||||
`scalastyle-test-config.xml` for source code and test code respectively
|
* For Scala code we use [Scalastyle](http://www.scalastyle.org/). The configs are `scalastyle-config.xml` and
|
||||||
* We have it set to fail builds if the styling rules are not followed. For example, one of our rules is that all lines must be <= 120 characters, and a build will fail if that is violated.
|
`scalastyle-test-config.xml` for source code and test code respectively
|
||||||
* For our python code that we use for functional testing, we generally try to follow [PEP 8](https://www.python.org/dev/peps/pep-0008/)
|
* We have it set to fail builds if the styling rules are not followed. For example, one of our rules is that all
|
||||||
|
lines must be <= 120 characters, and a build will fail if that is violated.
|
||||||
|
* For our python code that we use for functional testing, we generally try to
|
||||||
|
follow [PEP 8](https://www.python.org/dev/peps/pep-0008/)
|
||||||
|
|
||||||
#### License Header Checks
|
#### License Header Checks
|
||||||
VinylDNS is configured with [sbt-header](https://github.com/sbt/sbt-header). All existing scala files have the appropriate
|
|
||||||
header. To add or check for headers, follow these steps:
|
VinylDNS is configured with [sbt-header](https://github.com/sbt/sbt-header). All existing scala files have the
|
||||||
|
appropriate header. To add or check for headers, follow these steps:
|
||||||
|
|
||||||
##### API
|
##### API
|
||||||
|
|
||||||
You can check for headers in the API in `sbt` with:
|
You can check for headers in the API in `sbt` with:
|
||||||
|
|
||||||
```
|
```
|
||||||
@ -137,6 +166,7 @@ If you add a new file, you can add the appropriate header in `sbt` with:
|
|||||||
```
|
```
|
||||||
|
|
||||||
##### Portal
|
##### Portal
|
||||||
|
|
||||||
You can check for headers in the Portal in `sbt` with:
|
You can check for headers in the Portal in `sbt` with:
|
||||||
|
|
||||||
```
|
```
|
||||||
@ -150,36 +180,34 @@ If you add a new file, you can add the appropriate header in `sbt` with:
|
|||||||
```
|
```
|
||||||
|
|
||||||
#### Contributor License Agreement

Before Comcast merges your code into the project you must sign the
[Comcast Contributor License Agreement (CLA)](https://gist.github.com/ComcastOSS/a7b8933dd8e368535378cda25c92d19a).

If you haven't previously signed a Comcast CLA, you'll automatically be asked to when you open a pull request.
Alternatively, we can send you a PDF that you can sign and scan back to us. Please create a new GitHub issue to request
a PDF version of the CLA.

### Modifying your Pull Requests

Oftentimes you will need to make revisions to the Pull Requests you submit. This is part of the standard process
of code review. There are different ways to make revisions, but the following process is pretty standard; a
consolidated command sketch follows the list.

1. Sync with upstream
   first. `git checkout master && git fetch upstream && git rebase upstream master && git push origin master`
1. Checkout your branch on your local `git checkout your-user-name/user-branch-name`
1. Sync your branch with latest `git rebase master`. Note: If you have merge conflicts, you will have to resolve them
1. Revise your Pull Request, making changes recommended in the comments / code review
1. Stage and commit these changes on top of your existing commits
1. When all tests pass, `git push origin your-user-name/user-branch-name` to revise your commit. _Note: If you rebased
   or altered the commit history, you will have to force push with a `-f` flag._ GitHub automatically recognizes the
   update and will re-run verification on your Pull Request!

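As referenced above, the revision flow condenses to the following shell sequence (remote names `origin` for your fork and `upstream` for the main repository are assumed, matching the commands in the list):

```shell
# Bring your local master up to date with upstream and push it to your fork
git checkout master && git fetch upstream && git rebase upstream master && git push origin master

# Switch to your feature branch and rebase it on the updated master
git checkout your-user-name/user-branch-name
git rebase master            # resolve any merge conflicts here

# Make the requested changes, then stage and commit them on top of your existing commits
git add .
git commit -m "Address code review feedback"

# Push the revised branch; -f is only needed if you rebased or rewrote history
git push -f origin your-user-name/user-branch-name
```
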
### Pull Request Approval

A pull request must satisfy our [pull request requirements](#pull-request-requirements).

Afterwards, if a Pull Request is approved, a maintainer of the project will merge it. If you are a maintainer, you can
merge your Pull Request once you have the approval of at least 2 other maintainers.

> Note: The first time you make a Pull Request, add yourself to the authors list [here](https://github.com/vinyldns/vinyldns/blob/master/AUTHORS.md) as part of the Pull Request

@ -1,236 +1,310 @@

# Developer Guide

## Table of Contents

- [Developer Requirements (Local)](#developer-requirements-local)
- [Developer Requirements (Docker)](#developer-requirements-docker)
- [Project Layout](#project-layout)
    * [Core](#core)
    * [API](#api)
    * [Portal](#portal)
    * [Documentation](#documentation)
- [Running VinylDNS Locally](#running-vinyldns-locally)
    * [Starting the API Server](#starting-the-api-server)
    * [Starting the Portal](#starting-the-portal)
- [Testing](#testing)
    * [Unit Tests](#unit-tests)
    * [Integration Tests](#integration-tests)
        + [Running both](#running-both)
    * [Functional Tests](#functional-tests)
        + [Running Functional Tests](#running-functional-tests)
            - [API Functional Tests](#api-functional-tests)
        + [Setup](#setup)
            - [Functional Test Context](#functional-test-context)
            - [Partitioning](#partitioning)
            - [Really Important Test Context Rules!](#really-important-test-context-rules)
            - [Managing Test Zone Files](#managing-test-zone-files)

## Developer Requirements (Local)

- Java 8+
- Scala 2.12
- sbt 1.4+
- curl
- docker
- docker-compose
- GNU Make 3.82+
- grunt
- npm
- Python 3.5+

## Developer Requirements (Docker)

Since almost everything can be run with Docker and GNU Make, if you don't want to set up a local development
environment, then you simply need:

- `Docker` v19.03+ _(earlier versions may work fine)_
- `Docker Compose` v2.0+ _(earlier versions may work fine)_
- `GNU Make` v3.82+
- `Bash` 3.2+
- Basic utilities: `awk`, `sed`, `curl`, `grep`, etc. may be needed for scripts

## Project Layout

[SYSTEM_DESIGN.md](SYSTEM_DESIGN.md) provides a high-level architectural overview of VinylDNS and interoperability of
its components.

The main codebase is a multi-module Scala project with multiple sub-modules. To start working with the project, from the
root directory run `sbt`. Most of the code can be found in the `modules` directory. The following modules are present:

- `root` - this is the parent project; if you run tasks here, it will run against all sub-modules
- [`core`](#core): core modules that are used by both the API and portal, such as cryptography implementations.
- [`api`](#api): the API is the main engine for all of VinylDNS. This is the most active area of the codebase, as
  everything else typically just funnels through the API.
- [`portal`](#portal): The portal is a user interface wrapper around the API. Most of the business rules, logic, and
  processing can be found in the API. The _only_ features in the portal not found in the API are creation of users and
  user authentication.
- [`docs`](#documentation): documentation for VinylDNS.

### Core

Code that is used across multiple modules in the VinylDNS ecosystem lives in `core`.

#### Code Layout

- `src/main` - the main source code
- `src/test` - unit tests

### API

The API is the RESTful API for interacting with VinylDNS. The following technologies are used:

- [Akka HTTP](https://doc.akka.io/docs/akka-http/current/) - Used primarily for REST and HTTP calls.
- [FS2](https://functional-streams-for-scala.github.io/fs2/) - Used for backend change processing off of message queues.
  FS2 has back-pressure built in, and gives us tools like throttling and concurrency.
- [Cats Effect](https://typelevel.org/cats-effect/) - A replacement of `Future` with the `IO` monad.
- [Cats](https://typelevel.org/cats) - Used for functional programming.
- [PureConfig](https://pureconfig.github.io/) - For loading configuration values.

The API has the following dependencies:

- MySQL - the SQL database that houses the data
- SQS - for managing concurrent updates and enabling high-availability
- Bind9 - for testing integration with a real DNS system

#### Code Layout

The API code can be found in `modules/api`.

- `src/it` - integration tests
- `src/main` - the main source code
- `src/test` - unit tests
- `src/universal` - items that are packaged in the Docker image for the VinylDNS API

The package structure for the source code follows:

- `vinyldns.api.domain` - contains the core front-end logic. This includes things like the application services,
  repository interfaces, domain model, validations, and business rules.
- `vinyldns.api.engine` - the back-end processing engine. This is where we process commands including record changes,
  zone changes, and zone syncs.
- `vinyldns.api.protobuf` - marshalling and unmarshalling to and from protobuf to types in our system
- `vinyldns.api.repository` - repository implementations live here
- `vinyldns.api.route` - HTTP endpoints

### Portal

The project is built using:

- [Play Framework](https://www.playframework.com/documentation/2.6.x/Home)
- [AngularJS](https://angularjs.org/)

The portal is _mostly_ a shim around the API. Most actions in the user interface are translated into API calls.

The features that the Portal provides that are not in the API include:

- Authentication against LDAP
- Creation of users - when a user logs in for the first time, VinylDNS automatically creates a user and new credentials
  for them in the database with their LDAP information.

#### Code Layout

The portal code can be found in `modules/portal`.

- `app` - source code for portal back-end
    - `models` - data structures that are used by the portal
    - `views` - HTML templates for each web page
    - `controllers` - logic for updating data
- `conf` - configurations and endpoint routes
- `public` - source code for portal front-end
    - `css` - stylesheets
    - `images` - images, including icons, used in the portal
    - `js` - scripts
    - `mocks` - mock JSON used in Grunt tests
    - `templates` - modal templates
- `test` - unit tests for portal back-end

### Documentation

Code used to build the microsite content for the API, operator and portal guides at https://www.vinyldns.io/. Some
settings for the microsite are also configured in `build.sbt` of the project root.

#### Code Layout

- `src/main/resources` - Microsite resources and configurations
- `src/main/mdoc` - Content for microsite web pages

## Running VinylDNS Locally

VinylDNS can be started in the background by running the [quickstart instructions](README.md#quickstart) located in the
README. However, VinylDNS can also be run in the foreground.

### Starting the API Server

Before starting the API service, you can start the dependencies for local development:

```shell
quickstart/quickstart-vinyldns.sh --deps-only
```

This will start a container running in the background with necessary prerequisites.

Once the prerequisites are running, you can start up sbt by running `sbt` from the root directory.

- `project api` to change the sbt project to the API
- `reStart` to start up the API server
    - To enable interactive debugging, you can run `set Revolver.enableDebugging(port = 5020, suspend = true)` before
      running `reStart`
- Wait until you see the message `VINYLDNS SERVER STARTED SUCCESSFULLY` before working with the server
- To stop the VinylDNS server, run `reStop` from the api project
- To stop the dependent Docker containers: `utils/clean-vinyldns-containers.sh`

See the [API Configuration Guide](https://www.vinyldns.io/operator/config-api) for information regarding API
configuration.

### Starting the Portal

To run the portal locally, you _first_ have to start up the VinylDNS API Server. This can be done by following the
instructions for [Starting the API Server](#starting-the-api-server) or by using the QuickStart:

```shell
quickstart/quickstart-vinyldns.sh --api-only
```

Once that is done, in the same `sbt` session or a different one, go to `project portal` and then
execute `;preparePortal; run`.

See the [Portal Configuration Guide](https://www.vinyldns.io/operator/config-portal) for information regarding portal
configuration.

## Testing

### Unit Tests

1. First, start up your Scala build tool: `build/sbt.sh` (or `sbt` if running outside of Docker).
2. (Optionally) Go to the project you want to work on, for example `project api` for the API; `project portal` for the
   portal.
3. Run _all_ unit tests by just running `test`.
4. Run a single unit test suite by running `testOnly *MySpec`.
5. Run a single unit by filtering the test name using the `-z` argument: `testOnly *MySpec -- -z "some text from test"`
   (see the batch-mode example after this list).
    - [More information on command-line arguments](https://www.scalatest.org/user_guide/using_the_runner)
6. If you are working on a unit test and production code at the same time, use `~` (e.g., `~testOnly *MySpec`) to
   automatically background compile for you!

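As referenced in step 5, the same commands can also be run in sbt batch mode without an interactive shell; this is a sketch with `*MySpec` as a placeholder suite name:

```shell
# Run every unit test in the API module
sbt "project api" test

# Run a single suite, and within it only the cases whose names contain the given text
sbt "project api" "testOnly *MySpec -- -z \"some text from test\""
```
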
### Integration Tests

Integration tests are used to test integration with dependent services. We use Docker to spin up those backend services
for integration test development (a consolidated example follows these steps).

1. Type `quickstart/quickstart-vinyldns.sh --reset --deps-only` to start up dependent background services
1. Run sbt (`build/sbt.sh` or `sbt` locally)
1. Go to the target module in sbt, example: `project api`
1. Run all integration tests by typing `it:test`.
1. Run an individual integration test by typing `it:testOnly *MyIntegrationSpec`
1. You can background compile as well if working on a single spec by using `~it:testOnly *MyIntegrationSpec`
1. You must restart the dependent services (`quickstart/quickstart-vinyldns.sh --reset --deps-only`) before you rerun
   the tests.
1. For the mysql module, you may need to wait up to 30 seconds after starting the services before running the tests for
   setup to complete.

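Put together, a typical integration test run against the API module looks roughly like this (batch-mode sbt shown for brevity; the spec name is a placeholder):

```shell
# Start (or reset) the backing services needed by the integration tests
quickstart/quickstart-vinyldns.sh --reset --deps-only

# Give MySQL a moment to finish initializing before the first run
sleep 30

# Run the whole integration suite for the API module, or a single spec
sbt "project api" it:test
sbt "project api" "it:testOnly *MyIntegrationSpec"
```
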
#### Running both

You can run all unit and integration tests for the api and portal by running `build/verify.sh`.

### Functional Tests

When adding new features, you will often need to write new functional tests that black box / regression test the API.

- The API functional tests are written in Python and live under `modules/api/src/test/functional`.
- The Portal functional tests are written in JavaScript and live under `modules/portal/test`.

#### Running Functional Tests

To run functional tests you can simply execute the following commands:

```shell
build/func-test-api.sh
build/func-test-portal.sh
```

These commands will run the API functional tests and portal functional tests respectively.

##### API Functional Tests

To run functional tests you can simply execute `build/func-test-api.sh`, but if you'd like finer-grained control, you
can work with the `Makefile` in `test/api/functional`:

```shell
# Build and then run the functional test container
make -C test/api/functional build run
```

During iterative test development, you can use `make run-local`, which will bind-mount the current functional tests in
the container, allowing for easier test development.

Additionally, you can pass `--interactive` to `make run` or `make run-local` to drop to a shell inside the container.
From there you can run tests with the `/functional_test/run.sh` command. This allows for finer-grained control over the
test execution process as well as easier inspection of logs.

You can run a specific test by name by running `make run -- -k <name of test function>`. Any arguments after
`make run --` will be passed to the test runner [`test/api/functional/run.sh`](test/api/functional/run.sh).

Finally, you can execute `make run-deps-bg` to start all of the dependencies for the functional tests without running
the tests. This is useful if, for example, you want to use an interactive debugger on your local machine, but host all
of the VinylDNS API dependencies in Docker.

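As an illustration, a focused run during test development might look like the following (the `-k` keyword is a placeholder, and passing extra arguments to `run-local` is assumed to work the same way as for `run`):

```shell
# Build the functional test image once
make -C test/api/functional build

# Bind-mount the local tests and run only the tests whose names match the keyword
make -C test/api/functional run-local -- -k "create_zone"
```
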
#### Setup

We use [pytest](https://docs.pytest.org/en/latest/) for python tests. It is helpful to browse the documentation so
that you are familiar with pytest and how our functional tests operate.

We also use [PyHamcrest](https://pyhamcrest.readthedocs.io/en/release-1.8/) for matchers in order to write easy to read
tests. Please browse that documentation as well so that you are familiar with the different matchers for PyHamcrest.
There aren't a lot, so it should be quick.

In the `modules/api/src/test/functional` directory are a few important files for you to be familiar with:

- `vinyl_python.py` - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
  as building and executing the requests, and giving you back valid responses. For all new API endpoints, there should
  be a corresponding function in the vinyl_client
- `utils.py` - provides general use functions that can be used anywhere in your tests. Feel free to contribute new
  functions here when you see repetition in the code

In the `modules/api/src/test/functional/tests` directory, we have directories / modules for different areas of the
application.

- `batch` - for managing batch updates
- `internal` - for internal endpoints (not intended for public consumption)
- `membership` - for managing groups and users
- `recordsets` - for managing record sets
- `zones` - for managing zones

##### Functional Test Context

Our functional tests use `pytest` contexts. There is a main test context that lives in `shared_zone_test_context.py`
that creates and tears down a shared test context used by many functional tests. The beauty of pytest is that it will
ensure that the test context is stood up exactly once, then all individual tests that use the context are called using
that same context.

The shared test context sets up several things that can be reused:

@ -243,30 +317,33 @@ The shared test context sets up several things that can be reused:

1. A classless IPv4 reverse zone
1. A parent zone that has child zones - used for testing NS record management and zone delegations

##### Partitioning

Each of the test zones is configured in a `partition`. By default, there are four partitions. These partitions are
effectively copies of the zones so that parallel tests can run without interfering with one another.

For instance, there are four zones for the `ok` zone: `ok1`, `ok2`, `ok3`, and `ok4`. The functional tests will handle
distributing which zone is being used by which of the parallel test runners.

As such, you should **never** hardcode the name of the zone. Always get the zone from the `shared_zone_test_context`.
For instance, to get the `ok` zone, you would write:

```python
zone = shared_zone_test_context.ok_zone
zone_name = shared_zone_test_context.ok_zone["name"]
zone_id = shared_zone_test_context.ok_zone["id"]
```

##### Really Important Test Context Rules!

1. Try to use the `shared_zone_test_context` whenever possible! This reduces the time it takes to run functional
   tests (which is in minutes).
1. Be mindful of changes to users, groups, and zones in the shared test context, as doing so could impact downstream
   tests
1. If you do modify any entities in the shared zone context, roll those back when your function completes!

##### Managing Test Zone Files

When functional tests are run, we spin up several Docker containers. One of the Docker containers is a Bind9 DNS server.
If you need to add or modify the test DNS zone files, you can find them in `quickstart/bind9/zones`.

157 MAINTAINERS.md

@ -1,170 +1,39 @@

# Maintainers

## Table of Contents

* [Docker Content Trust](#docker-content-trust)
* [Release Process](#release-process)

## Docker Content Trust

Official VinylDNS Docker images are signed when being pushed to Docker Hub. Docs for Docker Content Trust can be found
at <https://docs.docker.com/engine/security/trust/>.

Content trust is enabled through the `DOCKER_CONTENT_TRUST` environment variable, which must be set to `1`. It is
recommended that in your `~/.bashrc`, you have `export DOCKER_CONTENT_TRUST=1` by default, and if you ever want to turn
it off for a Docker command, add the `--disable-content-trust` flag to the command,
e.g. `docker pull --disable-content-trust ...`.

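For example (the image tag here is only illustrative):

```shell
# Enable Docker Content Trust for every docker command in this shell (typically exported from ~/.bashrc)
export DOCKER_CONTENT_TRUST=1

# Bypass content trust for a single command when you explicitly need an unsigned image
docker pull --disable-content-trust vinyldns/api:latest
```
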
There are multiple Docker repositories on Docker Hub under
the [vinyldns organization](https://hub.docker.com/u/vinyldns/dashboard/). Namely:

* vinyldns/api: images for vinyldns core api engine
* vinyldns/portal: images for vinyldns web client

The offline root key and repository keys are managed by the core maintainer team. The keys managed are:

* root key: also known as the offline key, used to create the separate repository signing keys
* api key: used to sign tagged images in vinyldns/api
* portal key: used to sign tagged images in vinyldns/portal

## Release Process

The release process is automated by GitHub Actions.

To start, create a release in GitHub with the same tag as the version found in `version.sbt`.

The release will perform the following actions:

1. Publish Docker images to `hub.docker.com`
2. Attach artifacts created by the build to the GitHub release

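The GitHub web UI is enough to do this; if you prefer the command line, the GitHub CLI can create the tagged release as well (a sketch, with a placeholder version, assuming `gh` is installed and authenticated):

```shell
# Check the version that the release tag must match
cat version.sbt

# Create the GitHub release (replace v0.0.0 with the version from version.sbt);
# the release workflow then publishes Docker images and attaches build artifacts
gh release create v0.0.0 --title "v0.0.0" --generate-notes
```
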
206 README.md

@ -1,12 +1,9 @@

[Latest release](https://github.com/vinyldns/vinyldns/releases/latest)
[Docker: vinyldns/api](https://hub.docker.com/r/vinyldns/api/tags?page=1&ordering=last_updated)
[Docker: vinyldns/portal](https://hub.docker.com/r/vinyldns/portal/tags?page=1&ordering=last_updated)

<p align="left">
  <a href="https://www.vinyldns.io/">
    <img
      alt="VinylDNS"
      src="img/vinyldns_optimized.svg"

@ -16,106 +13,163 @@

</p>

# VinylDNS

VinylDNS is a vendor-agnostic front-end for enabling self-service DNS and streamlining DNS operations. VinylDNS manages
millions of DNS records supporting thousands of engineers in production at [Comcast](http://www.comcast.com). The
platform provides fine-grained access controls, auditing of all changes, a self-service user interface, secure RESTful
API, and integration with infrastructure automation tools like Ansible and Terraform. It is designed to integrate with
your existing DNS infrastructure, and provides extensibility to fit your installation.

VinylDNS helps secure DNS management via:

- AWS Sig4 signing of all messages to ensure that the message that was sent was not altered in transit
- Throttling of DNS updates to rate limit concurrent updates against your DNS systems
- Encrypting user secrets and TSIG keys at rest and in-transit
- Recording every change made to DNS records and zones

Integration is simple with first-class language support including:

- Java
- Python
- Go
- JavaScript

## Table of Contents

* [Quickstart](#quickstart)
        - [Quickstart Optimization](#quickstart-optimization)
* [Things to Try in the Portal](#things-to-try-in-the-portal)
    + [Verifying Your Changes](#verifying-your-changes)
    + [Other things to note](#other-things-to-note)
* [Code of Conduct](#code-of-conduct)
* [Developer Guide](#developer-guide)
* [Contributing](#contributing)
* [Maintainers and Contributors](#maintainers-and-contributors)
* [Credits](#credits)

## Quickstart

Docker images for VinylDNS live on Docker Hub at <https://hub.docker.com/u/vinyldns/>. To start up a local instance of
VinylDNS on your machine with docker:

1. Ensure that you have [docker](https://docs.docker.com/install/)
   and [docker-compose](https://docs.docker.com/compose/install/)
1. Clone the repo: `git clone https://github.com/vinyldns/vinyldns.git`
1. Navigate to repo: `cd vinyldns`
1. Run `./quickstart/quickstart-vinyldns.sh`. This will start up the api at `localhost:9000` and the portal
   at `localhost:9001`
1. See [Things to Try in the Portal](#things-to-try-in-the-portal) for getting familiar with the Portal
1. To stop the local setup, run `./utils/clean-vinyldns-containers.sh`.

There exist several clients at <https://github.com/vinyldns> that can be used to make API requests, using the
endpoint `http://localhost:9000`.

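For reference, the Quickstart steps above condense to the following shell session:

```shell
# Clone the repo and start a local VinylDNS (API on localhost:9000, portal on localhost:9001)
git clone https://github.com/vinyldns/vinyldns.git
cd vinyldns
./quickstart/quickstart-vinyldns.sh

# When you are done, tear the local setup down
./utils/clean-vinyldns-containers.sh
```
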
#### Quickstart Optimization

If you are experimenting with Quickstart, you may encounter a delay each time you run it. This is because the API and
Portal are rebuilt every time you launch Quickstart. If you'd like to cache the builds of the API and Portal, you may
want to first run:

| Script                     | Description                                                                   |
|----------------------------|-------------------------------------------------------------------------------|
| `build/assemble_api.sh`    | This will create the API `jar` file which will then be used by Quickstart    |
| `build/assemble_portal.sh` | This will create the Portal `zip` file which will then be used by Quickstart |

Once these scripts are run, the artifacts are placed into the `artifacts/` directory and will be reused for each
Quickstart launch. If you'd like to regenerate the artifacts, simply delete them and rerun the scripts above.

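In practice, that means a one-time build step before launching Quickstart:

```shell
# Build the API and Portal artifacts once; they land in artifacts/ and are reused by Quickstart
build/assemble_api.sh
build/assemble_portal.sh

# Subsequent Quickstart launches pick up the cached artifacts
./quickstart/quickstart-vinyldns.sh
```
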
## Things to Try in the Portal

1. View the portal at <http://localhost:9001> in a web browser
2. Log in with the credentials `professor` and `professor`
3. Navigate to the `groups` tab: <http://localhost:9001/groups>
4. Click on the **New Group** button and create a new group, the group id is the uuid in the url after you view the
   group
5. Connect a zone by going to the `zones` tab: <http://localhost:9001/zones>.
    1. Click the `-> Connect` button
    2. For `Zone Name` enter `ok` with an email of `test@test.com`
    3. For `Admin Group`, choose a group you created from the previous step
    4. Leave everything else as-is and click the `Connect` button at the bottom of the form
6. A new zone `ok` should appear in your `My Zones` tab _(you may need to refresh your browser)_
7. You will see that some records are preloaded in the zone already, this is because these records are preloaded in the
   local docker DNS server and VinylDNS automatically syncs records with the backend DNS server upon zone connection
8. From here, you can create DNS record sets in the **Manage Records** tab, and manage zone settings and ***ACL rules***
   in the **Manage Zone** tab
9. To try creating a DNS record, click on the **Create Record Set** button under
   Records, `Record Type = A, Record Name = my-test-a, TTL = 300, IP Addresses = 1.1.1.1`
10. Click on the **Refresh** button under Records, you should see your new record created

## Other things to note
|
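As a quick sanity check that the API behind the portal is up, you can hit its `/ping` endpoint, the same endpoint the Quickstart scripts later in this change poll during startup (treating a `200` as healthy is an assumption on my part; the scripts themselves only check that `curl` succeeds):

```bash
# Print the HTTP status code returned by the API health endpoint on port 9000
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9000/ping
```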
### Verifying Your Changes

VinylDNS will synchronize with the DNS backend. For the Quickstart this should be running on port `19001` on
`localhost`.

To verify your changes, you can use a DNS resolution utility like `dig`:

```bash
$ dig @127.0.0.1 -p 19001 +short my-test-a.ok
1.1.1.1
```

This tells `dig` to use `127.0.0.1` as the resolver on port `19001`. The `+short` just makes the output a bit less
verbose. Finally, the record we're looking up is `my-test-a.ok`. You can see the returned output of `1.1.1.1` matches
the record data we entered.
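If you want to see the full DNS response (header flags, TTL, query time) rather than just the answer data, simply omit `+short`; this is plain `dig` behavior, nothing VinylDNS-specific:

```bash
# Same lookup as above, but with dig's complete output
dig @127.0.0.1 -p 19001 my-test-a.ok A
```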

### Other things to note

1. Upon connecting to a zone for the first time, a zone sync is executed to provide VinylDNS a copy of the records in
   the zone
1. Changes made via VinylDNS are made against the DNS backend; you do not need to sync the zone further to push those
   changes out
1. If changes to the zone are made outside of VinylDNS, then the zone will have to be re-synced to give VinylDNS a copy
   of those records
1. If you wish to modify the URL used in the creation process from `http://localhost:9000` to,
   say, `http://vinyldns.yourdomain.com:9000`, you can modify the `quickstart/.env` file before execution.
1. The default ports for the Portal and API can also be changed; further configuration is described
   at https://www.vinyldns.io/operator/config-portal & https://www.vinyldns.io/operator/config-api
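For example, a minimal sketch of such an override, assuming `quickstart/.env` keeps the same variable names as the old `bin/.env` removed later in this change:

```bash
# quickstart/.env — point the generated URLs at your own host instead of localhost
VINYLDNS_API_URL=http://vinyldns.yourdomain.com:9000
VINYLDNS_PORTAL_URL=http://vinyldns.yourdomain.com:9001
```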

## Code of Conduct

This project, and everyone participating in it, is governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By
participating, you agree to this Code. Please report any violations to vinyldns-core@googlegroups.com.
## Developer Guide

See [DEVELOPER_GUIDE.md](DEVELOPER_GUIDE.md) for instructions on setting up VinylDNS locally.

## Contributing

See the [Contributing Guide](CONTRIBUTING.md).

## Contact

- [Gitter](https://gitter.im/vinyldns)
- If you have any security concerns, please contact the maintainers directly at vinyldns-core@googlegroups.com
## Maintainers and Contributors

The current maintainers (people who can merge pull requests) are:

- Ryan Emerle ([@remerle](https://github.com/remerle))
- Sriram Ramakrishnan ([@sramakr](https://github.com/sramakr))
- Jim Wakemen ([@jwakemen](https://github.com/jwakemen))

See [AUTHORS.md](AUTHORS.md) for the full list of contributors to VinylDNS.

See [MAINTAINERS.md](MAINTAINERS.md) for documentation specific to maintainers.
||||||
## Credits

VinylDNS would not be possible without the help of many other pieces of open source software. Thank you open source
world!

Initial development of DynamoDBHelper done by [Roland Kuhn](https://github.com/rkuhn) from
https://github.com/akka/akka-persistence-dynamodb/blob/8d7495821faef754d97759f0d3d35ed18fc17cc7/src/main/scala/akka/persistence/dynamodb/journal/DynamoDBHelper.scala

Given the Apache 2.0 license of VinylDNS, we specifically want to call out the following libraries and their
corresponding licenses shown below.

- [logback-classic](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html)
- [logback-core](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html)
- [h2 database](http://h2database.com) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/)
- [pureconfig](https://github.com/pureconfig/pureconfig) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/)
- [pureconfig-macros](https://github.com/pureconfig/pureconfig) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/)
- [junit](https://junit.org/junit4/) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html)
44
ROADMAP.md
44
ROADMAP.md
@ -1,44 +0,0 @@
|
|||||||
# Roadmap
|
|
||||||
What is a Roadmap in open source? VinylDNS would like to communicate _direction_ in terms of the features and needs
|
|
||||||
expressed by the VinylDNS community. In open source, demand is driven by the community through
|
|
||||||
GitHub issues. As more members join the discussion, we anticipate that the "plan" will change. This document will be updated regularly to reflect the changes in prioritization.
|
|
||||||
|
|
||||||
This document is organized by priority / planned release timeframes. Reading top-down should give you a sense of the order in which new features are planned to be delivered.
|
|
||||||
|
|
||||||
## Completed
|
|
||||||
|
|
||||||
- **Batch Change** - users can now submit multiple changes across zones at the same time. Included in batch change are:
|
|
||||||
- **Manual Review** - the ability to manually review certain DNS changes
|
|
||||||
- **Scheduled Changes** - the ability to schedule certain DNS changes to occur at a point in time in the future (requires manual processing right now)
|
|
||||||
- **Bulk import** - allows users to bulk load DNS changes from a CSV file
|
|
||||||
- **Global ACL Rules** - allows override on Shared / Record ownership
|
|
||||||
- **Global record search** - allows users to search for records across zones
|
|
||||||
- **Backend Providers** - allow connectivity to DNS backends _other_ than DDNS, e.g. AWS Route 53
|
|
||||||
|
|
||||||
## Next up?
|
|
||||||
|
|
||||||
We are currently reviewing our roadmap. Some of the features we have discussed are below. If you have features you would like to contribute, drop us a line!
|
|
||||||
|
|
||||||
## Zone Management
|
|
||||||
Presently VinylDNS _connects to existing zones_ for management. Zone Management will allow users
|
|
||||||
to create and manage zones in the authoritative systems themselves. The following high-level features are planned:
|
|
||||||
|
|
||||||
1. Server Groups - allow VinylDNS admins to setup Server Groups. A Server Group consists of the primary,
|
|
||||||
secondary, and other information for a specific DNS backend. Server Groups are _vendor_ specific, plugins will be
|
|
||||||
created for specific DNS vendors
|
|
||||||
1. Quotas - restrictions defined for a specific Server Group. These include items like `maxRecordSetsPerZone`, `concurrentUpdates`, and more.
|
|
||||||
1. Zone Creation - allow the creation of a sub-domain from an existing Zone. Users choose the Server Group where
|
|
||||||
the zone will live, VinylDNS creates the delegation as well as access controls for the new zone.
|
|
||||||
1. Zone Maintenance - support the modification of zone properties, like default SOA record settings.
|
|
||||||
|
|
||||||
## Other
|
|
||||||
There are several other features that we would like to support. We will be opening up these for RFC shortly. These include:
|
|
||||||
|
|
||||||
1. DNSSEC - There is no first-class support for DNSSEC. That feature set is being defined.
|
|
||||||
1. Record meta data - VinylDNS will allow the "tagging" of DNS records with arbitrary key-value pairs
|
|
||||||
1. DNS Global Service Load Balancing (GSLB) - Support for common DNS GSLB use cases and integration with various GSLB vendors
|
|
||||||
1. A new user interface
|
|
||||||
1. Additional automation tools
|
|
||||||
1. VinylDNS admin user experience - pull a lot of things from config into the Portal UI for simpler administration
|
|
||||||
1. Split views / zone views
|
|
||||||
|
|
@ -1,56 +1,57 @@
|
|||||||
# System Design

## Table of Contents

- [Components](#components)
- [Process Flow](#process-flow)
- [Integration](#integration)

## Components

The following diagram illustrates the major components in the VinylDNS ecosystem and the external systems they interact
with.

![Architecture Diagram](img/vinyldns-architecture.png)

| Component        | Description                                                                                                                                                       |
|------------------|-------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Portal           | Web user interface to interact with the VinylDNS API                                                                                                               |
| API              | RESTful endpoints to allow interaction with VinylDNS                                                                                                               |
| API Worker Nodes | These are API components with `processing-disabled` set to `false` (see [documentation](https://www.vinyldns.io/operator/config-api.html#processing-disabled))     |
| Message queue    | Queue for DNS commands to enable flow control to the DNS backends (see [documentation](https://www.vinyldns.io/operator/pre.html#message-queues))                  |
| Database         | Stores information about users, membership, and DNS records                                                                                                        |
| DNS Backend(s)   | The DNS backend servers which VinylDNS will query and update                                                                                                       |
| LDAP Service     | The optional LDAP service that VinylDNS can be configured to communicate with (see [documentation](https://www.vinyldns.io/operator/setup-ldap.html#setup-ldap))   |
## Process Flow

1. LDAP service authenticates user credentials and grants access to the portal.
1. If the user is accessing the portal for the first time, VinylDNS credentials are generated and stored.
1. User navigates the portal or uses integration tooling to generate a signed API request.
1. When the API receives a request, it loads the credentials for the calling user from the database and validates the
   request signature to ensure that the request was not modified in transit.
1. The request is then validated to ensure that:
    - the request data is correct
    - the request passes all validation checks
    - the user has access to make the change
1. Assuming the request is in good order, the request is put on a message queue for handling.
1. One of the VinylDNS API server instances pulls the message from the queue for processing. For record changes, a DDNS
   message is issued to the DNS backend server.
1. When the message completes processing, it is removed from the message queue. The changes are applied to the VinylDNS
   database along with an audit record for the request.
## Integration

Integrating with VinylDNS is simple since each API endpoint is effectively a distinct DNS operation (e.g. create record,
update record, delete record, etc.). The only requirement for sending a request is generating the correct AWS SIG4
signature without content length and providing the corresponding HTTP headers so that VinylDNS can verify it.
See [API Authentication](https://www.vinyldns.io/api/auth-mechanism.html) for more details.
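For a sense of what a raw signed request looks like, recent versions of `curl` (7.75+) can compute the SigV4 signature themselves; the region and service labels and the credentials below are placeholders, and in practice the clients listed next handle the signing for you:

```bash
# List the caller's zones, letting curl add the AWS SigV4 authentication headers
curl --aws-sigv4 "aws:amz:us-east-1:VinylDNS" \
     --user "accessKey:secretKey" \
     http://localhost:9000/zones
```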
The current tooling available to perform VinylDNS API requests includes:

* [go-vinyldns](https://github.com/vinyldns/go-vinyldns) - Golang client package
* [terraform-provider-vinyldns](https://github.com/vinyldns/terraform-provider-vinyldns) - A [Terraform](https://terraform.io/) provider for VinylDNS
* [vinyldns-cli](https://github.com/vinyldns/vinyldns-cli) - Command line utility written in Golang
* [vinyldns-java](https://github.com/vinyldns/vinyldns-java) - Java client
* [vinyldns-js](https://github.com/vinyldns/vinyldns-js) - JavaScript client
* [vinyldns-python](https://github.com/vinyldns/vinyldns-python) - Python client library
* [vinyldns-ruby](https://github.com/vinyldns/vinyldns-ruby) - Ruby gem
||||||
|
2
bin/.env
2
bin/.env
@ -1,2 +0,0 @@
|
|||||||
VINYLDNS_API_URL=http://localhost:9000
|
|
||||||
VINYLDNS_PORTAL_URL=http://localhost:9001
|
|
34
bin/build.sh
34
bin/build.sh
@ -1,34 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
|
|
||||||
echo "Verifying code..."
|
|
||||||
#${DIR}/verify.sh
|
|
||||||
|
|
||||||
#step_result=$?
|
|
||||||
step_result=0
|
|
||||||
if [ ${step_result} != 0 ]
|
|
||||||
then
|
|
||||||
echo "Failed to verify build!!!"
|
|
||||||
exit ${step_result}
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Func testing the api..."
|
|
||||||
${DIR}/func-test-api.sh
|
|
||||||
|
|
||||||
step_result=$?
|
|
||||||
if [ ${step_result} != 0 ]
|
|
||||||
then
|
|
||||||
echo "Failed API func tests!!!"
|
|
||||||
exit ${step_result}
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Func testing the portal..."
|
|
||||||
${DIR}/func-test-portal.sh
|
|
||||||
step_result=$?
|
|
||||||
if [ ${step_result} != 0 ]
|
|
||||||
then
|
|
||||||
echo "Failed Portal func tests!!!"
|
|
||||||
exit ${step_result}
|
|
||||||
fi
|
|
||||||
|
|
||||||
exit 0
|
|
@ -1,10 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
|
|
||||||
cd $DIR/../
|
|
||||||
|
|
||||||
echo "Publishing docker image..."
|
|
||||||
sbt clean docker:publish
|
|
||||||
publish_result=$?
|
|
||||||
cd $DIR
|
|
||||||
exit ${publish_result}
|
|
@ -1,58 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
######################################################################
|
|
||||||
# Copies the contents of `docker` into target/scala-2.12
|
|
||||||
# to start up dependent services via docker compose. Once
|
|
||||||
# dependent services are started up, the fat jar built by sbt assembly
|
|
||||||
# is loaded into a docker container. The api will be available
|
|
||||||
# by default on port 9000
|
|
||||||
######################################################################
|
|
||||||
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
|
|
||||||
set -a # Required in order to source docker/.env
|
|
||||||
# Source customizable env files
|
|
||||||
source "$DIR"/.env
|
|
||||||
source "$DIR"/../docker/.env
|
|
||||||
|
|
||||||
WORK_DIR="$DIR"/../target/scala-2.12
|
|
||||||
mkdir -p "$WORK_DIR"
|
|
||||||
|
|
||||||
echo "Copy all Docker to the target directory so we can start up properly and the Docker context is small..."
|
|
||||||
cp -af "$DIR"/../docker "$WORK_DIR"/
|
|
||||||
|
|
||||||
echo "Copy the vinyldns.jar to the API Docker folder so it is in context..."
|
|
||||||
if [[ ! -f "$DIR"/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
|
|
||||||
echo "vinyldns.jar not found, building..."
|
|
||||||
cd "$DIR"/../
|
|
||||||
sbt api/clean api/assembly
|
|
||||||
cd "$DIR"
|
|
||||||
fi
|
|
||||||
cp -f "$DIR"/../modules/api/target/scala-2.12/vinyldns.jar "$WORK_DIR"/docker/api
|
|
||||||
|
|
||||||
echo "Starting API server and all dependencies in the background..."
|
|
||||||
docker-compose -f "$WORK_DIR"/docker/docker-compose-func-test.yml --project-directory "$WORK_DIR"/docker up --build -d api
|
|
||||||
|
|
||||||
echo "Waiting for API to be ready at ${VINYLDNS_API_URL} ..."
|
|
||||||
DATA=""
|
|
||||||
RETRY=40
|
|
||||||
while [ "$RETRY" -gt 0 ]
|
|
||||||
do
|
|
||||||
DATA=$(curl -I -s "${VINYLDNS_API_URL}/ping" -o /dev/null -w "%{http_code}")
|
|
||||||
if [ $? -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Succeeded in connecting to VinylDNS API!"
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep 1
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Exceeded retries waiting for VinylDNS API to be ready, failing"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
@ -1,5 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
|
|
||||||
echo "Starting ONLY the bind9 server. To start an api server use the api server script"
|
|
||||||
docker-compose -f $DIR/../docker/docker-compose-func-test.yml --project-directory $DIR/../docker up -d bind9
|
|
@ -1,113 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
#####################################################################################################
|
|
||||||
# Starts up the api, portal, and dependent services via
|
|
||||||
# docker-compose. The api will be available on localhost:9000 and the
|
|
||||||
# portal will be on localhost:9001
|
|
||||||
#
|
|
||||||
# Relevant overrides can be found at ./.env and ../docker/.env
|
|
||||||
#
|
|
||||||
# Options:
|
|
||||||
# -t, --timeout seconds: overwrite ping timeout, default of 60
|
|
||||||
# -a, --api-only: only starts up vinyldns-api and its dependencies, excludes vinyldns-portal
|
|
||||||
# -c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub
|
|
||||||
# -v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags
|
|
||||||
#####################################################################################################
|
|
||||||
|
|
||||||
function wait_for_url {
|
|
||||||
echo "pinging ${URL} ..."
|
|
||||||
DATA=""
|
|
||||||
RETRY="$TIMEOUT"
|
|
||||||
while [ "$RETRY" -gt 0 ]
|
|
||||||
do
|
|
||||||
DATA=$(curl -I -s "${URL}" -o /dev/null -w "%{http_code}")
|
|
||||||
if [ $? -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Succeeded in connecting to ${URL}!"
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep 1
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Exceeded retries waiting for ${URL} to be ready, failing"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
}
|
|
||||||
|
|
||||||
function usage {
|
|
||||||
printf "usage: docker-up-vinyldns.sh [OPTIONS]\n\n"
|
|
||||||
printf "starts up a local VinylDNS installation using docker compose\n\n"
|
|
||||||
printf "options:\n"
|
|
||||||
printf "\t-t, --timeout seconds: overwrite ping timeout of 60\n"
|
|
||||||
printf "\t-a, --api-only: do not start up vinyldns-portal\n"
|
|
||||||
printf "\t-c, --clean: re-pull vinyldns/api and vinyldns/portal images from docker hub\n"
|
|
||||||
printf "\t-v, --version tag: overwrite vinyldns/api and vinyldns/portal docker tags\n"
|
|
||||||
}
|
|
||||||
|
|
||||||
function clean_images {
|
|
||||||
if (( $CLEAN == 1 )); then
|
|
||||||
echo "cleaning docker images..."
|
|
||||||
docker rmi vinyldns/api:$VINYLDNS_VERSION
|
|
||||||
docker rmi vinyldns/portal:$VINYLDNS_VERSION
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
function wait_for_api {
|
|
||||||
echo "Waiting for api..."
|
|
||||||
URL="$VINYLDNS_API_URL"
|
|
||||||
wait_for_url
|
|
||||||
}
|
|
||||||
|
|
||||||
function wait_for_portal {
|
|
||||||
# check if portal was skipped
|
|
||||||
if [ "$SERVICE" != "api" ]; then
|
|
||||||
echo "Waiting for portal..."
|
|
||||||
URL="$VINYLDNS_PORTAL_URL"
|
|
||||||
wait_for_url
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
# initial var setup
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
TIMEOUT=60
|
|
||||||
DOCKER_COMPOSE_CONFIG="${DIR}/../docker/docker-compose-quick-start.yml"
|
|
||||||
# empty service starts up all docker services in compose file
|
|
||||||
SERVICE=""
|
|
||||||
# when CLEAN is set to 1, existing docker images are deleted so they are re-pulled
|
|
||||||
CLEAN=0
|
|
||||||
# default to latest for docker versions
|
|
||||||
export VINYLDNS_VERSION=latest
|
|
||||||
|
|
||||||
# source env before parsing args so vars can be overwritten
|
|
||||||
set -a # Required in order to source docker/.env
|
|
||||||
# Source customizable env files
|
|
||||||
source "$DIR"/.env
|
|
||||||
source "$DIR"/../docker/.env
|
|
||||||
|
|
||||||
# parse args
|
|
||||||
while [ "$1" != "" ]; do
|
|
||||||
case "$1" in
|
|
||||||
-t | --timeout ) TIMEOUT="$2"; shift;;
|
|
||||||
-a | --api-only ) SERVICE="api";;
|
|
||||||
-c | --clean ) CLEAN=1;;
|
|
||||||
-v | --version ) export VINYLDNS_VERSION=$2; shift;;
|
|
||||||
* ) usage; exit;;
|
|
||||||
esac
|
|
||||||
shift
|
|
||||||
done
|
|
||||||
|
|
||||||
clean_images
|
|
||||||
|
|
||||||
echo "timeout is set to ${TIMEOUT}"
|
|
||||||
echo "vinyldns version is set to '${VINYLDNS_VERSION}'"
|
|
||||||
|
|
||||||
echo "Starting vinyldns and all dependencies in the background..."
|
|
||||||
docker-compose -f "$DOCKER_COMPOSE_CONFIG" up -d ${SERVICE}
|
|
||||||
|
|
||||||
wait_for_api
|
|
||||||
wait_for_portal
|
|
@ -1,57 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
######################################################################
|
|
||||||
# Copies the contents of `docker` into target/scala-2.12
|
|
||||||
# to start up dependent services via docker compose. Once
|
|
||||||
# dependent services are started up, the fat jar built by sbt assembly
|
|
||||||
# is loaded into a docker container. Finally, the func tests run inside
|
|
||||||
# another docker container
|
|
||||||
# At the end, we grab all the logs and place them in the target
|
|
||||||
# directory
|
|
||||||
######################################################################
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
WORK_DIR=$DIR/../target/scala-2.12
|
|
||||||
mkdir -p $WORK_DIR
|
|
||||||
|
|
||||||
echo "Cleaning up unused networks..."
|
|
||||||
docker network prune -f
|
|
||||||
|
|
||||||
echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
|
|
||||||
cp -af $DIR/../docker $WORK_DIR/
|
|
||||||
|
|
||||||
echo "Copy over the functional tests as well as those that are run in a container..."
|
|
||||||
mkdir -p $WORK_DIR/functest
|
|
||||||
rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
|
|
||||||
|
|
||||||
echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
|
|
||||||
if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
|
|
||||||
echo "vinyldns jar not found, building..."
|
|
||||||
cd $DIR/../
|
|
||||||
sbt api/clean api/assembly
|
|
||||||
cd $DIR
|
|
||||||
fi
|
|
||||||
cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
|
|
||||||
|
|
||||||
echo "Starting docker environment and running func tests..."
|
|
||||||
|
|
||||||
# If PAR_CPU is unset; default to auto
|
|
||||||
if [ -z "${PAR_CPU}" ]; then
|
|
||||||
export PAR_CPU=auto
|
|
||||||
fi
|
|
||||||
|
|
||||||
docker-compose -f $WORK_DIR/docker/docker-compose-func-test-testbind9.yml --project-directory $WORK_DIR/docker --log-level ERROR up --build --exit-code-from functest
|
|
||||||
test_result=$?
|
|
||||||
|
|
||||||
echo "Grabbing the logs..."
|
|
||||||
|
|
||||||
docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
|
|
||||||
docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
|
|
||||||
docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
|
|
||||||
docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
|
|
||||||
docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
|
|
||||||
|
|
||||||
echo "Cleaning up docker containers..."
|
|
||||||
$DIR/./remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
echo "Func tests returned result: ${test_result}"
|
|
||||||
exit ${test_result}
|
|
@ -1,57 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
######################################################################
|
|
||||||
# Copies the contents of `docker` into target/scala-2.12
|
|
||||||
# to start up dependent services via docker compose. Once
|
|
||||||
# dependent services are started up, the fat jar built by sbt assembly
|
|
||||||
# is loaded into a docker container. Finally, the func tests run inside
|
|
||||||
# another docker container
|
|
||||||
# At the end, we grab all the logs and place them in the target
|
|
||||||
# directory
|
|
||||||
######################################################################
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
WORK_DIR=$DIR/../target/scala-2.12
|
|
||||||
mkdir -p $WORK_DIR
|
|
||||||
|
|
||||||
echo "Cleaning up unused networks..."
|
|
||||||
docker network prune -f
|
|
||||||
|
|
||||||
echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
|
|
||||||
cp -af $DIR/../docker $WORK_DIR/
|
|
||||||
|
|
||||||
echo "Copy over the functional tests as well as those that are run in a container..."
|
|
||||||
mkdir -p $WORK_DIR/functest
|
|
||||||
rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
|
|
||||||
|
|
||||||
echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
|
|
||||||
if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
|
|
||||||
echo "vinyldns jar not found, building..."
|
|
||||||
cd $DIR/../
|
|
||||||
sbt build-api
|
|
||||||
cd $DIR
|
|
||||||
fi
|
|
||||||
cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
|
|
||||||
|
|
||||||
echo "Starting docker environment and running func tests..."
|
|
||||||
|
|
||||||
if [ -z "${PAR_CPU}" ]; then
|
|
||||||
export PAR_CPU=2
|
|
||||||
fi
|
|
||||||
|
|
||||||
docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker up --build --exit-code-from functest
|
|
||||||
test_result=$?
|
|
||||||
|
|
||||||
echo "Grabbing the logs..."
|
|
||||||
docker logs vinyldns-functest
|
|
||||||
docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
|
|
||||||
docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
|
|
||||||
docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
|
|
||||||
docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
|
|
||||||
docker logs vinyldns-dynamodb > $DIR/../target/vinyldns-dynamodb.log 2>/dev/null
|
|
||||||
docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
|
|
||||||
|
|
||||||
echo "Cleaning up docker containers..."
|
|
||||||
$DIR/./remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
echo "Func tests returned result: ${test_result}"
|
|
||||||
exit ${test_result}
|
|
@ -1,57 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
######################################################################
|
|
||||||
# Copies the contents of `docker` into target/scala-2.12
|
|
||||||
# to start up dependent services via docker compose. Once
|
|
||||||
# dependent services are started up, the fat jar built by sbt assembly
|
|
||||||
# is loaded into a docker container. Finally, the func tests run inside
|
|
||||||
# another docker container
|
|
||||||
# At the end, we grab all the logs and place them in the target
|
|
||||||
# directory
|
|
||||||
######################################################################
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
WORK_DIR=$DIR/../target/scala-2.12
|
|
||||||
mkdir -p $WORK_DIR
|
|
||||||
|
|
||||||
echo "Cleaning up unused networks..."
|
|
||||||
docker network prune -f
|
|
||||||
|
|
||||||
echo "Copy all docker to the target directory so we can start up properly and the docker context is small..."
|
|
||||||
cp -af $DIR/../docker $WORK_DIR/
|
|
||||||
|
|
||||||
echo "Copy over the functional tests as well as those that are run in a container..."
|
|
||||||
mkdir -p $WORK_DIR/functest
|
|
||||||
rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest
|
|
||||||
|
|
||||||
echo "Copy the vinyldns.jar to the api docker folder so it is in context..."
|
|
||||||
if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then
|
|
||||||
echo "vinyldns jar not found, building..."
|
|
||||||
cd $DIR/../
|
|
||||||
sbt api/clean api/assembly
|
|
||||||
cd $DIR
|
|
||||||
fi
|
|
||||||
cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api
|
|
||||||
|
|
||||||
echo "Starting docker environment and running func tests..."
|
|
||||||
|
|
||||||
# If PAR_CPU is unset; default to auto
|
|
||||||
if [ -z "${PAR_CPU}" ]; then
|
|
||||||
export PAR_CPU=auto
|
|
||||||
fi
|
|
||||||
|
|
||||||
docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker --log-level ERROR up --build --exit-code-from functest
|
|
||||||
test_result=$?
|
|
||||||
|
|
||||||
echo "Grabbing the logs..."
|
|
||||||
|
|
||||||
docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null
|
|
||||||
docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null
|
|
||||||
docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null
|
|
||||||
docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null
|
|
||||||
docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null
|
|
||||||
|
|
||||||
echo "Cleaning up docker containers..."
|
|
||||||
$DIR/./remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
echo "Func tests returned result: ${test_result}"
|
|
||||||
exit ${test_result}
|
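The functional-test scripts above honor a `PAR_CPU` environment variable (defaulting to `auto` or `2` depending on the script) that presumably controls test-runner parallelism; the script file name used below is an assumption based on the `bin/func-test-api.sh` references elsewhere in this change:

```bash
# Cap functional-test parallelism at 4 workers instead of the script's default
PAR_CPU=4 ./bin/func-test-api.sh
```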
|
@ -1,46 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
######################################################################
|
|
||||||
# Runs e2e tests against the portal
|
|
||||||
######################################################################
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
WORK_DIR=$DIR/../modules/portal
|
|
||||||
|
|
||||||
function check_for() {
|
|
||||||
which $1 >/dev/null 2>&1
|
|
||||||
EXIT_CODE=$?
|
|
||||||
if [ ${EXIT_CODE} != 0 ]
|
|
||||||
then
|
|
||||||
echo "$1 is not installed"
|
|
||||||
exit ${EXIT_CODE}
|
|
||||||
fi
|
|
||||||
}
|
|
||||||
|
|
||||||
cd $WORK_DIR
|
|
||||||
check_for python
|
|
||||||
check_for npm
|
|
||||||
|
|
||||||
# if the program exits before this has been captured then there must have been an error
|
|
||||||
EXIT_CODE=1
|
|
||||||
|
|
||||||
# javascript code generate
|
|
||||||
npm install
|
|
||||||
grunt default
|
|
||||||
|
|
||||||
TEST_SUITES=('grunt unit')
|
|
||||||
|
|
||||||
for TEST in "${TEST_SUITES[@]}"
|
|
||||||
do
|
|
||||||
echo "##### Running test: [$TEST]"
|
|
||||||
$TEST
|
|
||||||
EXIT_CODE=$?
|
|
||||||
echo "##### Test [$TEST] ended with status [$EXIT_CODE]"
|
|
||||||
if [ ${EXIT_CODE} != 0 ]
|
|
||||||
then
|
|
||||||
cd -
|
|
||||||
exit ${EXIT_CODE}
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
cd -
|
|
||||||
exit 0
|
|
@ -1,18 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
|
|
||||||
# Generate 256-bit AES key.
|
|
||||||
#
|
|
||||||
# Usage:
|
|
||||||
# $ ./generate-aes-256-hex-key.sh [passphrase]
|
|
||||||
# * passphrase: Optional passphrase used to generate secret key. A pseudo-random passphrase will be used if
|
|
||||||
# one is not provided.
|
|
||||||
|
|
||||||
if [[ ! -z "$1" ]]
|
|
||||||
then
|
|
||||||
echo "Using user-provided passphrase."
|
|
||||||
fi
|
|
||||||
|
|
||||||
PASSPHRASE=${1:-$(openssl rand 32)}
|
|
||||||
|
|
||||||
KEY=$(openssl enc -aes-256-cbc -k "$PASSPHRASE" -P -md sha1 | awk -F'=' 'NR == 2 {print $2}')
|
|
||||||
echo "Your 256-bit AES hex key: $KEY"
|
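For illustration, invoking the key-generation script above with an explicit passphrase looks like this (the key shown is a placeholder, not real output):

```bash
$ ./generate-aes-256-hex-key.sh "my secret passphrase"
Using user-provided passphrase.
Your 256-bit AES hex key: <64 hexadecimal characters>
```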
|
@ -1,65 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
printf "\nnote: follow the guides in MAINTAINERS.md to setup notary delegation (Docker) and get sonatype key (Maven) \n"
|
|
||||||
|
|
||||||
DIR=$( cd $(dirname $0) ; pwd -P )
|
|
||||||
|
|
||||||
# gpg sbt plugin fails if this is not set
|
|
||||||
export GPG_TTY=$(tty)
|
|
||||||
|
|
||||||
##
|
|
||||||
# running tests
|
|
||||||
##
|
|
||||||
if [ "$1" != "skip-tests" ]; then
|
|
||||||
# Checking for uncommitted changes
|
|
||||||
printf "\nchecking for uncommitted changes... \n"
|
|
||||||
if ! (cd "$DIR" && git add . && git diff-index --quiet HEAD --)
|
|
||||||
then
|
|
||||||
printf "\nerror: attempting to release with uncommitted changes\n"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
# If we are not in the main repository then fail fast
|
|
||||||
REMOTE_REPO=$(git config --get remote.origin.url)
|
|
||||||
echo "REMOTE REPO IS $REMOTE_REPO"
|
|
||||||
if [[ "$REMOTE_REPO" != *-vinyldns/vinyldns.git ]]; then
|
|
||||||
printf "\nCannot run a release from this repository as it is not the main repository: $REMOTE_REPO \n"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# If we are not on the master branch, then fail fast
|
|
||||||
BRANCH=$(git rev-parse --abbrev-ref HEAD)
|
|
||||||
if [[ "$BRANCH" != "master" ]]; then
|
|
||||||
printf "\nCannot run a release from this branch: $BRANCH is not master \n"
|
|
||||||
exit 1;
|
|
||||||
fi
|
|
||||||
|
|
||||||
printf "\nrunning api func tests... \n"
|
|
||||||
"$DIR"/remove-vinyl-containers.sh
|
|
||||||
if ! "$DIR"/func-test-api.sh
|
|
||||||
then
|
|
||||||
printf "\nerror: bin/func-test-api.sh failed \n"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
"$DIR"/remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
printf "\nrunning portal func tests... \n"
|
|
||||||
if ! "$DIR"/func-test-portal.sh
|
|
||||||
then
|
|
||||||
printf "\nerror: bin/func-test-portal.sh failed \n"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
printf "\nrunning verify... \n"
|
|
||||||
if ! "$DIR"/verify.sh
|
|
||||||
then
|
|
||||||
printf "\nerror: bin/verify.sh failed \n"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
##
|
|
||||||
# run release
|
|
||||||
##
|
|
||||||
cd "$DIR"/../ && sbt release && cd $DIR
|
|
||||||
|
|
||||||
printf "\nrelease finished \n"
|
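For reference, the only argument the release script above inspects is the literal `skip-tests`; anything else falls through to the full verification path (the `bin/release.sh` path is an assumption based on the surrounding scripts):

```bash
# Full release: func-test the API and portal, run verify, then `sbt release`
./bin/release.sh

# Release without re-running the test suites (use with care)
./bin/release.sh skip-tests
```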
|
@ -1,30 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
#
|
|
||||||
# The local vinyldns setup used for testing relies on the
|
|
||||||
# following docker images:
|
|
||||||
# mysql:5.7
|
|
||||||
# s12v/elasticmq:0.13.8
|
|
||||||
# vinyldns/bind9
|
|
||||||
# vinyldns/api
|
|
||||||
# vinyldns/portal
|
|
||||||
# rroemhild/test-openldap
|
|
||||||
# localstack/localstack
|
|
||||||
#
|
|
||||||
# This script with kill and remove containers associated
|
|
||||||
# with these names and/or tags
|
|
||||||
#
|
|
||||||
# Note: this will not remove the actual images from your
|
|
||||||
# machine, just the running containers
|
|
||||||
|
|
||||||
IDS=$(docker ps -a | grep -e 'mysql:5.7' -e 's12v/elasticmq:0.13.8' -e 'vinyldns' -e 'flaviovs/mock-smtp' -e 'localstack/localstack' -e 'rroemhild/test-openldap' | awk '{print $1}')
|
|
||||||
|
|
||||||
echo "killing..."
|
|
||||||
echo $(echo "$IDS" | xargs -I {} docker kill {})
|
|
||||||
echo
|
|
||||||
|
|
||||||
echo "removing..."
|
|
||||||
echo $(echo "$IDS" | xargs -I {} docker rm -v {})
|
|
||||||
echo
|
|
||||||
|
|
||||||
echo "pruning network..."
|
|
||||||
docker network prune -f
|
|
@ -1,103 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
usage () {
|
|
||||||
echo -e "Description: Updates a user in VinylDNS to a support user, or removes the user as a support user.\n"
|
|
||||||
echo -e "Usage: update-support-user.sh [OPTIONS] <username> <enableSupport>\n"
|
|
||||||
echo -e "Required Parameters:"
|
|
||||||
echo -e "username\tThe VinylDNS user for which to change the support flag"
|
|
||||||
echo -e "enableSupport\t'true' to set the user as a support user; 'false' to remove support privileges\n"
|
|
||||||
echo -e "OPTIONS:"
|
|
||||||
echo -e "Must define as an environment variables the following (or pass them in on the command line)\n"
|
|
||||||
echo -e "DB_USER (user name for accessing the VinylDNS database)"
|
|
||||||
echo -e "DB_PASS (user password for accessing the VinylDNS database)"
|
|
||||||
echo -e "DB_HOST (host name for the mysql server of the VinylDNS database)"
|
|
||||||
echo -e "DB_NAME (name of the VinylDNS database, defaults to vinyldns)"
|
|
||||||
echo -e "DB_PORT (port of the VinylDNS database, defaults to 19002)\n"
|
|
||||||
echo -e " -u|--user \tDatabase user name for accessing the VinylDNS database"
|
|
||||||
echo -e " -p|--password\tDatabase user password for accessing the VinylDNS database"
|
|
||||||
echo -e " -h|--host\tDatabase host name for the mysql server"
|
|
||||||
echo -e " -n|--name\tName of the VinylDNS database, defaults to vinyldns"
|
|
||||||
echo -e " -c|--port\tPort of the VinylDNS database, defaults to 19002"
|
|
||||||
}
|
|
||||||
|
|
||||||
DIR=$( cd "$(dirname "$0")" || exit ; pwd -P )
|
|
||||||
VINYL_ROOT=$DIR/..
|
|
||||||
WORK_DIR=${VINYL_ROOT}/docker
|
|
||||||
|
|
||||||
DB_USER=$DB_USER
|
|
||||||
DB_PASS=$DB_PASS
|
|
||||||
DB_HOST=$DB_HOST
|
|
||||||
DB_NAME=${DB_NAME:-vinyldns}
|
|
||||||
DB_PORT=${DB_PORT:-19002}
|
|
||||||
|
|
||||||
while [ "$1" != "" ]; do
|
|
||||||
case "$1" in
|
|
||||||
-u | --user ) DB_USER="$2"; shift;;
|
|
||||||
-p | --password ) DB_PASS="$2"; shift;;
|
|
||||||
-h | --host ) DB_HOST="$2"; shift;;
|
|
||||||
-n | --name ) DB_NAME="$2"; shift;;
|
|
||||||
-c | --port ) DB_PORT="$2"; shift;;
|
|
||||||
* ) break;;
|
|
||||||
esac
|
|
||||||
shift
|
|
||||||
done
|
|
||||||
|
|
||||||
VINYL_USER="$1"
|
|
||||||
MAKE_SUPPORT="$2"
|
|
||||||
|
|
||||||
ERROR=
|
|
||||||
if [[ -z "$DB_USER" ]]
|
|
||||||
then
|
|
||||||
echo "No DB_USER environment variable found"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [[ -z "$DB_PASS" ]]
|
|
||||||
then
|
|
||||||
echo "No DB_PASS environment variable found"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [[ -z "$DB_HOST" ]]
|
|
||||||
then
|
|
||||||
echo "No DB_HOST environment variable found"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [[ -z "$DB_NAME" ]]
|
|
||||||
then
|
|
||||||
echo "No DB_NAME environment variable found"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
|
|
||||||
if [[ -z "$VINYL_USER" ]]
|
|
||||||
then
|
|
||||||
echo "Parameter 'username' not specified"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [[ -z "$MAKE_SUPPORT" ]]
|
|
||||||
then
|
|
||||||
echo "Parameter 'enableSupport' not specified"
|
|
||||||
ERROR="1"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [[ -n "$ERROR" ]]
|
|
||||||
then
|
|
||||||
usage
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Copy the proto definition to the Docker context and build
|
|
||||||
cp "${VINYL_ROOT}/modules/core/src/main/protobuf/VinylDNSProto.proto" "${WORK_DIR}/admin"
|
|
||||||
docker build -t vinyldns/admin "${WORK_DIR}/admin"
|
|
||||||
rm "${WORK_DIR}/admin/VinylDNSProto.proto"
|
|
||||||
|
|
||||||
docker run -it --rm \
|
|
||||||
-e "DB_USER=$DB_USER" \
|
|
||||||
-e "DB_PASS=$DB_PASS" \
|
|
||||||
-e "DB_HOST=$DB_HOST" \
|
|
||||||
-e "DB_NAME=$DB_NAME" \
|
|
||||||
-e "DB_PORT=$DB_PORT" \
|
|
||||||
vinyldns/admin:latest /app/update-support-user.py "$VINYL_USER" "$MAKE_SUPPORT"
|
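As a usage illustration only (host, credentials, and username are placeholders, and the `bin/` location is assumed from the surrounding scripts), granting and then revoking support privileges with the script above would look like:

```bash
# Flag the VinylDNS user "jdoe" as a support user against a local database
DB_HOST=localhost DB_USER=root DB_PASS=secret ./bin/update-support-user.sh jdoe true

# Remove the support flag again
DB_HOST=localhost DB_USER=root DB_PASS=secret ./bin/update-support-user.sh jdoe false
```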
|
@ -1,21 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
echo 'Running tests...'
|
|
||||||
|
|
||||||
echo 'Stopping any docker containers...'
|
|
||||||
./bin/remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
echo 'Starting up docker for integration testing and running unit and integration tests on all modules...'
|
|
||||||
sbt ";validate;verify"
|
|
||||||
verify_result=$?
|
|
||||||
|
|
||||||
echo 'Stopping any docker containers...'
|
|
||||||
./bin/remove-vinyl-containers.sh
|
|
||||||
|
|
||||||
if [ ${verify_result} -eq 0 ]
|
|
||||||
then
|
|
||||||
echo 'Verify successful!'
|
|
||||||
exit 0
|
|
||||||
else
|
|
||||||
echo 'Verify failed!'
|
|
||||||
exit 1
|
|
||||||
fi
|
|
320
build.sbt
320
build.sbt
@ -1,18 +1,14 @@
|
|||||||
import Resolvers._
|
|
||||||
import Dependencies._
|
|
||||||
import CompilerOptions._
|
import CompilerOptions._
|
||||||
import com.typesafe.sbt.packager.docker._
|
import Dependencies._
|
||||||
import scoverage.ScoverageKeys.{coverageFailOnMinimum, coverageMinimum}
|
|
||||||
import org.scalafmt.sbt.ScalafmtPlugin._
|
|
||||||
import microsites._
|
import microsites._
|
||||||
import ReleaseTransformations._
|
import org.scalafmt.sbt.ScalafmtPlugin._
|
||||||
import sbtrelease.Version
|
import scoverage.ScoverageKeys.{coverageFailOnMinimum, coverageMinimum}
|
||||||
|
|
||||||
|
import scala.language.postfixOps
|
||||||
|
import scala.sys.env
|
||||||
import scala.util.Try
|
import scala.util.Try
|
||||||
|
|
||||||
resolvers ++= additionalResolvers
|
lazy val IntegrationTest = config("it").extend(Test)
|
||||||
|
|
||||||
lazy val IntegrationTest = config("it") extend Test
|
|
||||||
|
|
||||||
// settings that should be inherited by all projects
|
// settings that should be inherited by all projects
|
||||||
lazy val sharedSettings = Seq(
|
lazy val sharedSettings = Seq(
|
||||||
@ -21,23 +17,24 @@ lazy val sharedSettings = Seq(
|
|||||||
organizationName := "Comcast Cable Communications Management, LLC",
|
organizationName := "Comcast Cable Communications Management, LLC",
|
||||||
startYear := Some(2018),
|
startYear := Some(2018),
|
||||||
licenses += ("Apache-2.0", new URL("https://www.apache.org/licenses/LICENSE-2.0.txt")),
|
licenses += ("Apache-2.0", new URL("https://www.apache.org/licenses/LICENSE-2.0.txt")),
|
||||||
|
maintainer := "VinylDNS Maintainers",
|
||||||
scalacOptions ++= scalacOptionsByV(scalaVersion.value),
|
scalacOptions ++= scalacOptionsByV(scalaVersion.value),
|
||||||
scalacOptions in Test -= "-Ywarn-dead-code",
|
scalacOptions in(Compile, doc) += "-no-link-warnings",
|
||||||
scalacOptions in (Compile, doc) += "-no-link-warnings",
|
|
||||||
// Use wart remover to eliminate code badness
|
// Use wart remover to eliminate code badness
|
||||||
wartremoverErrors ++= Seq(
|
wartremoverErrors := (
|
||||||
|
if (getPropertyFlagOrDefault("build.lintOnCompile", true))
|
||||||
|
Seq(
|
||||||
Wart.EitherProjectionPartial,
|
Wart.EitherProjectionPartial,
|
||||||
Wart.IsInstanceOf,
|
Wart.IsInstanceOf,
|
||||||
Wart.JavaConversions,
|
Wart.JavaConversions,
|
||||||
Wart.Return,
|
Wart.Return,
|
||||||
Wart.LeakingSealed,
|
Wart.LeakingSealed,
|
||||||
Wart.ExplicitImplicitTypes
|
Wart.ExplicitImplicitTypes
|
||||||
|
)
|
||||||
|
else Seq.empty
|
||||||
),
|
),
|
||||||
|
|
||||||
// scala format
|
// scala format
|
||||||
scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", true),
|
scalafmtOnCompile := getPropertyFlagOrDefault("build.scalafmtOnCompile", false),
|
||||||
scalafmtOnCompile in IntegrationTest := getPropertyFlagOrDefault("build.scalafmtOnCompile", true),
|
|
||||||
|
|
||||||
// coverage options
|
// coverage options
|
||||||
coverageMinimum := 85,
|
coverageMinimum := 85,
|
||||||
coverageFailOnMinimum := true,
|
coverageFailOnMinimum := true,
|
||||||
@ -67,121 +64,26 @@ lazy val apiSettings = Seq(
|
|||||||
)
|
)
|
||||||
|
|
||||||
lazy val apiAssemblySettings = Seq(
|
lazy val apiAssemblySettings = Seq(
|
||||||
assemblyJarName in assembly := "vinyldns.jar",
|
assemblyOutputPath in assembly := file("artifacts/vinyldns-api.jar"),
|
||||||
test in assembly := {},
|
test in assembly := {},
|
||||||
mainClass in assembly := Some("vinyldns.api.Boot"),
|
mainClass in assembly := Some("vinyldns.api.Boot"),
|
||||||
mainClass in reStart := Some("vinyldns.api.Boot"),
|
mainClass in reStart := Some("vinyldns.api.Boot"),
|
||||||
// there are some odd things from dnsjava including update.java and dig.java that we don't use
|
|
||||||
assemblyMergeStrategy in assembly := {
|
assemblyMergeStrategy in assembly := {
|
||||||
case "update.class"| "dig.class" => MergeStrategy.discard
|
case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") =>
|
||||||
case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") => MergeStrategy.discard
|
MergeStrategy.discard
|
||||||
case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") => MergeStrategy.discard
|
case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") =>
|
||||||
|
MergeStrategy.discard
|
||||||
case x =>
|
case x =>
|
||||||
val oldStrategy = (assemblyMergeStrategy in assembly).value
|
val oldStrategy = (assemblyMergeStrategy in assembly).value
|
||||||
oldStrategy(x)
|
oldStrategy(x)
|
||||||
}
|
}
|
||||||
)
|
)
|
||||||
|
|
||||||
lazy val apiDockerSettings = Seq(
|
|
||||||
dockerBaseImage := "adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine",
|
|
||||||
dockerUsername := Some("vinyldns"),
|
|
||||||
packageName in Docker := "api",
|
|
||||||
dockerExposedPorts := Seq(9000),
|
|
||||||
dockerEntrypoint := Seq("/opt/docker/bin/api"),
|
|
||||||
dockerExposedVolumes := Seq("/opt/docker/lib_extra"), // mount extra libs to the classpath
|
|
||||||
dockerExposedVolumes := Seq("/opt/docker/conf"), // mount extra config to the classpath
|
|
||||||
|
|
||||||
// add extra libs to class path via mount
|
|
||||||
scriptClasspath in bashScriptDefines ~= (cp => cp :+ "${app_home}/../lib_extra/*"),
|
|
||||||
|
|
||||||
// adds config file to mount
|
|
||||||
bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/application.conf"""",
|
|
||||||
bashScriptExtraDefines += """addJava "-Dlogback.configurationFile=${app_home}/../conf/logback.xml"""", // adds logback
|
|
||||||
|
|
||||||
// this is the default version, can be overridden
|
|
||||||
bashScriptExtraDefines += s"""addJava "-Dvinyldns.base-version=${(version in ThisBuild).value}"""",
|
|
||||||
bashScriptExtraDefines += "(cd ${app_home} && ./wait-for-dependencies.sh && cd -)",
|
|
||||||
credentials in Docker := Seq(Credentials(Path.userHome / ".ivy2" / ".dockerCredentials")),
|
|
||||||
dockerCommands ++= Seq(
|
|
||||||
Cmd("USER", "root"), // switch to root so we can install netcat
|
|
||||||
ExecCmd("RUN", "apk", "add", "--update", "--no-cache", "netcat-openbsd", "bash"),
|
|
||||||
Cmd("USER", "1001:0") // switch back to the daemon user
|
|
||||||
),
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val portalDockerSettings = Seq(
|
|
||||||
dockerBaseImage := "adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine",
|
|
||||||
dockerUsername := Some("vinyldns"),
|
|
||||||
packageName in Docker := "portal",
|
|
||||||
dockerExposedPorts := Seq(9001),
|
|
||||||
dockerExposedVolumes := Seq("/opt/docker/lib_extra"), // mount extra libs to the classpath
|
|
||||||
dockerExposedVolumes := Seq("/opt/docker/conf"), // mount extra config to the classpath
|
|
||||||
|
|
||||||
// add extra libs to class path via mount
|
|
||||||
scriptClasspath in bashScriptDefines ~= (cp => cp :+ "${app_home}/../lib_extra/*"),
|
|
||||||
|
|
||||||
// adds config file to mount
|
|
||||||
bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/application.conf"""",
|
|
||||||
bashScriptExtraDefines += """addJava "-Dlogback.configurationFile=${app_home}/../conf/logback.xml"""",
|
|
||||||
|
|
||||||
// this is the default version, can be overridden
|
|
||||||
bashScriptExtraDefines += s"""addJava "-Dvinyldns.base-version=${(version in ThisBuild).value}"""",
|
|
||||||
|
|
||||||
// needed to avoid access issue in play for the RUNNING_PID
|
|
||||||
// https://github.com/lightbend/sbt-reactive-app/issues/177
|
|
||||||
bashScriptExtraDefines += s"""addJava "-Dplay.server.pidfile.path=/dev/null"""",
|
|
||||||
|
|
||||||
// wait for mysql
|
|
||||||
bashScriptExtraDefines += "(cd ${app_home}/../ && ls && ./wait-for-dependencies.sh && cd -)",
|
|
||||||
dockerCommands ++= Seq(
|
|
||||||
Cmd("USER", "root"), // switch to root so we can install netcat
|
|
||||||
ExecCmd("RUN", "apk", "add", "--update", "--no-cache", "netcat-openbsd", "bash"),
|
|
||||||
Cmd("USER", "1001:0") // switch back to the user that runs the process
|
|
||||||
),
|
|
||||||
|
|
||||||
credentials in Docker := Seq(Credentials(Path.userHome / ".ivy2" / ".dockerCredentials"))
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val noPublishSettings = Seq(
|
|
||||||
publish := {},
|
|
||||||
publishLocal := {},
|
|
||||||
publishArtifact := false
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val apiPublishSettings = Seq(
|
|
||||||
publishArtifact := false,
|
|
||||||
publishLocal := (publishLocal in Docker).value,
|
|
||||||
publish := (publish in Docker).value
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val portalPublishSettings = Seq(
|
|
||||||
publishArtifact := false,
|
|
||||||
publishLocal := (publishLocal in Docker).value,
|
|
||||||
publish := (publish in Docker).value,
|
|
||||||
// for sbt-native-packager (docker) to exclude local.conf
|
|
||||||
mappings in Universal ~= ( _.filterNot {
|
|
||||||
case (file, _) => file.getName.equals("local.conf")
|
|
||||||
}),
|
|
||||||
// for local.conf to be excluded in jars
|
|
||||||
mappings in (Compile, packageBin) ~= ( _.filterNot {
|
|
||||||
case (file, _) => file.getName.equals("local.conf")
|
|
||||||
})
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val pbSettings = Seq(
|
|
||||||
PB.targets in Compile := Seq(
|
|
||||||
PB.gens.java("2.6.1") -> (sourceManaged in Compile).value
|
|
||||||
),
|
|
||||||
PB.protocVersion := "-v261"
|
|
||||||
)
|
|
||||||
|
|
||||||
lazy val allApiSettings = Revolver.settings ++ Defaults.itSettings ++
  apiSettings ++
  sharedSettings ++
  apiAssemblySettings ++
  testSettings

lazy val api = (project in file("modules/api"))
  .enablePlugins(JavaAppPackaging, AutomateHeaderPlugin)
@ -196,35 +98,32 @@ lazy val api = (project in file("modules/api"))
    r53 % "compile->compile;it->it"
  )
lazy val root = (project in file("."))
  .enablePlugins(AutomateHeaderPlugin)
  .configs(IntegrationTest)
  .settings(headerSettings(IntegrationTest))
  .settings(sharedSettings)
  .settings(
    inConfig(IntegrationTest)(scalafmtConfigSettings)
  )
  .aggregate(core, api, portal, mysql, sqs, r53)
lazy val coreBuildSettings = Seq(
  name := "core",

  // do not use unused params as NoOpCrypto ignores its constructor, we should provide a way
  // to write a crypto plugin so that we fall back to a noarg constructor
  scalacOptions ++= scalacOptionsByV(scalaVersion.value).filterNot(_ == "-Ywarn-unused:params"),
  PB.targets in Compile := Seq(PB.gens.java("2.6.1") -> (sourceManaged in Compile).value),
  PB.protocVersion := "-v261"
)
lazy val corePublishSettings = Seq(
  publishMavenStyle := true,
  publishArtifact in Test := false,
  pomIncludeRepository := { _ =>
    false
  },
  autoAPIMappings := true,
  mainClass := None,
  homepage := Some(url("https://vinyldns.io")),
  scmInfo := Some(
@ -232,18 +131,11 @@ lazy val corePublishSettings = Seq(
      url("https://github.com/vinyldns/vinyldns"),
      "scm:git@github.com:vinyldns/vinyldns.git"
    )
  )
)
lazy val core = (project in file("modules/core"))
  .enablePlugins(AutomateHeaderPlugin)
  .settings(sharedSettings)
  .settings(coreBuildSettings)
  .settings(corePublishSettings)
@ -265,7 +157,8 @@ lazy val mysql = (project in file("modules/mysql"))
  .settings(libraryDependencies ++= mysqlDependencies ++ commonTestDependencies.map(_ % "test, it"))
  .settings(
    organization := "io.vinyldns"
  )
  .dependsOn(core % "compile->compile;test->test")
  .settings(name := "mysql")
lazy val sqs = (project in file("modules/sqs"))
@ -279,8 +172,9 @@ lazy val sqs = (project in file("modules/sqs"))
  .settings(Defaults.itSettings)
  .settings(libraryDependencies ++= sqsDependencies ++ commonTestDependencies.map(_ % "test, it"))
  .settings(
    organization := "io.vinyldns"
  )
  .dependsOn(core % "compile->compile;test->test")
  .settings(name := "sqs")

lazy val r53 = (project in file("modules/r53"))
@ -295,55 +189,58 @@ lazy val r53 = (project in file("modules/r53"))
  .settings(libraryDependencies ++= r53Dependencies ++ commonTestDependencies.map(_ % "test, it"))
  .settings(
    organization := "io.vinyldns",
    coverageMinimum := 65
  )
  .dependsOn(core % "compile->compile;test->test")
  .settings(name := "r53")
val preparePortal = TaskKey[Unit]("preparePortal", "Runs NPM to prepare portal for start")
val checkJsHeaders =
  TaskKey[Unit]("checkJsHeaders", "Runs script to check for APL 2.0 license headers")
val createJsHeaders =
  TaskKey[Unit]("createJsHeaders", "Runs script to prepend APL 2.0 license headers to files")

lazy val portalSettings = Seq(
  libraryDependencies ++= portalDependencies,
  routesGenerator := InjectedRoutesGenerator,
  coverageExcludedPackages := "<empty>;views.html.*;router.*;controllers\\.javascript.*;.*Reverse.*",
  javaOptions in Test += "-Dconfig.file=conf/application-test.conf",

  // adds the version when working locally with sbt run
  PlayKeys.devSettings += "vinyldns.base-version" -> (version in ThisBuild).value,

  // adds an extra classpath to the portal loading so we can externalize jars, make sure to create the lib_extra
  // directory and lay down any dependencies that are required when deploying
  scriptClasspath in bashScriptDefines ~= (cp => cp :+ "lib_extra/*"),
  mainClass in reStart := None,

  // we need to filter out unused for the portal as the play framework needs a lot of unused things
  scalacOptions ~= { opts =>
    opts.filterNot(p => p.contains("unused"))
  },

  // runs our prepare portal process
  preparePortal := {
    import scala.sys.process._
    "./modules/portal/prepare-portal.sh" !
  },

  checkJsHeaders := {
    import scala.sys.process._
    "./utils/add-license-headers.sh -d=modules/portal/public/lib -f=js -c" !
  },

  createJsHeaders := {
    import scala.sys.process._
    "./utils/add-license-headers.sh -d=modules/portal/public/lib -f=js" !
  },

  // Change the path of the output to artifacts/vinyldns-portal.zip
  target in Universal := file("artifacts/"),
  packageName in Universal := "vinyldns-portal"
)

lazy val portal = (project in file("modules/portal"))
  .enablePlugins(PlayScala, AutomateHeaderPlugin)
  .settings(sharedSettings)
  .settings(testSettings)
  .settings(portalSettings)
  .settings(
    name := "portal"
  )
  .dependsOn(mysql)
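The custom task keys above are ordinary sbt tasks scoped to the portal project, so they can be invoked from the sbt shell or the command line. A minimal sketch, assuming a local sbt installation and that the referenced shell scripts exist in this checkout:

```bash
sbt portal/preparePortal    # run the NPM prep step before starting the portal
sbt portal/createJsHeaders  # prepend APL 2.0 headers to portal JS files
sbt portal/checkJsHeaders   # verify the headers are present
sbt portal/run              # start the portal locally with Play's dev server
```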
@ -352,13 +249,13 @@ lazy val docSettings = Seq(
  micrositeGithubOwner := "vinyldns",
  micrositeGithubRepo := "vinyldns",
  micrositeName := "VinylDNS",
  micrositeDescription := "DNS Automation and Governance",
  micrositeAuthor := "VinylDNS",
  micrositeHomepage := "https://vinyldns.io",
  micrositeDocumentationUrl := "/api",
  micrositeDocumentationLabelDescription := "API Documentation",
  micrositeHighlightLanguages ++= Seq("json", "yaml", "bnf", "plaintext"),
  micrositeGitterChannel := false,
  micrositeExtraMdFiles := Map(
    file("CONTRIBUTING.md") -> ExtraMdFileConfig(
      "contributing.md",
@ -371,12 +268,18 @@ lazy val docSettings = Seq(
  ghpagesNoJekyll := false,
  fork in mdoc := true,
  mdocIn := (sourceDirectory in Compile).value / "mdoc",
  micrositeFavicons := Seq(
    MicrositeFavicon("favicon16x16.png", "16x16"),
    MicrositeFavicon("favicon32x32.png", "32x32")
  ),
  micrositeEditButton := Some(
    MicrositeEditButton(
      "Improve this page",
      "/edit/master/modules/docs/src/main/mdoc/{{ page.path }}"
    )
  ),
  micrositeFooterText := None,
  micrositeHighlightTheme := "hybrid",
  includeFilter in makeSite := "*.html" | "*.css" | "*.png" | "*.jpg" | "*.jpeg" | "*.gif" | "*.js" | "*.swf" | "*.md" | "*.webm" | "*.ico" | "CNAME" | "*.yml" | "*.svg" | "*.json" | "*.csv"
)

@ -384,68 +287,14 @@ lazy val docs = (project in file("modules/docs"))
  .enablePlugins(MicrositesPlugin, MdocPlugin)
  .settings(docSettings)
// release stages

lazy val setSonatypeReleaseSettings = ReleaseStep(action = oldState => {
  // sonatype publish target, and sonatype release steps, are different if version is SNAPSHOT
  val extracted = Project.extract(oldState)
  val v = extracted.get(Keys.version)
  val snap = v.endsWith("SNAPSHOT")
  if (!snap) {
    val publishToSettings = Some("releases" at "https://oss.sonatype.org/" + "service/local/staging/deploy/maven2")
    val newState = extracted.appendWithSession(Seq(publishTo in core := publishToSettings), oldState)

    // create sonatypeReleaseCommand with releaseSonatype step
    val sonatypeCommand = Command.command("sonatypeReleaseCommand") {
      "project core" ::
        "publish" ::
        "sonatypeRelease" ::
        _
    }

    newState.copy(definedCommands = newState.definedCommands :+ sonatypeCommand)
  } else {
    val publishToSettings = Some("snapshots" at "https://oss.sonatype.org/" + "content/repositories/snapshots")
    val newState = extracted.appendWithSession(Seq(publishTo in core := publishToSettings), oldState)

    // create sonatypeReleaseCommand without releaseSonatype step
    val sonatypeCommand = Command.command("sonatypeReleaseCommand") {
      "project core" ::
        "publish" ::
        _
    }

    newState.copy(definedCommands = newState.definedCommands :+ sonatypeCommand)
  }
})

lazy val sonatypePublishStage = Seq[ReleaseStep](
  releaseStepCommandAndRemaining(";sonatypeReleaseCommand")
)

lazy val initReleaseStage = Seq[ReleaseStep](
  inquireVersions, // have a developer confirm versions
  setReleaseVersion,
  setSonatypeReleaseSettings
)

lazy val finalReleaseStage = Seq[ReleaseStep](
  releaseStepCommand("project root"), // use version.sbt file from root
  commitReleaseVersion,
  setNextVersion,
  commitNextVersion
)

def getPropertyFlagOrDefault(name: String, value: Boolean): Boolean =
  sys.props.get(name).flatMap(propValue => Try(propValue.toBoolean).toOption).getOrElse(value)

releaseProcess :=
  initReleaseStage ++
    sonatypePublishStage ++
    finalReleaseStage
// Let's do things in parallel!
addCommandAlias(
  "validate",
  "; root/clean; " +
    "all core/headerCheck core/test:headerCheck " +
    "api/headerCheck api/test:headerCheck api/it:headerCheck " +
    "mysql/headerCheck mysql/test:headerCheck mysql/it:headerCheck " +
@ -456,10 +305,11 @@ addCommandAlias("validate", "; root/clean; " +
    "root/compile;root/test:compile;root/it:compile"
)

addCommandAlias(
  "verify",
  "; project root; coverage; " +
    "all test it:test; " +
    "project root; coverageReport; coverageAggregate"
)

// Build the artifacts for release
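The `validate` and `verify` aliases are registered globally, so they can be run straight from the command line; a minimal sketch, assuming a working sbt setup and the services the integration tests expect:

```bash
# Run the header, compile, and style checks wired into the "validate" alias
sbt validate

# Run unit and integration tests with coverage, as wired into the "verify" alias
sbt verify
```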
 114	build/README.md
@ -1,104 +1,14 @@
## Building VinylDNS
# Build

This folder contains scripts and everything you need to build and test VinylDNS from your own machine.

## Pre-requisites

- `docker` - you will need docker and docker-compose installed locally

## Local Build and Test

1. `./docker-release.sh --clean`
1. Open up `version.sbt` in the root to know the directory (or capture it in the script output)
1. Once complete, run a test `./start.sh --version 0.9.4-SNAPSHOT` (replace 0.9.4 with the value in version.sbt).
1. Login to the portal at http://localhost:9001 to verify everything looks good
1. Run `./stop.sh` to bring everything down

### Release Process

1. If you are using image signing / docker notary, be sure you set the environment variable `export DOCKER_CONTENT_TRUST=1`.
   Whether you sign or not is up to your organization. You need to have notary set up to be able to sign properly.
1. Be sure to login to your docker registry, typically done by `docker login` in the terminal you will release from.
1. The actual version number is pulled from the local `version.sbt` based on the branch specified (defaults to master)
1. Run `./docker-release.sh --push --clean --tag [your tag here] --branch [your branch here]`
    1. typically the `tag` is a build number that you maintain, for example a build number in Jenkins. Using this field is recommended. This value will be appended to the generated version as `-b[TAG]`; for example `0.9.4-b123` if using `123` for the tag.
    1. the `branch` defaults to `master` if not specified, you can choose any branch or tag from https://github.com/vinyldns/vinyldns
1. The version generated will be whatever the version is in the `version.sbt` on the `branch` specified (defaults to master)
1. Each of the images are built using the branch specified and the correct version
1. The func tests are run with only smoke tests against the API image to verify it is working
1. If everything passes, and the user specifies `--push`, the images are tagged and released to the docker repository (defaults to docker hub)

### Release Script

Does a clean build off of remote master and tags it with
`./docker-release.sh --clean --push --tag 123`

The release script is used for doing a release. It takes the following parameters:

- `-b | --branch [BRANCH]` - what branch to pull from, can be any PR branch or a tag like `v0.9.3`, defaults to `master`
- `-c | --clean` - a flag that indicates to perform a build. If omitted, the release script will look for a
  pre-built image locally
- `-p | --push` - a flag that indicates to push to the remote docker registry. The default docker registry
  is `docker.io`
- `-r | --repository [REPOSITORY]` - a URL to your docker registry, defaults to `docker.io`
- `-t | --tag [TAG]` - a build qualifier for this build. For example, pass in the build number for your
  continuous integration tool
- `-v | --version [VERSION]` - overrides the version calculation and forces the version passed in. Used primarily for official releases

## Docker Images

The build will generate several VinylDNS docker images that are used to deploy VinylDNS into any environment

- `vinyldns/api` - this is the heart of the VinylDNS system, the backend API
- `vinyldns/portal` - the VinylDNS web UI
- `vinyldns/test-bind9` - a DNS server that is configured to support running the functional tests
- `vinyldns/test` - a container that will execute functional tests, and exit success or failure when the tests are complete

### vinyldns/api

The default build for vinyldns api assumes an **ALL MYSQL** installation.

**Environment Variables**
- `VINYLDNS_VERSION` - this is the version of VinylDNS the API is running, typically you will not set this as
  it is set as part of the container build

**Volumes**
- `/opt/docker/conf/` - if you need to have your own application config file. This is **MANDATORY** for
  any production environments. Typically, you will add your own `application.conf` file in here with your settings.
- `/opt/docker/lib_extra/` - if you need to have additional jar files available to your VinylDNS instance.
  Rarely used, but if you want to bring your own message queue or database you can put the `jar` files there

### vinyldns/portal

The default build for vinyldns portal assumes an **ALL MYSQL** installation.

**Environment Variables**
- `VINYLDNS_VERSION` - this is the version of VinylDNS the API is running, typically you will not set this as
  it is set as part of the container build

**Volumes**
- `/opt/docker/conf/` - if you need to have your own application config file. This is **MANDATORY** for
  any production environments. Typically, you will add your own `application.conf` file in here with your settings.
- `/opt/docker/lib_extra/` - if you need to have additional jar files available to your VinylDNS instance.
  Rarely used, but if you want to bring your own message queue or database you can put the `jar` files there

### vinyldns/test-bind9

This pulls the correct DNS configuration to run func tests. You can largely disregard what is in here.

### vinyldns/test

This is used to run functional tests against a vinyldns instance. **This is very useful for verifying
your environment as part of doing an upgrade.** By default, it will run against a local docker-compose setup.

**Environment Variables**
- `VINYLDNS_URL` - the URL of the VinylDNS instance you will test against
- `DNS_IP` - the IP address of the `vinyldns/test-bind9` container that you will use for test purposes
- `TEST_PATTERN` - the actual functional test you want to run. *Important: set this to an empty string to run
  ALL tests; otherwise, omit the environment variable to just run smoke tests*.

**Example**

This example will run all functional tests on the given VinylDNS url and DNS IP address
`docker run -e VINYLDNS_URL="https://my.vinyldns.example.com" -e DNS_IP="1.2.3.4" -e TEST_PATTERN=""`

This folder contains scripts for building VinylDNS and its related artifacts.

| Path | Description |
|-----------------------|-----------------------------------------------------------------------------------------|
| `assemble_api_jar.sh` | Builds the VinylDNS API jar file. You can find the resulting `jar` file in `assembly/`. |
| `deep_clean.sh` | Removes all of the build artifacts and all `target/` directories recursively. |
| `func-test-api.sh` | Runs the functional tests for the API |
| `func-test-portal.sh` | Runs the functional tests for the Portal |
| `publish_docs.sh` | Publishes the documentation site |
| `run_all_tests.sh` | Runs all of the tests: unit, integration, and functional |
| `sbt.sh` | Runs `sbt` in a Docker container with the current project bind-mounted in `/build` |
| `verify.sh` | Runs all of the unit and integration tests |
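A typical local flow with these scripts might look like the following sketch; it assumes you run from the repository root with Docker installed, and uses the `assemble_api.sh` and `deep_clean.sh` scripts added in this change:

```bash
# Build the API jar inside the vinyldns/build:base-build container
./build/assemble_api.sh --update

# Build the portal artifact the same way
./build/assemble_portal.sh

# Wipe all build output when you want a pristine tree
./build/deep_clean.sh
```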
 49	build/assemble_api.sh Executable file
@ -0,0 +1,49 @@
#!/usr/bin/env bash
#
# This script will build the vinyldns-api.jar file using Docker. The file will
# be placed in the configured location (currently `artifacts/` off of the root)
#
set -euo pipefail

DIR=$(
  cd "$(dirname "$0")"
  pwd -P
)

usage() {
  echo "USAGE: assemble_api.sh [options]"
  echo -e "\t-n, --no-cache do not use cache when building the artifact"
  echo -e "\t-u, --update update the underlying docker image"
}

NO_CACHE=0
UPDATE_DOCKER=0
while [[ $# -gt 0 ]]; do
  case "$1" in
    --no-cache | -n)
      NO_CACHE=1
      shift
      ;;
    --update | -u)
      UPDATE_DOCKER=1
      shift
      ;;
    *)
      usage
      exit 1
      ;;
  esac
done

if [[ $NO_CACHE -eq 1 ]]; then
  rm -rf "${DIR}/../artifacts/vinyldns-api.jar" &> /dev/null || true
  docker rmi vinyldns:api-artifact &> /dev/null || true
fi

if [[ $UPDATE_DOCKER -eq 1 ]]; then
  echo "Pulling latest version of 'vinyldns/build:base-build'"
  docker pull vinyldns/build:base-build
fi

echo "Building VinylDNS API artifact"
make -C "${DIR}/docker/api" artifact
 49	build/assemble_portal.sh Executable file
@ -0,0 +1,49 @@
#!/usr/bin/env bash
#
# This script will build the vinyldns-portal.zip file using Docker. The file will
# be placed in the configured location (currently `artifacts/` off of the root)
#
set -euo pipefail

DIR=$(
  cd "$(dirname "$0")"
  pwd -P
)

usage() {
  echo "USAGE: assemble_portal.sh [options]"
  echo -e "\t-n, --no-cache do not use cache when building the artifact"
  echo -e "\t-u, --update update the underlying docker image"
}

NO_CACHE=0
UPDATE_DOCKER=0
while [[ $# -gt 0 ]]; do
  case "$1" in
    --no-cache | -n)
      NO_CACHE=1
      shift
      ;;
    --update | -u)
      UPDATE_DOCKER=1
      shift
      ;;
    *)
      usage
      exit 1
      ;;
  esac
done

if [[ $NO_CACHE -eq 1 ]]; then
  rm -rf "${DIR}/../artifacts/vinyldns-portal.zip" &> /dev/null || true
  docker rmi vinyldns:portal-artifact &> /dev/null || true
fi

if [[ $UPDATE_DOCKER -eq 1 ]]; then
  echo "Pulling latest version of 'vinyldns/build:base-build'"
  docker pull vinyldns/build:base-build
fi

echo "Building VinylDNS Portal artifact"
make -C "${DIR}/docker/portal" artifact
 15	build/deep_clean.sh Executable file
@ -0,0 +1,15 @@
#!/usr/bin/env bash
#
# This script will delete all target/ directories and the assembly/ directory
#
set -euo pipefail
DIR=$(
  cd "$(dirname "$0")"
  pwd -P
)

echo "Performing deep clean"
find "${DIR}/.." -type d -name target -o -name assembly | while read -r p; do if [ -d "$p" ]; then
  echo -n "Removing $(realpath --relative-to="$DIR" "$p").." && \
    { { rm -rf "$p" &> /dev/null && echo "done."; } || { echo -e "\e[93mERROR\e[0m: you may need to be root"; exit 1; } }
fi; done
@ -1,138 +0,0 @@
#!/bin/bash

CURDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

function usage() {
    printf "usage: docker-release.sh [OPTIONS]\n\n"
    printf "builds and releases vinyldns artifacts\n\n"
    printf "options:\n"
    printf "\t-b, --branch: the branch or tag to use for the build; default is master\n"
    printf "\t-c, --clean: indicates a fresh build or attempt to work with existing images; default is off\n"
    printf "\t-l, --latest: indicates docker will tag image with latest; default is off\n"
    printf "\t-p, --push: indicates docker will push to the repository; default is off\n"
    printf "\t-r, --repository [REPOSITORY]: the docker repository where this image will be pushed; default is docker.io\n"
    printf "\t-t, --tag [TAG]: sets the qualifier for the semver version; default is to not use a build tag\n"
    printf "\t-v, --version [VERSION]: overrides version calculation and forces the version specified\n"
}

# Default the build to -SNAPSHOT if not set
BUILD_TAG=
REPOSITORY="docker.io"
DOCKER_PUSH=0
DO_BUILD=0
BRANCH="master"
V=
TAG_LATEST=0

while [ "$1" != "" ]; do
    case "$1" in
        -b | --branch)
            BRANCH="$2"
            shift 2
            ;;
        -c | --clean)
            DO_BUILD=1
            shift
            ;;
        -l | --latest)
            TAG_LATEST=1
            shift
            ;;
        -p | --push)
            DOCKER_PUSH=1
            shift
            ;;
        -r | --repository)
            REPOSITORY="$2"
            shift 2
            ;;
        -t | --tag)
            BUILD_TAG="-b$2"
            shift 2
            ;;
        -v | --version)
            V="$2"
            shift 2
            ;;
        *)
            usage
            exit
            ;;
    esac
done

BASEDIR=$CURDIR/../

# Clear out our target
rm -rf $CURDIR/target && mkdir -p $CURDIR/target

# Download just the version.sbt file from the branch specified, we use this to calculate the version
wget "https://raw.githubusercontent.com/vinyldns/vinyldns/${BRANCH}/version.sbt" -P "${CURDIR}/target"

if [ -z "$V" ]; then
    # Calculate the version by using version.sbt, this will pull out something like 0.9.4
    V=$(find $CURDIR/target -name "version.sbt" | head -n1 | xargs grep "[ \\t]*version in ThisBuild :=" | head -n1 | sed 's/.*"\(.*\)".*/\1/')
    echo "VERSION ON BRANCH ${BRANCH} IS ${V}"
    VINYLDNS_VERSION=

    if [[ "$V" == *-SNAPSHOT ]]; then
        if [ -z "$BUILD_TAG" ]; then
            # build tag is not defined, so assume -SNAPSHOT
            VINYLDNS_VERSION="$V"
        else
            # build tag IS defined, drop the SNAPSHOT and append the build tag
            VINYLDNS_VERSION="${V%?????????}${BUILD_TAG}"
        fi
    else
        # NOT a -SNAPSHOT, append the build tag if there is one otherwise it will be empty!
        VINYLDNS_VERSION="$V${BUILD_TAG}"
    fi
else
    VINYLDNS_VERSION="$V"
fi
export VINYLDNS_VERSION=$VINYLDNS_VERSION

echo "VINYLDNS VERSION BEING RELEASED IS $VINYLDNS_VERSION"

if [ $DO_BUILD -eq 1 ]; then
    docker-compose -f $CURDIR/docker/docker-compose.yml build \
        --no-cache \
        --parallel \
        --build-arg VINYLDNS_VERSION="${VINYLDNS_VERSION}" \
        --build-arg BRANCH="${BRANCH}"

    if [ $? -eq 0 ]; then
        # Runs smoke tests to make sure the new images are sound
        docker-compose -f $CURDIR/docker/docker-compose.yml --log-level ERROR up --exit-code-from functest
    fi

    if [ $? -eq 0 ]; then
        docker tag vinyldns/test-bind9:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test-bind9:$VINYLDNS_VERSION
        docker tag vinyldns/test:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test:$VINYLDNS_VERSION
        docker tag vinyldns/api:$VINYLDNS_VERSION $REPOSITORY/vinyldns/api:$VINYLDNS_VERSION
        docker tag vinyldns/portal:$VINYLDNS_VERSION $REPOSITORY/vinyldns/portal:$VINYLDNS_VERSION

        if [ $TAG_LATEST -eq 1 ]; then
            echo "Tagging latest..."
            docker tag vinyldns/test-bind9:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test-bind9:latest
            docker tag vinyldns/test:$VINYLDNS_VERSION $REPOSITORY/vinyldns/test:latest
            docker tag vinyldns/api:$VINYLDNS_VERSION $REPOSITORY/vinyldns/api:latest
            docker tag vinyldns/portal:$VINYLDNS_VERSION $REPOSITORY/vinyldns/portal:latest
        fi
    fi
fi

if [ $DOCKER_PUSH -eq 1 ]; then
    docker push $REPOSITORY/vinyldns/test-bind9:$VINYLDNS_VERSION
    docker push $REPOSITORY/vinyldns/test:$VINYLDNS_VERSION
    docker push $REPOSITORY/vinyldns/api:$VINYLDNS_VERSION
    docker push $REPOSITORY/vinyldns/portal:$VINYLDNS_VERSION

    if [ $TAG_LATEST -eq 1 ]; then
        echo "Pushing latest..."
        docker push $REPOSITORY/vinyldns/test-bind9:latest
        docker push $REPOSITORY/vinyldns/test:latest
        docker push $REPOSITORY/vinyldns/api:latest
        docker push $REPOSITORY/vinyldns/portal:latest
    fi
fi
 19	build/docker/.env Normal file
@ -0,0 +1,19 @@
# Portal settings
PORTAL_PORT=9001
PLAY_HTTP_SECRET_KEY=change-this-for-prod
VINYLDNS_BACKEND_URL=http://vinyldns-api:9000
LDAP_PROVIDER_URL=ldap://vinyldns-ldap:19004
TEST_LOGIN=false

# API Settings
REST_PORT=9000
SQS_SERVICE_ENDPOINT=http://vinyldns-integration:19003
SNS_SERVICE_ENDPOINT=http://vinyldns-integration:19003
MYSQL_ENDPOINT=vinyldns-integration:19002
DEFAULT_DNS_ADDRESS=vinyldns-integration:19001

JDBC_DRIVER=org.mariadb.jdbc.Driver
JDBC_URL=jdbc:mariadb://vinyldns-integration:19002/vinyldns?user=root&password=pass
JDBC_MIGRATION_URL=jdbc:mariadb://vinyldns-integration:19002/?user=root&password=pass
JDBC_USER=root
JDBC_PASSWORD=pass
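These are plain environment variables, so they can be handed to a container with Docker's `--env-file` flag, which is how the Makefile further down consumes this file; a minimal sketch, assuming the `vinyldns/api` image and the `vinyldns_net` network created by that Makefile already exist locally:

```bash
# Start the API with the shared local settings from this .env file
docker run -i --rm --env-file build/docker/.env --network vinyldns_net -p 9000:9000 vinyldns/api:latest
```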
41
build/docker/README.md
Normal file
41
build/docker/README.md
Normal file
@ -0,0 +1,41 @@
|
|||||||
|
# Docker Images
|
||||||
|
|
||||||
|
This folder contains the tools to create Docker images for the VinylDNS API and Portal
|
||||||
|
|
||||||
|
|
||||||
|
## Docker Images
|
||||||
|
|
||||||
|
- `vinyldns/api` - this is the heart of the VinylDNS system, the backend API
|
||||||
|
- `vinyldns/portal` - the VinylDNS web UI
|
||||||
|
|
||||||
|
### `vinyldns/api`
|
||||||
|
|
||||||
|
The default build for vinyldns api assumes an **ALL MYSQL** installation.
|
||||||
|
|
||||||
|
#### Environment Variables
|
||||||
|
|
||||||
|
- `VINYLDNS_VERSION` - this is the version of VinylDNS the API is running, typically you will not set this as it is set
|
||||||
|
as part of the container build
|
||||||
|
|
||||||
|
#### Volumes
|
||||||
|
|
||||||
|
- `/opt/vinyldns/conf/` - if you need to have your own application config file. This is **MANDATORY** for any production
|
||||||
|
environments. Typically, you will add your own `application.conf` file in here with your settings.
|
||||||
|
- `/opt/vinyldns/lib_extra/` - if you need to have additional jar files available to your VinylDNS instance. Rarely
|
||||||
|
used, but if you want to bring your own message queue or database you can put the `jar` files there
|
||||||
|
|
||||||
|
### `vinyldns/portal`
|
||||||
|
|
||||||
|
The default build for vinyldns portal assumes an **ALL MYSQL** installation.
|
||||||
|
|
||||||
|
#### Environment Variables
|
||||||
|
|
||||||
|
- `VINYLDNS_VERSION` - this is the version of VinylDNS the API is running, typically you will not set this as it is set
|
||||||
|
as part of the container build
|
||||||
|
|
||||||
|
#### Volumes
|
||||||
|
|
||||||
|
- `/opt/vinyldns/conf/` - if you need to have your own application config file. This is **MANDATORY** for any production
|
||||||
|
environments. Typically, you will add your own `application.conf` file in here with your settings.
|
||||||
|
- `/opt/vinyldns/lib_extra/` - if you need to have additional jar files available to your VinylDNS instance. Rarely
|
||||||
|
used, but if you want to bring your own message queue or database you can put the `jar` files there
|
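To illustrate the volume mounts described above, here is a hedged example of running the API image with a custom config and extra-jar directory; the host paths are hypothetical placeholders:

```bash
# Mount a host directory containing application.conf (and optionally logback.xml)
# into the image's config volume, plus a directory of extra jars for the classpath.
docker run -i --rm \
  -v "$(pwd)/my-vinyldns-conf:/opt/vinyldns/conf/" \
  -v "$(pwd)/my-extra-jars:/opt/vinyldns/lib_extra/" \
  -p 9000:9000 \
  vinyldns/api:latest
```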
@ -1,6 +0,0 @@
-Xms512M
-Xmx1024M
-Xss2M
-XX:MaxMetaspaceSize=512M
-XX:ReservedCodeCacheSize=512M
-Djava.net.preferIPv4Stack=true
@ -1,31 +1,38 @@
# Build VinylDNS API if the JAR doesn't already exist
FROM vinyldns/build:base-build as base-build
COPY . /build/
WORKDIR /build

## Run the build if we don't already have a vinyldns-api.jar
RUN mkdir -p /opt/vinyldns/conf && \
    if [ -f artifacts/vinyldns-api.jar ]; then cp artifacts/vinyldns-api.jar /opt/vinyldns/; fi && \
    if [ ! -f /opt/vinyldns/vinyldns-api.jar ]; then \
      env SBT_OPTS="-Xmx2G -Xms512M -Xss2M -XX:MaxMetaspaceSize=2G" \
      sbt -Dbuild.scalafmtOnCompile=false -Dbuild.lintOnCompile=false ";project api;coverageOff;assembly" \
      && cp artifacts/vinyldns-api.jar /opt/vinyldns/; \
    fi

FROM openjdk:11-slim
ARG DOCKER_FILE_PATH
ARG VINYLDNS_VERSION

RUN test -n "$VINYLDNS_VERSION" || (echo "VINYLDNS_VERSION not set" && false) && \
    test -n "$DOCKER_FILE_PATH" || (echo "DOCKER_FILE_PATH not set" && false) && \
    mkdir -p /opt/vinyldns/lib_extra && \
    echo "${VINYLDNS_VERSION}" > /opt/vinyldns/version

COPY --from=base-build /opt/vinyldns /opt/vinyldns
COPY ${DOCKER_FILE_PATH}/application.conf /opt/vinyldns/conf
COPY ${DOCKER_FILE_PATH}/logback.xml /opt/vinyldns/conf

# Mount the volume for config file and lib extras
VOLUME ["/opt/vinyldns/lib_extra/", "/opt/vinyldns/conf/"]

EXPOSE 9000

ENV JVM_OPTS=""
ENTRYPOINT ["/bin/bash", "-c", "java ${JVM_OPTS} -Dconfig.file=/opt/vinyldns/conf/application.conf \
    -Dlogback.configurationFile=/opt/vinyldns/conf/logback.xml \
    -Dvinyldns.version=$(cat /opt/vinyldns/version) \
    -cp /opt/vinyldns/lib_extra/* \
    -jar /opt/vinyldns/vinyldns-api.jar" ]
 67	build/docker/api/Makefile Normal file
@ -0,0 +1,67 @@
SHELL=bash
IMAGE_TAG=$(shell awk -F'"' '{print $$2}' ../../../version.sbt)
IMAGE_NAME=vinyldns/api
ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))

# Check that the required version of make is being used
REQ_MAKE_VER:=3.82
ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER))))
   $(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION))
endif

# Extract arguments for `make run`
EXTRACT_ARGS=false
ifeq (run,$(firstword $(MAKECMDGOALS)))
    EXTRACT_ARGS=true
endif
ifeq ($(EXTRACT_ARGS),true)
  # use the rest as arguments for "run"
  WITH_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
endif
ifneq ($(WITH_ARGS),)
  ARG_SEPARATOR=--
endif

%:
	@:

.ONESHELL:

.PHONY: all artifact build run publish build-vnext publish-vnext

all: build run

artifact:
	@set -euo pipefail
	cd ../../..
	docker build $(BUILD_ARGS) --target base-build --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t "vinyldns:api-artifact" -f "$(ROOT_DIR)/Dockerfile" .
	USE_TTY="" && test -t 1 && USE_TTY="-t"
	docker run -i $${USE_TTY} --rm -v "$$(pwd)/:/output" vinyldns:api-artifact /bin/bash -c "mkdir -p /output/artifacts/ && cp /build/artifacts/*.jar /output/artifacts/"

build:
	@set -euo pipefail
	cd ../../..
	docker build $(BUILD_ARGS) --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t $(IMAGE_NAME):$(IMAGE_TAG) -f "$(ROOT_DIR)/Dockerfile" .
	docker tag $(IMAGE_NAME):$(IMAGE_TAG) $(IMAGE_NAME):latest

run:
	@set -euo pipefail
	docker network create --driver bridge vinyldns_net &> /dev/null || true
	USE_TTY="" && test -t 1 && USE_TTY="-t"
	docker run -i $${USE_TTY} --rm --env-file "$(ROOT_DIR)/../.env" --network vinyldns_net $(DOCKER_PARAMS) -v "$$(pwd)/:/opt/vinyldns/conf/" -p 9000:9000 $(IMAGE_NAME):$(IMAGE_TAG) $(ARG_SEPARATOR) $(WITH_ARGS)

publish: build
	@set -euo pipefail
	DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):$(IMAGE_TAG)
	DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):latest

build-vnext:
	@set -euo pipefail
	cd ../../..
	docker build $(BUILD_ARGS) --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="vnext" -t $(IMAGE_NAME):vnext -f "$(ROOT_DIR)/Dockerfile" .
	docker tag $(IMAGE_NAME):vnext "$(IMAGE_NAME):vnext-$$(date -u +"%Y%m%d")"

publish-vnext: build-vnext
	@set -euo pipefail
	DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):vnext
	DOCKER_CONTENT_TRUST=1 docker push "$(IMAGE_NAME):vnext-$$(date -u +"%Y%m%d")"
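Typical invocations of this Makefile from `build/docker/api/` would be a variation on the sketch below; the targets are the ones defined above, and the `WITH_ARGS` value is a hypothetical example of the argument pass-through for `run`:

```bash
make artifact                  # produce vinyldns-api.jar in artifacts/ via the base-build stage
make build                     # build and tag the vinyldns/api image from the repo root context
make run                       # start the image on the vinyldns_net bridge network with ../.env
make run WITH_ARGS="--help"    # hypothetical: extra arguments forwarded to the container
```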
@ -1,69 +1,194 @@
vinyldns {
  base-version = "0.0.0-local-dev"
  version = ${vinyldns.base-version} # default to the base version if not overridden
  version = ${?VINYLDNS_VERSION} # override the base version via env var

  # How often any particular zone can be synchronized in milliseconds
  sync-delay = 10000
  sync-delay = ${?SYNC_DELAY}

  # If we should start up polling for change requests, set this to false for the inactive cluster
  processing-disabled = false
  processing-disabled = ${?PROCESSING_DISABLED}

  # Number of records that can be in a zone
  max-zone-size = 60000
  max-zone-size = ${?MAX_ZONE_SIZE}

  # Types of unowned records that users can access in shared zones
  shared-approved-types = ["A", "AAAA", "CNAME", "PTR", "TXT"]

  # Batch change settings
  batch-change-limit = 1000
  batch-change-limit = ${?BATCH_CHANGE_LIMIT}
  manual-batch-review-enabled = true
  manual-batch-review-enabled = ${?MANUAL_BATCH_REVIEW_ENABLED}
  scheduled-changes-enabled = true
  scheduled-changes-enabled = ${?SCHEDULED_CHANGES_ENABLED}
  multi-record-batch-change-enabled = true
  multi-record-batch-change-enabled = ${?MULTI_RECORD_BATCH_CHANGE_ENABLED}

  # configured backend providers
  backend {
    # Use "default" when dns backend legacy = true
    # otherwise, use the id of one of the connections in any of your backends
    default-backend-id = "default"

    # this is where we can save additional backends
    backend-providers = [
      {
        class-name = "vinyldns.api.backend.dns.DnsBackendProviderLoader"
        settings = {
          legacy = false
          backends = [
            {
              id = "default"
              zone-connection = {
                name = "vinyldns."
                key-name = "vinyldns."
                key-name = ${?DEFAULT_DNS_KEY_NAME}
                key = "nzisn+4G2ldMn0q1CV3vsg=="
                key = ${?DEFAULT_DNS_KEY_SECRET}
                primary-server = "127.0.0.1:19001"
                primary-server = ${?DEFAULT_DNS_ADDRESS}
              }
              transfer-connection = {
                name = "vinyldns."
                key-name = "vinyldns."
                key-name = ${?DEFAULT_DNS_KEY_NAME}
                key = "nzisn+4G2ldMn0q1CV3vsg=="
                key = ${?DEFAULT_DNS_KEY_SECRET}
                primary-server = "127.0.0.1:19001"
                primary-server = ${?DEFAULT_DNS_ADDRESS}
              },
              tsig-usage = "always"
            },
            {
              id = "func-test-backend"
              zone-connection = {
                name = "vinyldns."
                key-name = "vinyldns."
                key-name = ${?DEFAULT_DNS_KEY_NAME}
                key = "nzisn+4G2ldMn0q1CV3vsg=="
                key = ${?DEFAULT_DNS_KEY_SECRET}
                primary-server = "127.0.0.1:19001"
                primary-server = ${?DEFAULT_DNS_ADDRESS}
              }
              transfer-connection = {
                name = "vinyldns."
                key-name = "vinyldns."
                key-name = ${?DEFAULT_DNS_KEY_NAME}
                key = "nzisn+4G2ldMn0q1CV3vsg=="
                key = ${?DEFAULT_DNS_KEY_SECRET}
                primary-server = "127.0.0.1:19001"
                primary-server = ${?DEFAULT_DNS_ADDRESS}
              },
              tsig-usage = "always"
            }
          ]
        }
      }
    ]
  }

  queue {
    class-name = "vinyldns.sqs.queue.SqsMessageQueueProvider"
    messages-per-poll = 10
    polling-interval = 250.millis

    settings {
      # AWS access key and secret.
      access-key = "test"
      access-key = ${?AWS_ACCESS_KEY}
      secret-key = "test"
      secret-key = ${?AWS_SECRET_ACCESS_KEY}

      # Regional endpoint to make your requests (eg. 'us-west-2', 'us-east-1', etc.). This is the region where your queue is housed.
      signing-region = "us-east-1"
      signing-region = ${?SQS_REGION}

      # Endpoint to access queue
      service-endpoint = "http://localhost:19003/"
      service-endpoint = ${?SQS_SERVICE_ENDPOINT}

      # Queue name. Should be used in conjunction with service endpoint, rather than using a queue url which is subject to change.
      queue-name = "vinyldns"
      queue-name = ${?SQS_QUEUE_NAME}
    }
  }

  email {
    class-name = "vinyldns.api.notifier.email.EmailNotifierProvider"
    class-name = ${?EMAIL_CLASS_NAME}
    settings = {
      from = "VinylDNS <do-not-reply@vinyldns.io>"
      from = ${?EMAIL_FROM}
    }
  }

  sns {
    class-name = "vinyldns.api.notifier.sns.SnsNotifierProvider"
    class-name = ${?SNS_CLASS_NAME}
    settings {
      topic-arn = "arn:aws:sns:us-east-1:000000000000:batchChanges"
      topic-arn = ${?SNS_TOPIC_ARN}
      access-key = "test"
      access-key = ${?SNS_ACCESS_KEY}
      secret-key = "test"
      secret-key = ${?SNS_SECRET_KEY}
      service-endpoint = "http://localhost:19003"
      service-endpoint = ${?SNS_SERVICE_ENDPOINT}
      signing-region = "us-east-1"
      signing-region = ${?SNS_REGION}
    }
  }

  rest {
    host = "0.0.0.0"
    port = 9000
    port = ${?API_SERVICE_PORT}
  }

  approved-name-servers = [
    "172.17.42.1.",
    "ns1.parent.com.",
    "ns1.parent.com1.",
    "ns1.parent.com2.",
    "ns1.parent.com3.",
    "ns1.parent.com4."
  ]

  # Note: This MUST match the Portal or strange errors will ensue, NoOpCrypto should not be used for production
  crypto {
    type = "vinyldns.core.crypto.NoOpCrypto"
    type = ${?CRYPTO_TYPE}
    secret = ${?CRYPTO_SECRET}
  }

  data-stores = ["mysql"]

  mysql {
    settings {
      # JDBC Settings, these are all values in scalikejdbc-config, not our own
      # these must be overridden to use MYSQL for production use
      # assumes a docker or mysql instance running locally
      name = "vinyldns"
      name = ${?DATABASE_NAME}
      driver = "org.mariadb.jdbc.Driver"
      driver = ${?JDBC_DRIVER}
      migration-url = "jdbc:mariadb://localhost:19002/?user=root&password=pass"
      migration-url = ${?JDBC_MIGRATION_URL}
      url = "jdbc:mariadb://localhost:19002/vinyldns?user=root&password=pass"
      url = ${?JDBC_URL}
      user = "root"
      user = ${?JDBC_USER}
      password = "pass"
      password = ${?JDBC_PASSWORD}
    }

    # TODO: Remove the need for these useless configuration blocks
    repositories {
      zone {
      }
@ -86,39 +211,7 @@ vinyldns {
      }
    }
  }

  backends = []

  # FQDNs / IPs that cannot be modified via VinylDNS
  # regex-list used for all record types except PTR
@ -136,10 +229,78 @@ vinyldns {
    ]
  }

  # FQDNs / IPs / zone names that require manual review upon submission in batch change interface
  # domain-list used for all record types except PTR
  # ip-list used exclusively for PTR records
  manual-review-domains = {
    domain-list = [
      "needs-review.*"
    ]
    ip-list = [
      "192.0.1.254",
      "192.0.1.255",
      "192.0.2.254",
      "192.0.2.255",
      "192.0.3.254",
      "192.0.3.255",
      "192.0.4.254",
      "192.0.4.255",
      "fd69:27cc:fe91:0:0:0:ffff:1",
      "fd69:27cc:fe91:0:0:0:ffff:2",
      "fd69:27cc:fe92:0:0:0:ffff:1",
      "fd69:27cc:fe92:0:0:0:ffff:2",
      "fd69:27cc:fe93:0:0:0:ffff:1",
      "fd69:27cc:fe93:0:0:0:ffff:2",
      "fd69:27cc:fe94:0:0:0:ffff:1",
      "fd69:27cc:fe94:0:0:0:ffff:2"
    ]
    zone-name-list = [
      "zone.requires.review.",
      "zone.requires.review1.",
      "zone.requires.review2.",
      "zone.requires.review3.",
      "zone.requires.review4."
    ]
  }

  # FQDNs / IPs that cannot be modified via VinylDNS
  # regex-list used for all record types except PTR
  # ip-list used exclusively for PTR records
  high-value-domains = {
    regex-list = [
      "high-value-domain.*" # for testing
    ]
    ip-list = [
      # using reverse zones in the vinyldns/bind9 docker image for testing
      "192.0.1.252",
      "192.0.1.253",
      "192.0.2.252",
      "192.0.2.253",
      "192.0.3.252",
      "192.0.3.253",
      "192.0.4.252",
      "192.0.4.253",
      "fd69:27cc:fe91:0:0:0:0:ffff",
      "fd69:27cc:fe91:0:0:0:ffff:0",
      "fd69:27cc:fe92:0:0:0:0:ffff",
      "fd69:27cc:fe92:0:0:0:ffff:0",
      "fd69:27cc:fe93:0:0:0:0:ffff",
      "fd69:27cc:fe93:0:0:0:ffff:0",
      "fd69:27cc:fe94:0:0:0:0:ffff",
      "fd69:27cc:fe94:0:0:0:ffff:0"
    ]
  }

  global-acl-rules = [
    {
      group-ids: ["global-acl-group-id"],
      fqdn-regex-list: [".*shared[0-9]{1}."]
    },
    {
      group-ids: ["another-global-acl-group"],
      fqdn-regex-list: [".*ok[0-9]{1}."]
    }
  ]
}

akka {
@ -168,3 +329,7 @@ akka.http {
|
|||||||
illegal-header-warnings = on
|
illegal-header-warnings = on
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
|
# You can provide configuration overrides via local.conf if you don't want to replace everything in
|
||||||
|
# this configuration file
|
||||||
|
include "local.conf"
|
||||||
|
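The `include "local.conf"` directive above means any setting in this API configuration can be overridden without replacing the whole file; a minimal sketch of such an override, using keys that appear in the config above (values purely illustrative):

# create a local.conf next to the shipped application.conf so the include above picks it up
cat > local.conf <<'EOF'
vinyldns {
  batch-change-limit = 500
  manual-batch-review-enabled = false
}
EOF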
@ -1,77 +0,0 @@
|
|||||||
version: "3.0"
|
|
||||||
services:
|
|
||||||
mysql:
|
|
||||||
image: "mysql:5.7"
|
|
||||||
container_name: "vinyldns-mysql"
|
|
||||||
environment:
|
|
||||||
MYSQL_ROOT_PASSWORD: 'pass'
|
|
||||||
MYSQL_ROOT_HOST: '%'
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
ports:
|
|
||||||
- "19002:3306"
|
|
||||||
|
|
||||||
bind9:
|
|
||||||
build:
|
|
||||||
context: ./test-bind9
|
|
||||||
args:
|
|
||||||
BRANCH: master
|
|
||||||
image: "vinyldns/test-bind9:${VINYLDNS_VERSION}"
|
|
||||||
container_name: "vinyldns-bind9"
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
ports:
|
|
||||||
- "19001:53/udp"
|
|
||||||
- "19001:53"
|
|
||||||
|
|
||||||
api:
|
|
||||||
build:
|
|
||||||
context: ./api
|
|
||||||
image: "vinyldns/api:${VINYLDNS_VERSION}"
|
|
||||||
container_name: "vinyldns-api"
|
|
||||||
environment:
|
|
||||||
MYSQL_ROOT_PASSWORD: 'pass'
|
|
||||||
MYSQL_ROOT_HOST: '%'
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
ports:
|
|
||||||
- "9000:9000"
|
|
||||||
volumes:
|
|
||||||
- ./api/application.conf:/opt/docker/conf/application.conf
|
|
||||||
- ./api/logback.xml:/opt/docker/conf/logback.xml
|
|
||||||
depends_on:
|
|
||||||
- mysql
|
|
||||||
|
|
||||||
ldap:
|
|
||||||
image: rroemhild/test-openldap
|
|
||||||
container_name: "vinyldns-ldap"
|
|
||||||
ports:
|
|
||||||
- "19008:389"
|
|
||||||
|
|
||||||
portal:
|
|
||||||
build:
|
|
||||||
context: ./portal
|
|
||||||
image: "vinyldns/portal:${VINYLDNS_VERSION}"
|
|
||||||
container_name: "vinyldns-portal"
|
|
||||||
environment:
|
|
||||||
MYSQL_ROOT_PASSWORD: 'pass'
|
|
||||||
MYSQL_ROOT_HOST: '%'
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
ports:
|
|
||||||
- "9001:9000"
|
|
||||||
volumes:
|
|
||||||
- ./portal/application.conf:/opt/docker/conf/application.conf
|
|
||||||
depends_on:
|
|
||||||
- api
|
|
||||||
- ldap
|
|
||||||
|
|
||||||
functest:
|
|
||||||
build:
|
|
||||||
context: ./test
|
|
||||||
image: "vinyldns/test:${VINYLDNS_VERSION}"
|
|
||||||
environment:
|
|
||||||
TEST_PATTERN: "test_verify_production"
|
|
||||||
container_name: "vinyldns-functest"
|
|
||||||
depends_on:
|
|
||||||
- api
|
|
@ -1,6 +0,0 @@
|
|||||||
-Xms512M
|
|
||||||
-Xmx1024M
|
|
||||||
-Xss2M
|
|
||||||
-XX:MaxMetaspaceSize=512M
|
|
||||||
-XX:ReservedCodeCacheSize=512M
|
|
||||||
-Djava.net.preferIPv4Stack=true
|
|
@ -1,44 +1,46 @@
|
|||||||
FROM hseeberger/scala-sbt:11.0.8_1.3.13_2.11.12 as builder
|
FROM vinyldns/build:base-build-portal as base-build
|
||||||
|
ARG VINYLDNS_VERSION
|
||||||
|
COPY . /build
|
||||||
|
WORKDIR /build
|
||||||
|
|
||||||
ARG BRANCH=master
|
RUN mkdir -p /opt/vinyldns/conf && \
|
||||||
|
if [ -f artifacts/vinyldns-portal.zip ]; then \
|
||||||
|
unzip artifacts/vinyldns-portal.zip -d /opt/vinyldns && \
|
||||||
|
mv /opt/vinyldns/vinyldns-portal/{lib,share,conf} /opt/vinyldns && \
|
||||||
|
rm -rf /opt/vinyldns/vinyldns-portal*; \
|
||||||
|
fi && \
|
||||||
|
if [ ! -f /opt/vinyldns/lib/vinyldns.portal*.jar ]; then \
|
||||||
|
cp /build/node_modules.tar.xz /build/modules/portal && \
|
||||||
|
cd /build/modules/portal && tar Jxf node_modules.tar.xz && \
|
||||||
|
cd /build && \
|
||||||
|
modules/portal/prepare-portal.sh && \
|
||||||
|
sbt "set version in ThisBuild := \"${VINYLDNS_VERSION}\"; project portal; dist" && \
|
||||||
|
unzip artifacts/vinyldns-portal.zip -d /opt/vinyldns && \
|
||||||
|
mv /opt/vinyldns/vinyldns-portal/{lib,share,conf} /opt/vinyldns && \
|
||||||
|
rm -rf /opt/vinyldns/vinyldns-portal*; \
|
||||||
|
fi
|
||||||
|
|
||||||
|
FROM openjdk:11-slim
|
||||||
|
ARG DOCKER_FILE_PATH
|
||||||
ARG VINYLDNS_VERSION
|
ARG VINYLDNS_VERSION
|
||||||
|
|
||||||
RUN git clone -b ${BRANCH} --single-branch --depth 1 https://github.com/vinyldns/vinyldns.git /vinyldns
|
RUN test -n "VINYLDNS_VERSION" || (echo "VINYLDNS_VERSION not set" && false) && \
|
||||||
|
test -n "DOCKER_FILE_PATH" || (echo "DOCKER_FILE_PATH not set" && false) && \
|
||||||
|
mkdir -p /opt/vinyldns/lib_extra && \
|
||||||
|
echo "${VINYLDNS_VERSION}" > /opt/vinyldns/version
|
||||||
|
|
||||||
# The default jvmopts are huge, meant for running everything, use a pared-down version
|
COPY --from=base-build /opt/vinyldns /opt/vinyldns
|
||||||
COPY .jvmopts /vinyldns
|
COPY ${DOCKER_FILE_PATH}/application.conf /opt/vinyldns/conf
|
||||||
|
COPY ${DOCKER_FILE_PATH}/logback.xml /opt/vinyldns/conf
|
||||||
|
|
||||||
# Needed for preparePortal
|
|
||||||
RUN apt-get update \
|
|
||||||
&& apt-get install -y \
|
|
||||||
apt-transport-https \
|
|
||||||
curl \
|
|
||||||
gnupg \
|
|
||||||
&& curl -sL https://deb.nodesource.com/setup_12.x | bash - \
|
|
||||||
&& apt-get install -y nodejs \
|
|
||||||
&& npm install -g grunt-cli
|
|
||||||
|
|
||||||
RUN cd /vinyldns ; sbt "set version in ThisBuild := \"${VINYLDNS_VERSION}\"" portal/preparePortal universal:packageZipTarball
|
VOLUME ["/opt/vinyldns/lib_extra/", "/opt/vinyldns/conf/"]
|
||||||
|
|
||||||
FROM adoptopenjdk/openjdk11:jdk-11.0.8_10-alpine
|
EXPOSE 9001
|
||||||
|
|
||||||
RUN apk add --update --no-cache netcat-openbsd bash
|
ENV JVM_OPTS=""
|
||||||
|
ENTRYPOINT ["/bin/bash","-c", "java ${JVM_OPTS} -Dvinyldns.version=$(cat /opt/vinyldns/version) \
|
||||||
COPY --from=builder /vinyldns/modules/portal/target/universal/portal.tgz /
|
-Dlogback.configurationFile=/opt/vinyldns/conf/logback.xml \
|
||||||
|
-Dconfig.file=/opt/vinyldns/conf/application.conf \
|
||||||
RUN mkdir -p /opt && \
|
-cp /opt/vinyldns/conf:/opt/vinyldns/lib/*:/opt/vinyldns/lib_extra/* \
|
||||||
tar -xzvf /portal.tgz && \
|
play.core.server.ProdServerStart"]
|
||||||
mv /portal /opt/docker && \
|
|
||||||
mkdir -p /opt/docker/lib_extra
|
|
||||||
|
|
||||||
# This will set the vinyldns version, make sure to have this in config... version = ${?VINYLDNS_VERSION}
|
|
||||||
ARG VINYLDNS_VERSION
|
|
||||||
ENV VINYLDNS_VERSION=$VINYLDNS_VERSION
|
|
||||||
|
|
||||||
# Mount the volume for config file and lib extras
|
|
||||||
# Note: These volume names are used in the build.sbt
|
|
||||||
VOLUME ["/opt/docker/lib_extra/", "/opt/docker/conf"]
|
|
||||||
|
|
||||||
EXPOSE 9000
|
|
||||||
|
|
||||||
ENTRYPOINT ["/opt/docker/bin/portal"]
|
|
||||||
|
67
build/docker/portal/Makefile
Normal file
@ -0,0 +1,67 @@
|
|||||||
|
SHELL=bash
|
||||||
|
IMAGE_TAG=$(shell awk -F'"' '{print $$2}' ../../../version.sbt)
|
||||||
|
IMAGE_NAME=vinyldns/portal
|
||||||
|
ROOT_DIR:=$(shell dirname $(realpath $(lastword $(MAKEFILE_LIST))))
|
||||||
|
|
||||||
|
# Check that the required version of make is being used
|
||||||
|
REQ_MAKE_VER:=3.82
|
||||||
|
ifneq ($(REQ_MAKE_VER),$(firstword $(sort $(MAKE_VERSION) $(REQ_MAKE_VER))))
|
||||||
|
$(error The version of MAKE $(REQ_MAKE_VER) or higher is required; you are running $(MAKE_VERSION))
|
||||||
|
endif
|
||||||
|
|
||||||
|
# Extract arguments for `make run`
|
||||||
|
EXTRACT_ARGS=false
|
||||||
|
ifeq (run,$(firstword $(MAKECMDGOALS)))
|
||||||
|
EXTRACT_ARGS=true
|
||||||
|
endif
|
||||||
|
ifeq ($(EXTRACT_ARGS),true)
|
||||||
|
# use the rest as arguments for "run"
|
||||||
|
WITH_ARGS ?= $(wordlist 2,$(words $(MAKECMDGOALS)),$(MAKECMDGOALS))
|
||||||
|
endif
|
||||||
|
ifneq ($(WITH_ARGS),)
|
||||||
|
ARG_SEPARATOR=--
|
||||||
|
endif
|
||||||
|
|
||||||
|
%:
|
||||||
|
@:
|
||||||
|
|
||||||
|
.ONESHELL:
|
||||||
|
|
||||||
|
.PHONY: all artifact build run publish build-vnext publish-vnext
|
||||||
|
|
||||||
|
all: build run
|
||||||
|
|
||||||
|
artifact:
|
||||||
|
@set -euo pipefail
|
||||||
|
cd ../../..
|
||||||
|
docker build $(BUILD_ARGS) --target base-build --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t "vinyldns:portal-artifact" -f "$(ROOT_DIR)/Dockerfile" .
|
||||||
|
USE_TTY="" && test -t 1 && USE_TTY="-t"
|
||||||
|
docker run -i $${USE_TTY} --rm -v "$$(pwd)/:/output" vinyldns:portal-artifact /bin/bash -c "mkdir -p /output/artifacts/ && cp /build/artifacts/*.zip /output/artifacts/"
|
||||||
|
|
||||||
|
build:
|
||||||
|
@set -euo pipefail
|
||||||
|
cd ../../..
|
||||||
|
docker build $(BUILD_ARGS) --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="$(IMAGE_TAG)" -t $(IMAGE_NAME):$(IMAGE_TAG) -f "$(ROOT_DIR)/Dockerfile" .
|
||||||
|
docker tag $(IMAGE_NAME):$(IMAGE_TAG) $(IMAGE_NAME):latest
|
||||||
|
|
||||||
|
run:
|
||||||
|
@set -euo pipefail
|
||||||
|
docker network create --driver bridge vinyldns_net &> /dev/null || true
|
||||||
|
USE_TTY="" && test -t 1 && USE_TTY="-t"
|
||||||
|
docker run -i $${USE_TTY} --rm --network vinyldns_net --env-file "$(ROOT_DIR)/../.env" $(DOCKER_PARAMS) -v "$$(pwd)/application.conf:/opt/vinyldns/conf/application.conf" -v "$$(pwd)/logback.xml:/opt/vinyldns/conf/logback.xml" -p 9001:9001 $(IMAGE_NAME):$(IMAGE_TAG) $(ARG_SEPARATOR) $(WITH_ARGS)
|
||||||
|
|
||||||
|
publish: build
|
||||||
|
@set -euo pipefail
|
||||||
|
DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):$(IMAGE_TAG)
|
||||||
|
DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):latest
|
||||||
|
|
||||||
|
build-vnext:
|
||||||
|
@set -euo pipefail
|
||||||
|
cd ../../..
|
||||||
|
docker build $(BUILD_ARGS) --build-arg DOCKER_FILE_PATH="$$(realpath --relative-to="." "$(ROOT_DIR)")" --build-arg VINYLDNS_VERSION="vnext" -t $(IMAGE_NAME):vnext -f "$(ROOT_DIR)/Dockerfile" .
|
||||||
|
docker tag $(IMAGE_NAME):vnext "$(IMAGE_NAME):vnext-$$(date -u +'%Y%m%d')"
|
||||||
|
|
||||||
|
publish-vnext: build-vnext
|
||||||
|
@set -euo pipefail
|
||||||
|
DOCKER_CONTENT_TRUST=1 docker push $(IMAGE_NAME):vnext
|
||||||
|
DOCKER_CONTENT_TRUST=1 docker push "$(IMAGE_NAME):vnext-$$(date -u +'%Y%m%d')"
|
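The Makefile above drives the whole portal image workflow from build/docker/portal; typical invocations would look roughly like this (a sketch of usage, not part of the commit):

make artifact    # build the base-build stage and copy artifacts/vinyldns-portal.zip out of it
make build       # build vinyldns/portal:<version from version.sbt> and tag it :latest
make run         # run the image on the vinyldns_net bridge network with local application.conf/logback.xml mounted
make publish     # push both tags with DOCKER_CONTENT_TRUST=1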
@ -11,73 +11,52 @@ LDAP {
|
|||||||
# This will be the name of the LDAP field that carries the user's login id (what they enter in the username in login form)
|
# This will be the name of the LDAP field that carries the user's login id (what they enter in the username in login form)
|
||||||
userNameAttribute = "uid"
|
userNameAttribute = "uid"
|
||||||
|
|
||||||
# For ogranization, leave empty for this demo, the domainName is what matters, and that is the LDAP structure
|
# For organization, leave empty for this demo, the domainName is what matters, and that is the LDAP structure
|
||||||
# to search for users that require login
|
# to search for users that require login
|
||||||
searchBase = [
|
searchBase = [
|
||||||
{organization = "", domainName = "ou=people,dc=planetexpress,dc=com"},
|
{organization = "", domainName = "ou=people,dc=planetexpress,dc=com"},
|
||||||
]
|
]
|
||||||
context {
|
context {
|
||||||
initialContextFactory = "com.sun.jndi.ldap.LdapCtxFactory"
|
initialContextFactory = "com.sun.jndi.ldap.LdapCtxFactory"
|
||||||
|
initialContextFactory = ${?LDAP_INITIAL_CONTEXT_CLASS}
|
||||||
securityAuthentication = "simple"
|
securityAuthentication = "simple"
|
||||||
|
securityAuthentication = ${?LDAP_SECURITY_AUTH}
|
||||||
|
|
||||||
# Note: The following assumes a purely docker setup, using container_name = vinyldns-ldap
|
# Note: The following assumes a purely docker setup, using container_name = vinyldns-ldap
|
||||||
providerUrl = "ldap://vinyldns-ldap:389"
|
providerUrl = "ldap://vinyldns-ldap:19004"
|
||||||
|
providerUrl = ${?LDAP_PROVIDER_URL}
|
||||||
}
|
}
|
||||||
|
|
||||||
# This is only needed if keeping vinyldns user store in sync with ldap (to auto lock out users who left your
|
# This is only needed if keeping vinyldns user store in sync with ldap (to auto lock out users who left your
|
||||||
# company for example)
|
# company for example)
|
||||||
user-sync {
|
user-sync {
|
||||||
enabled = false
|
enabled = false
|
||||||
|
enabled = ${?USER_SYNC_ENABLED}
|
||||||
hours-polling-interval = 1
|
hours-polling-interval = 1
|
||||||
|
hours-polling-interval = ${?USER_SYNC_POLL_INTERVAL}
|
||||||
}
|
}
|
||||||
}
|
}
|
||||||
|
|
||||||
# Note: This MUST match the API or strange errors will ensure, NoCrypto should not be used for production
|
# Note: This MUST match the API or strange errors will ensue, NoOpCrypto should not be used for production
|
||||||
crypto {
|
crypto {
|
||||||
type = "vinyldns.core.crypto.NoOpCrypto"
|
type = "vinyldns.core.crypto.NoOpCrypto"
|
||||||
|
type = ${?CRYPTO_TYPE}
|
||||||
|
secret = ${?CRYPTO_SECRET}
|
||||||
}
|
}
|
||||||
|
|
||||||
http.port = 9000
|
http.port = 9001
|
||||||
|
http.port = ${?PORTAL_PORT}
|
||||||
|
|
||||||
data-stores = ["mysql"]
|
data-stores = ["mysql"]
|
||||||
|
|
||||||
portal.vinyldns.backend.url = "http://vinyldns-api:9000"
|
# Must be true to manage shared zones through the portal
|
||||||
|
shared-display-enabled = true
|
||||||
# Note: The default mysql settings assume a local docker compose setup with mysql named vinyldns-mysql
|
shared-display-enabled = ${?SHARED_ZONES_ENABLED}
|
||||||
# follow the configuration guide to point to your mysql
|
|
||||||
# Only 3 repositories are needed for portal: user, task, user-change
|
|
||||||
mysql {
|
|
||||||
settings {
|
|
||||||
# JDBC Settings, these are all values in scalikejdbc-config, not our own
|
|
||||||
# these must be overridden to use MYSQL for production use
|
|
||||||
# assumes a docker or mysql instance running locally
|
|
||||||
name = "vinyldns"
|
|
||||||
driver = "org.mariadb.jdbc.Driver"
|
|
||||||
migration-url = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass"
|
|
||||||
url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass"
|
|
||||||
user = "root"
|
|
||||||
password = "pass"
|
|
||||||
# see https://github.com/brettwooldridge/HikariCP
|
|
||||||
connection-timeout-millis = 1000
|
|
||||||
idle-timeout = 10000
|
|
||||||
max-lifetime = 600000
|
|
||||||
maximum-pool-size = 20
|
|
||||||
minimum-idle = 20
|
|
||||||
register-mbeans = true
|
|
||||||
}
|
|
||||||
|
|
||||||
repositories {
|
|
||||||
user {
|
|
||||||
}
|
|
||||||
task {
|
|
||||||
}
|
|
||||||
user-change {
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# You generate this yourself following https://www.playframework.com/documentation/2.7.x/ApplicationSecret
|
# You generate this yourself following https://www.playframework.com/documentation/2.7.x/ApplicationSecret
|
||||||
play.http.secret.key = "rpkTGtoJvLIdIV?WU=0@yW^x:pcEGyAt`^p/P3G0fpbj9:uDnD@caSjCDqA0@tB="
|
play.http.secret.key = "changeme"
|
||||||
|
play.http.secret.key = ${?PLAY_HTTP_SECRET_KEY}
|
||||||
|
|
||||||
vinyldns.version = "unknown"
|
# You can provide configuration overrides via local.conf if you don't want to replace everything in
|
||||||
vinyldns.version = ${?VINYLDNS_VERSION}
|
# this configuration file
|
||||||
|
include "local.conf"
|
||||||
|
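Every `${?VAR}` reference in the portal configuration above falls back to an environment variable, so a deployment can override settings without touching the file; a sketch of the corresponding exports (values are placeholders only):

export PORTAL_PORT=9001
export PLAY_HTTP_SECRET_KEY="change-this-for-prod"
export LDAP_PROVIDER_URL="ldap://vinyldns-ldap:19004"
export CRYPTO_TYPE="vinyldns.core.crypto.NoOpCrypto"
export SHARED_ZONES_ENABLED=true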
21
build/docker/portal/logback.xml
Normal file
@ -0,0 +1,21 @@
|
|||||||
|
<configuration>
|
||||||
|
|
||||||
|
<conversionRule conversionWord="coloredLevel" converterClass="play.api.libs.logback.ColoredLevel" />
|
||||||
|
|
||||||
|
<appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
|
||||||
|
<encoder>
|
||||||
|
<pattern>%d{"yyyy-MM-dd HH:mm:ss,SSS"} %coloredLevel - %logger - %message%n%xException</pattern>
|
||||||
|
</encoder>
|
||||||
|
</appender>
|
||||||
|
<!--
|
||||||
|
The logger name is typically the Java/Scala package name.
|
||||||
|
This configures the log level to log at for a package and its children packages.
|
||||||
|
-->
|
||||||
|
<logger name="play" level="INFO" />
|
||||||
|
<logger name="application" level="DEBUG" />
|
||||||
|
|
||||||
|
<root level="INFO">
|
||||||
|
<appender-ref ref="STDOUT" />
|
||||||
|
</root>
|
||||||
|
|
||||||
|
</configuration>
|
@ -1,11 +0,0 @@
|
|||||||
FROM alpine/git:1.0.7 as gitcheckout
|
|
||||||
|
|
||||||
ARG BRANCH=master
|
|
||||||
|
|
||||||
RUN git clone -b ${BRANCH} --single-branch --depth 1 https://github.com/vinyldns/vinyldns.git /vinyldns
|
|
||||||
|
|
||||||
FROM vinyldns/bind9:0.0.5
|
|
||||||
|
|
||||||
COPY --from=gitcheckout /vinyldns/docker/bind9/zones/* /var/cache/bind/zones/
|
|
||||||
|
|
||||||
COPY --from=gitcheckout /vinyldns/docker/bind9/etc/named.conf.local /var/cache/bind/config
|
|
@ -1,28 +0,0 @@
|
|||||||
FROM alpine/git:1.0.7 as gitcheckout
|
|
||||||
|
|
||||||
ARG BRANCH=master
|
|
||||||
|
|
||||||
RUN git clone -b ${BRANCH} --single-branch --depth 1 https://github.com/vinyldns/vinyldns.git /vinyldns
|
|
||||||
|
|
||||||
FROM python:2.7.16-alpine3.9
|
|
||||||
|
|
||||||
RUN apk add --update --no-cache bind-tools netcat-openbsd bash curl
|
|
||||||
|
|
||||||
# The run script is what actually runs our func tests
|
|
||||||
COPY run.sh /app/run.sh
|
|
||||||
COPY run-tests.py /app/run-tests.py
|
|
||||||
|
|
||||||
RUN chmod a+x /app/run.sh && chmod a+x /app/run-tests.py
|
|
||||||
|
|
||||||
# Copy over the functional test directory; this must have been copied into the build context prior to this build!
|
|
||||||
COPY --from=gitcheckout /vinyldns/modules/api/functional_test/ /app/
|
|
||||||
|
|
||||||
# Install our func test requirements
|
|
||||||
RUN pip install --index-url https://pypi.python.org/simple/ -r /app/requirements.txt
|
|
||||||
|
|
||||||
ENV VINYLDNS_URL=""
|
|
||||||
ENV DNS_IP=""
|
|
||||||
ENV TEST_PATTERN="test_verify_production"
|
|
||||||
|
|
||||||
# set the entry point for the container to start vinyl, specify the config resource
|
|
||||||
ENTRYPOINT ["/app/run.sh"]
|
|
@ -1,16 +0,0 @@
|
|||||||
#!/usr/bin/env python
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
|
|
||||||
basedir = os.path.dirname(os.path.realpath(__file__))
|
|
||||||
|
|
||||||
report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports')
|
|
||||||
if not os.path.exists(report_dir):
|
|
||||||
os.system('mkdir -p ' + report_dir)
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
|
|
||||||
result = 1
|
|
||||||
result = pytest.main(list(sys.argv[1:]))
|
|
||||||
|
|
||||||
sys.exit(result)
|
|
@ -1,76 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
# Assume defaults of local docker-compose if not set
|
|
||||||
if [ -z "${VINYLDNS_URL}" ]; then
|
|
||||||
VINYLDNS_URL="http://vinyldns-api:9000"
|
|
||||||
fi
|
|
||||||
if [ -z "${DNS_IP}" ]; then
|
|
||||||
DNS_IP=$(dig +short vinyldns-bind9)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Assume all tests if not specified
|
|
||||||
if [ -z "${TEST_PATTERN}" ]; then
|
|
||||||
TEST_PATTERN=
|
|
||||||
else
|
|
||||||
TEST_PATTERN="-k ${TEST_PATTERN}"
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Waiting for API to be ready at ${VINYLDNS_URL} ..."
|
|
||||||
DATA=""
|
|
||||||
RETRY=60
|
|
||||||
SLEEP_DURATION=1
|
|
||||||
while [ "$RETRY" -gt 0 ]
|
|
||||||
do
|
|
||||||
DATA=$(curl -I -s "${VINYLDNS_URL}/ping" -o /dev/null -w "%{http_code}")
|
|
||||||
if [ $? -eq 0 ]
|
|
||||||
then
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep "$SLEEP_DURATION"
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Exceeded retries waiting for VinylDNS API to be ready, failing"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
echo "Running live tests against ${VINYLDNS_URL} and DNS server ${DNS_IP}"
|
|
||||||
|
|
||||||
cd /app
|
|
||||||
|
|
||||||
# Cleanup any errant cached file copies
|
|
||||||
find . -name "*.pyc" -delete
|
|
||||||
find . -name "__pycache__" -delete
|
|
||||||
|
|
||||||
ls -al
|
|
||||||
|
|
||||||
# -m plays havoc with -k, using variables is a headache, so doing this by hand
|
|
||||||
# run parallel tests first (not serial)
|
|
||||||
set -x
|
|
||||||
./run-tests.py live_tests -n2 -v -m "not skip_production and not serial" ${TEST_PATTERN} --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} --teardown=False
|
|
||||||
ret1=$?
|
|
||||||
|
|
||||||
# IMPORTANT! pytest exits with status code 5 if no tests are run, force that to 0
|
|
||||||
if [ "$ret1" = 5 ]; then
|
|
||||||
echo "No tests collected."
|
|
||||||
ret1=0
|
|
||||||
fi
|
|
||||||
|
|
||||||
./run-tests.py live_tests -n0 -v -m "not skip_production and serial" ${TEST_PATTERN} --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} --teardown=True
|
|
||||||
ret2=$?
|
|
||||||
if [ "$ret2" = 5 ]; then
|
|
||||||
echo "No tests collected."
|
|
||||||
ret2=0
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ $ret1 -ne 0 ] || [ $ret2 -ne 0 ]; then
|
|
||||||
exit 1
|
|
||||||
else
|
|
||||||
exit 0
|
|
||||||
fi
|
|
||||||
|
|
10
build/func-test-api.sh
Executable file
@ -0,0 +1,10 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
|
# This script will perform the functional tests for the API using Docker
|
||||||
|
#
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
cd "$DIR/../test/api/functional"
|
||||||
|
make
|
10
build/func-test-portal.sh
Executable file
@ -0,0 +1,10 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
#
|
||||||
|
# This script will perform the functional tests for the Portal using Docker
|
||||||
|
#
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
cd "$DIR/../test/portal/functional"
|
||||||
|
make
|
6
build/publish_docs.sh
Executable file
@ -0,0 +1,6 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
USE_TTY="" && test -t 1 && USE_TTY="-t"
|
||||||
|
docker run -i ${USE_TTY} --rm -e RUN_SERVICES=none -e SBT_MICROSITES_PUBLISH_TOKEN="${SBT_MICROSITES_PUBLISH_TOKEN}" -v "${DIR}/../:/build" vinyldns/build:base-build-docs /bin/bash -c "sbt ';project docs; publishMicrosite'"
|
27
build/run_all_tests.sh
Executable file
@ -0,0 +1,27 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
source "${DIR}/../utils/includes/terminal_colors.sh"
|
||||||
|
|
||||||
|
if [ ! -d "${DIR}/../artifacts" ] || [ ! -f "${DIR}/../artifacts/vinyldns-api.jar" ]; then
|
||||||
|
echo -e "${F_YELLOW}Warning:${F_RESET} you might want to run 'build/assemble_api.sh' first to improve performance"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "Running unit and integration tests..."
|
||||||
|
if ! "${DIR}/verify.sh"; then
|
||||||
|
echo "Error running unit and integration tests."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "Running API functional tests..."
|
||||||
|
if ! "${DIR}/func-test-api.sh"; then
|
||||||
|
echo "Error running API functional tests"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo "Running Portal functional tests..."
|
||||||
|
if ! "${DIR}/func-test-portal.sh"; then
|
||||||
|
echo "Error running Portal functional tests"
|
||||||
|
exit 1
|
||||||
|
fi
|
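run_all_tests.sh above simply chains the other build scripts, so the pieces can also be run on their own; a typical local sequence would be roughly (a sketch):

build/verify.sh              # unit and integration tests via test/api/integration
build/func-test-api.sh       # API functional tests via test/api/functional
build/func-test-portal.sh    # Portal functional tests via test/portal/functional
build/run_all_tests.sh       # or all of the above in one go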
7
build/sbt.sh
Executable file
@ -0,0 +1,7 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
cd "$DIR/../test/api/integration"
|
||||||
|
make build DOCKER_PARAMS="--build-arg SKIP_API_BUILD=true" && make run-local WITH_ARGS="sbt" DOCKER_PARAMS="-e RUN_SERVICES=none --env-file \"$DIR/../test/api/integration/.env.integration\""
|
@ -1,73 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
|
|
||||||
CURDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
|
||||||
|
|
||||||
function usage() {
|
|
||||||
printf "usage: start.sh [OPTIONS]\n\n"
|
|
||||||
printf "starts a specific version of vinyldns\n\n"
|
|
||||||
printf "options:\n"
|
|
||||||
printf "\t-v, --version: the version to start up; required\n"
|
|
||||||
}
|
|
||||||
|
|
||||||
function wait_for_url() {
|
|
||||||
URL=$1
|
|
||||||
DATA=""
|
|
||||||
RETRY="60"
|
|
||||||
echo "pinging $URL ..."
|
|
||||||
while [ "$RETRY" -gt 0 ]; do
|
|
||||||
DATA=$(curl -I -s "${URL}" -o /dev/null -w "%{http_code}")
|
|
||||||
if [ $? -eq 0 ]; then
|
|
||||||
echo "Succeeded in connecting to ${URL}!"
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep 1
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]; then
|
|
||||||
echo "Exceeded retries waiting for ${URL} to be ready, failing"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
}
|
|
||||||
|
|
||||||
# Default the build to -SNAPSHOT if not set
|
|
||||||
VINYLDNS_VERSION=
|
|
||||||
|
|
||||||
while [ "$1" != "" ]; do
|
|
||||||
case "$1" in
|
|
||||||
-v | --version)
|
|
||||||
VINYLDNS_VERSION="$2"
|
|
||||||
shift 2
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
usage
|
|
||||||
exit
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
done
|
|
||||||
|
|
||||||
if [ -z "$VINYLDNS_VERSION" ]; then
|
|
||||||
echo "VINYLDNS_VERSION not set"
|
|
||||||
usage
|
|
||||||
exit
|
|
||||||
else
|
|
||||||
export VINYLDNS_VERSION=$VINYLDNS_VERSION
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Actually starts up our docker images
|
|
||||||
docker-compose -f $CURDIR/docker/docker-compose.yml up --no-build -d api portal
|
|
||||||
|
|
||||||
# Waits for the URL to be available
|
|
||||||
wait_for_url "http://localhost:9001"
|
|
||||||
|
|
||||||
if [ $? -eq 0 ]; then
|
|
||||||
echo "VinylDNS started and available at http://localhost:9001"
|
|
||||||
exit 0
|
|
||||||
else
|
|
||||||
echo "VinylDNS startup failed!"
|
|
||||||
$CURDIR/stop.sh
|
|
||||||
exit 1
|
|
||||||
fi
|
|
@ -1,5 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
|
|
||||||
CURDIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
|
|
||||||
|
|
||||||
docker-compose -f $CURDIR/docker/docker-compose.yml down
|
|
7
build/verify.sh
Executable file
@ -0,0 +1,7 @@
|
|||||||
|
#!/usr/bin/env bash
|
||||||
|
set -euo pipefail
|
||||||
|
|
||||||
|
DIR=$(cd -P -- "$(dirname -- "$0")" && pwd -P)
|
||||||
|
|
||||||
|
cd "${DIR}/../test/api/integration"
|
||||||
|
make build && make run DOCKER_PARAMS="-v \"$(pwd)/../../../target:/build/target\"" WITH_ARGS="bash -c \"sbt ';validate' && sbt ';verify'\""
|
17
docker/.env
17
docker/.env
@ -1,17 +0,0 @@
|
|||||||
REST_PORT=9000
|
|
||||||
# Do not use quotes around the environment variables.
|
|
||||||
MYSQL_ROOT_PASSWORD=pass
|
|
||||||
# This is required as mysql is currently locked down to localhost
|
|
||||||
MYSQL_ROOT_HOST=%
|
|
||||||
# Host URL for queue
|
|
||||||
QUEUE_HOST=vinyldns-elasticmq
|
|
||||||
|
|
||||||
# portal settings
|
|
||||||
PORTAL_PORT=9001
|
|
||||||
PLAY_HTTP_SECRET_KEY=change-this-for-prod
|
|
||||||
VINYLDNS_BACKEND_URL=http://vinyldns-api:9000
|
|
||||||
SQS_ENDPOINT=http://vinyldns-localstack:19007
|
|
||||||
MYSQL_ENDPOINT=vinyldns-mysql:3306
|
|
||||||
USER_TABLE_NAME=users
|
|
||||||
USER_CHANGE_TABLE_NAME=userChange
|
|
||||||
TEST_LOGIN=true
|
|
@ -1,17 +0,0 @@
|
|||||||
REST_PORT=9000
|
|
||||||
# Do not use quotes around the environment variables.
|
|
||||||
MYSQL_ROOT_PASSWORD=pass
|
|
||||||
# This is required as mysql is currently locked down to localhost
|
|
||||||
MYSQL_ROOT_HOST=%
|
|
||||||
# Host URL for queue
|
|
||||||
QUEUE_HOST=vinyldns-elasticmq
|
|
||||||
|
|
||||||
# portal settings
|
|
||||||
PORTAL_PORT=9001
|
|
||||||
PLAY_HTTP_SECRET_KEY=change-this-for-prod
|
|
||||||
VINYLDNS_BACKEND_URL=http://vinyldns-api:9000
|
|
||||||
SQS_ENDPOINT=http://vinyldns-localstack:19007
|
|
||||||
MYSQL_ENDPOINT=vinyldns-mysql:3306
|
|
||||||
USER_TABLE_NAME=users
|
|
||||||
USER_CHANGE_TABLE_NAME=userChange
|
|
||||||
TEST_LOGIN=true
|
|
@ -1,18 +0,0 @@
|
|||||||
FROM znly/protoc:0.4.0 as pbcompile
|
|
||||||
|
|
||||||
# Needs to protoc compile modules/core/src/main/protobuf/VinylDNSProto.proto
|
|
||||||
COPY VinylDNSProto.proto /vinyldns/target/
|
|
||||||
|
|
||||||
# Create a compiled protobuf in /vinyldns/target
|
|
||||||
RUN protoc --version && \
|
|
||||||
protoc --proto_path=/vinyldns/target --python_out=/vinyldns/target /vinyldns/target/VinylDNSProto.proto
|
|
||||||
|
|
||||||
|
|
||||||
FROM python:3.7-alpine
|
|
||||||
|
|
||||||
RUN pip install mysql-connector-python
|
|
||||||
|
|
||||||
COPY --from=pbcompile /vinyldns/target /app/
|
|
||||||
COPY update-support-user.py /app/update-support-user.py
|
|
||||||
|
|
||||||
WORKDIR /app
|
|
@ -1,5 +0,0 @@
|
|||||||
.DS_Store
|
|
||||||
.dockerignore
|
|
||||||
.git
|
|
||||||
.gitignore
|
|
||||||
classes
|
|
@ -1,18 +0,0 @@
|
|||||||
FROM adoptopenjdk/openjdk11:jdk-11.0.7_10-alpine
|
|
||||||
|
|
||||||
RUN apk add --update --no-cache netcat-openbsd bash
|
|
||||||
|
|
||||||
# install the jar onto the server; assumes this Dockerfile is copied to target/scala-2.12 after a build
|
|
||||||
COPY vinyldns.jar /app/vinyldns-server.jar
|
|
||||||
COPY run.sh /app/run.sh
|
|
||||||
RUN chmod a+x /app/run.sh
|
|
||||||
|
|
||||||
COPY docker.conf /app/docker.conf
|
|
||||||
|
|
||||||
EXPOSE 9000
|
|
||||||
EXPOSE 2551
|
|
||||||
|
|
||||||
# set the entry point for the container to start vinyl, specify the config resource
|
|
||||||
ENTRYPOINT ["/app/run.sh"]
|
|
||||||
|
|
||||||
|
|
@ -1,12 +0,0 @@
|
|||||||
<configuration>
|
|
||||||
<!-- Test configuration, log to console so we can get the docker logs -->
|
|
||||||
<appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
|
|
||||||
<encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
|
|
||||||
<pattern>%d [test] %-5p | \(%logger{4}:%line\) | %msg %n</pattern>
|
|
||||||
</encoder>
|
|
||||||
</appender>
|
|
||||||
|
|
||||||
<root level="INFO">
|
|
||||||
<appender-ref ref="CONSOLE"/>
|
|
||||||
</root>
|
|
||||||
</configuration>
|
|
@ -1,51 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
# gets the docker-ized ip address, sets it to an environment variable
|
|
||||||
export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'`
|
|
||||||
|
|
||||||
export DYNAMO_ADDRESS="vinyldns-dynamodb"
|
|
||||||
export DYNAMO_PORT=8000
|
|
||||||
export JOURNAL_HOST="vinyldns-dynamodb"
|
|
||||||
export JOURNAL_PORT=8000
|
|
||||||
export MYSQL_ADDRESS="vinyldns-mysql"
|
|
||||||
export MYSQL_PORT=3306
|
|
||||||
export JDBC_USER=root
|
|
||||||
export JDBC_PASSWORD=pass
|
|
||||||
export DNS_ADDRESS="vinyldns-bind9"
|
|
||||||
export DYNAMO_KEY="local"
|
|
||||||
export DYNAMO_SECRET="local"
|
|
||||||
export DYNAMO_TABLE_PREFIX=""
|
|
||||||
export ELASTICMQ_ADDRESS="vinyldns-elasticmq"
|
|
||||||
export DYNAMO_ENDPOINT="http://${DYNAMO_ADDRESS}:${DYNAMO_PORT}"
|
|
||||||
export JDBC_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/vinyldns?user=${JDBC_USER}&password=${JDBC_PASSWORD}"
|
|
||||||
export JDBC_MIGRATION_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/?user=${JDBC_USER}&password=${JDBC_PASSWORD}"
|
|
||||||
|
|
||||||
# wait until mysql is ready...
|
|
||||||
echo 'Waiting for MYSQL to be ready...'
|
|
||||||
DATA=""
|
|
||||||
RETRY=40
|
|
||||||
SLEEP_DURATION=1
|
|
||||||
while [ "$RETRY" -gt 0 ]
|
|
||||||
do
|
|
||||||
DATA=$(nc -vzw1 vinyldns-mysql 3306)
|
|
||||||
if [ $? -eq 0 ]
|
|
||||||
then
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep "$SLEEP_DURATION"
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Exceeded retries waiting for MYSQL to be ready, failing"
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
echo "Starting up Vinyl..."
|
|
||||||
sleep 2
|
|
||||||
java -Djava.net.preferIPv4Stack=true -Dconfig.file=/app/docker.conf -Dakka.loglevel=INFO -Dlogback.configurationFile=test/logback.xml -jar /app/vinyldns-server.jar vinyldns.api.Boot
|
|
||||||
|
|
@ -1,227 +0,0 @@
|
|||||||
//
|
|
||||||
// Do any local configuration here
|
|
||||||
//
|
|
||||||
|
|
||||||
// Consider adding the 1918 zones here, if they are not used in your
|
|
||||||
// organization
|
|
||||||
//include "/etc/bind/zones.rfc1918";
|
|
||||||
|
|
||||||
key "vinyldns." {
|
|
||||||
algorithm hmac-md5;
|
|
||||||
secret "nzisn+4G2ldMn0q1CV3vsg==";
|
|
||||||
};
|
|
||||||
|
|
||||||
key "vinyldns-sha1." {
|
|
||||||
algorithm hmac-sha1;
|
|
||||||
secret "0nIhR1zS/nHUg2n0AIIUyJwXUyQ=";
|
|
||||||
};
|
|
||||||
|
|
||||||
key "vinyldns-sha224." {
|
|
||||||
algorithm hmac-sha224;
|
|
||||||
secret "yud/F666YjcnfqPSulHaYXrNObNnS1Jv+rX61A==";
|
|
||||||
};
|
|
||||||
|
|
||||||
key "vinyldns-sha256." {
|
|
||||||
algorithm hmac-sha256;
|
|
||||||
secret "wzLsDGgPRxFaC6z/9Bc0n1W4KrnmaUdFCgCn2+7zbPU=";
|
|
||||||
};
|
|
||||||
|
|
||||||
key "vinyldns-sha384." {
|
|
||||||
algorithm hmac-sha384;
|
|
||||||
secret "ne9jSUJ7PBGveM37aOX+ZmBXQgz1EqkbYBO1s5l/LNpjEno4OfYvGo1Lv1rnw3pE";
|
|
||||||
};
|
|
||||||
|
|
||||||
key "vinyldns-sha512." {
|
|
||||||
algorithm hmac-sha512;
|
|
||||||
secret "xfKA0DYb88tiUGND+cWddwUg3/SugYSsdvCfBOJ1jr8MEdgbVRyrlVDEXLsfTUGorQ3ShENdymw2yw+rTr+lwA==";
|
|
||||||
};
|
|
||||||
|
|
||||||
// Consider adding the 1918 zones here, if they are not used in your
|
|
||||||
// organization
|
|
||||||
//include "/etc/bind/zones.rfc1918";
|
|
||||||
zone "vinyldns" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/vinyldns.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "old-vinyldns2" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/old-vinyldns2.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "old-vinyldns3" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/old-vinyldns3.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "dummy" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/dummy.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "ok" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/ok.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "shared" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/shared.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "non.test.shared" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/non.test.shared.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "system-test" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/system-test.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "system-test-history" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/system-test-history.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "10.10.in-addr.arpa" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/10.10.in-addr.arpa";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "2.0.192.in-addr.arpa" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/2.0.192.in-addr.arpa";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "192/30.2.0.192.in-addr.arpa" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/192^30.2.0.192.in-addr.arpa";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/0.0.0.1.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "one-time" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/one-time.hosts";
|
|
||||||
allow-update { key "vinyldns."; key "vinyldns-sha1."; key "vinyldns-sha224."; key "vinyldns-sha256."; key "vinyldns-sha384."; key "vinyldns-sha512."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "sync-test" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/sync-test.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "invalid-zone" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/invalid-zone.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-zones-test-searched-1" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-zones-test-searched-1.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-zones-test-searched-2" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-zones-test-searched-2.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-zones-test-searched-3" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-zones-test-searched-3.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-zones-test-unfiltered-1" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-zones-test-unfiltered-1.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-zones-test-unfiltered-2" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-zones-test-unfiltered-2.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "one-time-shared" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/one-time-shared.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "parent.com" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/parent.com.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "child.parent.com" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/child.parent.com.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "example.com" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/example.com.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "dskey.example.com" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/dskey.example.com.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "not.loaded" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/not.loaded.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "zone.requires.review" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/zone.requires.review.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "list-records" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/list-records.hosts";
|
|
||||||
allow-update { key "vinyldns."; };
|
|
||||||
};
|
|
||||||
|
|
||||||
zone "open" {
|
|
||||||
type master;
|
|
||||||
file "/var/bind/open.hosts";
|
|
||||||
allow-update { any; };
|
|
||||||
allow-transfer { any; };
|
|
||||||
};
|
|
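The allow-update statements in the named.conf.local above are what VinylDNS's zone connections use; the same update path can be exercised by hand with nsupdate, using the "vinyldns." TSIG key defined at the top and the 19001 port mapping from the compose files (a sketch for local testing only):

nsupdate -y 'hmac-md5:vinyldns.:nzisn+4G2ldMn0q1CV3vsg==' <<'EOF'
server 127.0.0.1 19001
zone ok.
update add test.ok. 300 A 192.0.2.10
send
EOF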
@ -1,60 +0,0 @@
|
|||||||
version: "3.0"
|
|
||||||
services:
|
|
||||||
mysql:
|
|
||||||
image: "mysql:5.7"
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
container_name: "vinyldns-mysql"
|
|
||||||
ports:
|
|
||||||
- "19002:3306"
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
|
|
||||||
bind9:
|
|
||||||
image: "vinyldns/test-bind9:0.9.4"
|
|
||||||
container_name: "vinyldns-bind9"
|
|
||||||
ports:
|
|
||||||
- "19001:53/tcp"
|
|
||||||
- "19001:53/udp"
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
|
|
||||||
localstack:
|
|
||||||
image: localstack/localstack:0.10.4
|
|
||||||
container_name: "vinyldns-localstack"
|
|
||||||
ports:
|
|
||||||
- "19000:19000"
|
|
||||||
- "19006:19006"
|
|
||||||
- "19007:19007"
|
|
||||||
- "19009:19009"
|
|
||||||
environment:
|
|
||||||
- SERVICES=sns:19006,sqs:19007,route53:19009
|
|
||||||
- START_WEB=0
|
|
||||||
- HOSTNAME_EXTERNAL=vinyldns-localstack
|
|
||||||
|
|
||||||
# this file is copied into the target directory to get the jar! won't run in place as is!
|
|
||||||
api:
|
|
||||||
build:
|
|
||||||
context: api
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
container_name: "vinyldns-api"
|
|
||||||
ports:
|
|
||||||
- "9000:9000"
|
|
||||||
depends_on:
|
|
||||||
- mysql
|
|
||||||
- bind9
|
|
||||||
- localstack
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
|
|
||||||
functest:
|
|
||||||
build:
|
|
||||||
context: functest
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
environment:
|
|
||||||
- PAR_CPU=${PAR_CPU}
|
|
||||||
container_name: "vinyldns-functest"
|
|
||||||
depends_on:
|
|
||||||
- api
|
|
@ -1,62 +0,0 @@
|
|||||||
version: "3.0"
|
|
||||||
services:
|
|
||||||
mysql:
|
|
||||||
image: "mysql:5.7"
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
container_name: "vinyldns-mysql"
|
|
||||||
ports:
|
|
||||||
- "19002:3306"
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
|
|
||||||
bind9:
|
|
||||||
image: "vinyldns/bind9:0.0.4"
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
container_name: "vinyldns-bind9"
|
|
||||||
volumes:
|
|
||||||
- ./bind9/etc:/var/cache/bind/config
|
|
||||||
- ./bind9/zones:/var/cache/bind/zones
|
|
||||||
ports:
|
|
||||||
- "19001:53/tcp"
|
|
||||||
- "19001:53/udp"
|
|
||||||
logging:
|
|
||||||
driver: none
|
|
||||||
|
|
||||||
localstack:
|
|
||||||
image: localstack/localstack:0.10.4
|
|
||||||
container_name: "vinyldns-localstack"
|
|
||||||
ports:
|
|
||||||
- "19006:19006"
|
|
||||||
- "19007:19007"
|
|
||||||
- "19009:19009"
|
|
||||||
environment:
|
|
||||||
- SERVICES=sns:19006,sqs:19007,route53:19009
|
|
||||||
- START_WEB=0
|
|
||||||
- HOSTNAME_EXTERNAL=vinyldns-localstack
|
|
||||||
|
|
||||||
# this file is copied into the target directory to get the jar! won't run in place as is!
|
|
||||||
api:
|
|
||||||
build:
|
|
||||||
context: api
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
container_name: "vinyldns-api"
|
|
||||||
ports:
|
|
||||||
- "9000:9000"
|
|
||||||
depends_on:
|
|
||||||
- mysql
|
|
||||||
- bind9
|
|
||||||
- localstack
|
|
||||||
|
|
||||||
functest:
|
|
||||||
build:
|
|
||||||
context: functest
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
environment:
|
|
||||||
- PAR_CPU=${PAR_CPU}
|
|
||||||
container_name: "vinyldns-functest"
|
|
||||||
depends_on:
|
|
||||||
- api
|
|
@ -1,68 +0,0 @@
|
|||||||
version: "3.0"
|
|
||||||
services:
|
|
||||||
mysql:
|
|
||||||
image: "mysql:5.7"
|
|
||||||
env_file:
|
|
||||||
.env.quickstart
|
|
||||||
container_name: "vinyldns-mysql"
|
|
||||||
ports:
|
|
||||||
- "19002:3306"
|
|
||||||
|
|
||||||
bind9:
|
|
||||||
image: "vinyldns/bind9:0.0.4"
|
|
||||||
env_file:
|
|
||||||
.env.quickstart
|
|
||||||
container_name: "vinyldns-bind9"
|
|
||||||
ports:
|
|
||||||
- "19001:53/udp"
|
|
||||||
- "19001:53"
|
|
||||||
volumes:
|
|
||||||
- ./bind9/etc:/var/cache/bind/config
|
|
||||||
- ./bind9/zones:/var/cache/bind/zones
|
|
||||||
|
|
||||||
localstack:
|
|
||||||
image: localstack/localstack:0.10.4
|
|
||||||
container_name: "vinyldns-localstack"
|
|
||||||
ports:
|
|
||||||
- "19006:19006"
|
|
||||||
- "19007:19007"
|
|
||||||
- "19009:19009"
|
|
||||||
environment:
|
|
||||||
- SERVICES=sns:19006,sqs:19007,route53:19009
|
|
||||||
- START_WEB=0
|
|
||||||
- HOSTNAME_EXTERNAL=vinyldns-localstack
|
|
||||||
|
|
||||||
ldap:
|
|
||||||
image: rroemhild/test-openldap
|
|
||||||
container_name: "vinyldns-ldap"
|
|
||||||
ports:
|
|
||||||
- "19008:389"
|
|
||||||
|
|
||||||
api:
|
|
||||||
image: "vinyldns/api:${VINYLDNS_VERSION}"
|
|
||||||
env_file:
|
|
||||||
.env.quickstart
|
|
||||||
container_name: "vinyldns-api"
|
|
||||||
ports:
|
|
||||||
- "${REST_PORT}:${REST_PORT}"
|
|
||||||
volumes:
|
|
||||||
- ./api/docker.conf:/opt/docker/conf/application.conf
|
|
||||||
- ./api/logback.xml:/opt/docker/conf/logback.xml
|
|
||||||
depends_on:
|
|
||||||
- mysql
|
|
||||||
- bind9
|
|
||||||
- localstack
|
|
||||||
|
|
||||||
portal:
|
|
||||||
image: "vinyldns/portal:${VINYLDNS_VERSION}"
|
|
||||||
env_file:
|
|
||||||
.env.quickstart
|
|
||||||
ports:
|
|
||||||
- "${PORTAL_PORT}:${PORTAL_PORT}"
|
|
||||||
container_name: "vinyldns-portal"
|
|
||||||
volumes:
|
|
||||||
- ./portal/application.ini:/opt/docker/conf/application.ini
|
|
||||||
- ./portal/application.conf:/opt/docker/conf/application.conf
|
|
||||||
depends_on:
|
|
||||||
- api
|
|
||||||
- ldap
|
|
@ -1,41 +0,0 @@
|
|||||||
version: "3.0"
|
|
||||||
services:
|
|
||||||
mysql:
|
|
||||||
image: mysql:5.7<skipPull>
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
ports:
|
|
||||||
- "19002:3306"
|
|
||||||
|
|
||||||
bind9:
|
|
||||||
image: vinyldns/bind9:0.0.4<skipPull>
|
|
||||||
env_file:
|
|
||||||
.env
|
|
||||||
ports:
|
|
||||||
- "19001:53/udp"
|
|
||||||
- "19001:53"
|
|
||||||
volumes:
|
|
||||||
- ./bind9/etc:/var/cache/bind/config
|
|
||||||
- ./bind9/zones:/var/cache/bind/zones
|
|
||||||
|
|
||||||
localstack:
|
|
||||||
image: localstack/localstack:0.10.4<skipPull>
|
|
||||||
ports:
|
|
||||||
- "19006:19006"
|
|
||||||
- "19007:19007"
|
|
||||||
- "19009:19009"
|
|
||||||
environment:
|
|
||||||
- SERVICES=sns:19006,sqs:19007,route53:19009
|
|
||||||
- START_WEB=0
|
|
||||||
|
|
||||||
mail:
|
|
||||||
image: flaviovs/mock-smtp:0.0.2<skipPull>
|
|
||||||
ports:
|
|
||||||
- "19025:25"
|
|
||||||
volumes:
|
|
||||||
- ./email:/var/lib/mock-smtp
|
|
||||||
|
|
||||||
ldap:
|
|
||||||
image: rroemhild/test-openldap:latest<skipPull>
|
|
||||||
ports:
|
|
||||||
- "19008:389"
|
|
@ -1,10 +0,0 @@
|
|||||||
FROM alpine:3.2
|
|
||||||
FROM anapsix/alpine-java:8_server-jre
|
|
||||||
|
|
||||||
EXPOSE 9324
|
|
||||||
|
|
||||||
COPY run.sh /elasticmq/run.sh
|
|
||||||
COPY custom.conf /elasticmq/custom.conf
|
|
||||||
COPY elasticmq-server-0.13.2.jar /elasticmq/server.jar
|
|
||||||
|
|
||||||
ENTRYPOINT ["/elasticmq/run.sh"]
|
|
@ -1,22 +0,0 @@
|
|||||||
node-address {
|
|
||||||
protocol = http
|
|
||||||
host = "localhost"
|
|
||||||
host = ${?QUEUE_HOST}
|
|
||||||
port = 9324
|
|
||||||
context-path = ""
|
|
||||||
}
|
|
||||||
|
|
||||||
rest-sqs {
|
|
||||||
enabled = true
|
|
||||||
bind-port = 9324
|
|
||||||
bind-hostname = "0.0.0.0"
|
|
||||||
// Possible values: relaxed, strict
|
|
||||||
sqs-limits = relaxed
|
|
||||||
}
|
|
||||||
|
|
||||||
queues {
|
|
||||||
vinyldns {
|
|
||||||
defaultVisibilityTimeout = 10 seconds
|
|
||||||
receiveMessageWait = 0 seconds
|
|
||||||
}
|
|
||||||
}
|
|
@ -1,8 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
# gets the docker-ized ip address, sets it to an environment variable
|
|
||||||
export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'`
|
|
||||||
|
|
||||||
echo "APP HOST = ${APP_HOST}"
|
|
||||||
|
|
||||||
java -Djava.net.preferIPv4Stack=true -Dconfig.file=/elasticmq/custom.conf -jar /elasticmq/server.jar
|
|
2
docker/email/.gitignore
vendored
@ -1,2 +0,0 @@
|
|||||||
*
|
|
||||||
!.gitignore
|
|
@ -1,23 +0,0 @@
|
|||||||
FROM python:2.7.15-stretch
|
|
||||||
|
|
||||||
# Install dns utils so we can run dig
|
|
||||||
RUN apt-get update && apt-get install dnsutils -y
|
|
||||||
|
|
||||||
# The run script is what actually runs our func tests
|
|
||||||
COPY run.sh /app/run.sh
|
|
||||||
RUN chmod a+x /app/run.sh
|
|
||||||
|
|
||||||
COPY run-tests.py /app/run-tests.py
|
|
||||||
RUN chmod a+x /app/run-tests.py
|
|
||||||
|
|
||||||
# Copy over the functional test directory; this must have been copied into the build context prior to this build!
|
|
||||||
ADD functional_test /app
|
|
||||||
|
|
||||||
# Install our func test requirements
|
|
||||||
RUN pip install --index-url https://pypi.python.org/simple/ -r /app/requirements.txt
|
|
||||||
|
|
||||||
# Specifies how many CPUs to use for func tests; the more the better, or specify "auto" for optimal results
|
|
||||||
ENV PAR_CPU=2
|
|
||||||
|
|
||||||
# set the entry point for the container to start vinyl, specify the config resource
|
|
||||||
ENTRYPOINT ["/app/run.sh"]
|
|
@ -1,18 +0,0 @@
|
|||||||
#!/usr/bin/env python
|
|
||||||
import os
|
|
||||||
import sys
|
|
||||||
|
|
||||||
basedir = os.path.dirname(os.path.realpath(__file__))
|
|
||||||
|
|
||||||
report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports')
|
|
||||||
if not os.path.exists(report_dir):
|
|
||||||
os.system('mkdir -p ' + report_dir)
|
|
||||||
|
|
||||||
import pytest
|
|
||||||
|
|
||||||
result = 1
|
|
||||||
result = pytest.main(list(sys.argv[1:]))
|
|
||||||
|
|
||||||
sys.exit(result)
|
|
||||||
|
|
||||||
|
|
@ -1,81 +0,0 @@
|
|||||||
#!/usr/bin/env bash
|
|
||||||
|
|
||||||
# Assume defaults of local docker-compose if not set
|
|
||||||
if [ -z "${VINYLDNS_URL}" ]; then
|
|
||||||
VINYLDNS_URL="http://vinyldns-api:9000"
|
|
||||||
fi
|
|
||||||
if [ -z "${DNS_IP}" ]; then
|
|
||||||
DNS_IP=$(dig +short vinyldns-bind9)
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Assume all tests if not specified
|
|
||||||
if [ -z "${TEST_PATTERN}" ]; then
|
|
||||||
TEST_PATTERN=
|
|
||||||
else
|
|
||||||
TEST_PATTERN="-k ${TEST_PATTERN}"
|
|
||||||
fi
|
|
||||||
|
|
||||||
if [ -z "${PAR_CPU}" ]; then
|
|
||||||
export PAR_CPU=2
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo "Waiting for API to be ready at ${VINYLDNS_URL} ..."
|
|
||||||
DATA=""
|
|
||||||
RETRY=60
|
|
||||||
SLEEP_DURATION=1
|
|
||||||
while [ "$RETRY" -gt 0 ]
|
|
||||||
do
|
|
||||||
DATA=$(curl -I -s "${VINYLDNS_URL}/ping" -o /dev/null -w "%{http_code}")
|
|
||||||
if [ $? -eq 0 ]
|
|
||||||
then
|
|
||||||
break
|
|
||||||
else
|
|
||||||
echo "Retrying" >&2
|
|
||||||
|
|
||||||
let RETRY-=1
|
|
||||||
sleep "$SLEEP_DURATION"
|
|
||||||
|
|
||||||
if [ "$RETRY" -eq 0 ]
|
|
||||||
then
|
|
||||||
echo "Exceeded retries waiting for VinylDNS API to be ready, failing"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
done
|
|
||||||
|
|
||||||
echo "Running live tests against ${VINYLDNS_URL} and DNS server ${DNS_IP}"
|
|
||||||
|
|
||||||
cd /app
|
|
||||||
|
|
||||||
# Cleanup any errant cached file copies
|
|
||||||
find . -name "*.pyc" -delete
|
|
||||||
find . -name "__pycache__" -delete
|
|
||||||
|
|
||||||
result=0
|
|
||||||
# If PROD_ENV is not true, we are in a local docker environment so do not skip anything
|
|
||||||
if [ "${PROD_ENV}" = "true" ]; then
|
|
||||||
# -m plays havoc with -k, using variables is a headache, so doing this by hand
|
|
||||||
# run parallel tests first (not serial)
|
|
||||||
echo "./run-tests.py live_tests -n${PAR_CPU} -v -m \"not skip_production and not serial\" -v --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False"
|
|
||||||
./run-tests.py live_tests -n${PAR_CPU} -v -m "not skip_production and not serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False
|
|
||||||
result=$?
|
|
||||||
if [ $result -eq 0 ]; then
|
|
||||||
# run serial tests second (serial marker)
|
|
||||||
echo "./run-tests.py live_tests -n0 -v -m \"not skip_production and serial\" -v --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True"
|
|
||||||
./run-tests.py live_tests -n0 -v -m "not skip_production and serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True
|
|
||||||
result=$?
|
|
||||||
fi
|
|
||||||
else
|
|
||||||
# run parallel tests first (not serial)
|
|
||||||
echo "./run-tests.py live_tests -n${PAR_CPU} -v -m \"not serial\" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False"
|
|
||||||
./run-tests.py live_tests -n${PAR_CPU} -v -m "not serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=False
|
|
||||||
result=$?
|
|
||||||
if [ $result -eq 0 ]; then
|
|
||||||
# run serial tests second (serial marker)
|
|
||||||
echo "./run-tests.py live_tests -n0 -v -m \"serial\" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True"
|
|
||||||
./run-tests.py live_tests -n0 -v -m "serial" --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} ${TEST_PATTERN} --teardown=True
|
|
||||||
result=$?
|
|
||||||
fi
|
|
||||||
fi
|
|
||||||
|
|
||||||
exit $result
|
|
@ -1,59 +0,0 @@
|
|||||||
LDAP {
|
|
||||||
# For OpenLDAP, this would be a full DN to the admin for LDAP / user that can see all users
|
|
||||||
user = "cn=admin,dc=planetexpress,dc=com"
|
|
||||||
|
|
||||||
# Password for the admin account
|
|
||||||
password = "GoodNewsEveryone"
|
|
||||||
|
|
||||||
# Keep this as an empty string for OpenLDAP
|
|
||||||
domain = ""
|
|
||||||
|
|
||||||
# This will be the name of the LDAP field that carries the user's login id (what they enter in the username in login form)
|
|
||||||
userNameAttribute = "uid"
|
|
||||||
|
|
||||||
# For organization, leave empty for this demo, the domainName is what matters, and that is the LDAP structure
|
|
||||||
# to search for users that require login
|
|
||||||
searchBase = [
|
|
||||||
{organization = "", domainName = "ou=people,dc=planetexpress,dc=com"},
|
|
||||||
]
|
|
||||||
context {
|
|
||||||
initialContextFactory = "com.sun.jndi.ldap.LdapCtxFactory"
|
|
||||||
securityAuthentication = "simple"
|
|
||||||
|
|
||||||
# Note: The following assumes a purely docker setup, using container_name = vinyldns-ldap
|
|
||||||
providerUrl = "ldap://vinyldns-ldap:389"
|
|
||||||
}
|
|
||||||
|
|
||||||
# This is only needed if keeping vinyldns user store in sync with ldap (to auto lock out users who left your
|
|
||||||
# company for example)
|
|
||||||
user-sync {
|
|
||||||
enabled = false
|
|
||||||
hours-polling-interval = 1
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# Note: This MUST match the API or strange errors will ensue, NoOpCrypto should not be used for production
|
|
||||||
crypto {
|
|
||||||
type = "vinyldns.core.crypto.NoOpCrypto"
|
|
||||||
}
|
|
||||||
|
|
||||||
http.port = 9001
|
|
||||||
|
|
||||||
data-stores = ["mysql"]
|
|
||||||
|
|
||||||
# Note: The default mysql settings assume a local docker compose setup with mysql named vinyldns-mysql
|
|
||||||
# follow the configuration guide to point to your mysql
|
|
||||||
# Only 3 repositories are needed for portal: user, task, user-change
|
|
||||||
mysql {
|
|
||||||
repositories {
|
|
||||||
user {
|
|
||||||
}
|
|
||||||
task {
|
|
||||||
}
|
|
||||||
user-change {
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
# You generate this yourself following https://www.playframework.com/documentation/2.7.x/ApplicationSecret
|
|
||||||
play.http.secret.key = "rpkTGtoJvLIdIV?WU=0@yW^x:pcEGyAt`^p/P3G0fpbj9:uDnD@caSjCDqA0@tB="
|
|
@ -1,3 +0,0 @@
|
|||||||
# uncomment to set custom trustStore
|
|
||||||
# don't forget to mount trustStore to docker image
|
|
||||||
#-Djavax.net.ssl.trustStore=/opt/docker/conf/trustStore.jks
|
|
Binary file not shown. (Before: 55 KiB)
BIN
img/vinyldns_overview.png
Normal file
Binary file not shown. (After: 91 KiB)
@ -1,12 +0,0 @@
#!/bin/bash -e

if [ ! -d "./.virtualenv" ]; then
    echo "Creating virtualenv..."
    virtualenv --clear --python="$(which python2.7)" ./.virtualenv
fi

if ! diff ./requirements.txt ./.virtualenv/requirements.txt &> /dev/null; then
    echo "Installing dependencies..."
    .virtualenv/bin/python ./.virtualenv/bin/pip install --index-url https://pypi.python.org/simple/ -r ./requirements.txt
    cp ./requirements.txt ./.virtualenv/
fi
@ -1,103 +0,0 @@
import logging

from datetime import datetime
from hashlib import sha256

from boto.dynamodb2.layer1 import DynamoDBConnection

import requests.compat as urlparse

logger = logging.getLogger(__name__)

__all__ = [u'BotoRequestSigner']


class BotoRequestSigner(object):

    def __init__(self, index_url, access_key, secret_access_key):
        url = urlparse.urlparse(index_url)
        self.boto_connection = DynamoDBConnection(
            host = url.hostname,
            port = url.port,
            aws_access_key_id = access_key,
            aws_secret_access_key = secret_access_key,
            is_secure = False)

    @staticmethod
    def canonical_date(headers):
        """Derive canonical date (ISO 8601 string) from headers if possible,
        or synthesize it if no usable header exists."""
        iso_format = u'%Y%m%dT%H%M%SZ'
        http_format = u'%a, %d %b %Y %H:%M:%S GMT'

        def try_parse(date_string, format):
            if date_string is None:
                return None
            try:
                return datetime.strptime(date_string, format)
            except ValueError:
                return None

        amz_date = try_parse(headers.get(u'X-Amz-Date'), iso_format)
        http_date = try_parse(headers.get(u'Date'), http_format)
        fallback_date = datetime.utcnow()

        date = next(d for d in [amz_date, http_date, fallback_date] if d is not None)
        return date.strftime(iso_format)

    def build_auth_header(self, method, path, headers, body, params=None):
        """Construct an Authorization header, using boto."""

        request = self.boto_connection.build_base_http_request(
            method=method,
            path=path,
            auth_path=path,
            headers=headers,
            data=body,
            params=params or {})

        auth_handler = self.boto_connection._auth_handler

        timestamp = BotoRequestSigner.canonical_date(headers)
        request.timestamp = timestamp[0:8]

        request.region_name = u'us-east-1'
        request.service_name = u'VinylDNS'

        credential_scope = u'/'.join([request.timestamp, request.region_name, request.service_name, u'aws4_request'])

        canonical_request = auth_handler.canonical_request(request)
        split_request = canonical_request.split('\n')

        if params != {} and split_request[2] == '':
            split_request[2] = self.generate_canonical_query_string(params)
            canonical_request = '\n'.join(split_request)
        hashed_request = sha256(canonical_request.encode(u'utf-8')).hexdigest()

        string_to_sign = u'\n'.join([u'AWS4-HMAC-SHA256', timestamp, credential_scope, hashed_request])
        signature = auth_handler.signature(request, string_to_sign)
        headers_to_sign = auth_handler.headers_to_sign(request)

        auth_header = u','.join([
            u'AWS4-HMAC-SHA256 Credential=%s' % auth_handler.scope(request),
            u'SignedHeaders=%s' % auth_handler.signed_headers(headers_to_sign),
            u'Signature=%s' % signature])

        return auth_header

    @staticmethod
    def generate_canonical_query_string(params):
        """
        Using in place of canonical_query_string from boto/auth.py to support POST requests with query parameters
        """
        post_params = []
        for param in sorted(params):
            value = params[param].encode('utf-8')
            import urllib
            try:
                post_params.append('%s=%s' % (urllib.parse.quote(param, safe='-_.~'),
                                              urllib.parse.quote(value, safe='-_.~')))
            except:
                post_params.append('%s=%s' % (urllib.quote(param, safe='-_.~'),
                                              urllib.quote(value, safe='-_.~')))
        return '&'.join(post_params)
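For context, a minimal sketch of how the signer removed above might be exercised; the endpoint and credentials are placeholders, not values taken from this diff:

    # Illustrative only: build a SigV4-style Authorization header for a VinylDNS API call.
    signer = BotoRequestSigner('http://localhost:9000', 'okAccessKey', 'okSecretKey')
    headers = {u'Content-Type': u'application/json', u'X-Amz-Date': u'20190101T000000Z'}
    # build_auth_header(method, path, headers, body, params=None) as defined above
    headers[u'Authorization'] = signer.build_auth_header(u'GET', u'/zones', headers, body='')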
@ -1,103 +0,0 @@
import os

from vinyldns_context import VinylDNSTestContext


def pytest_addoption(parser):
    """
    Adds additional options that we can parse when we run the tests, stores them in the parser / py.test context
    """
    parser.addoption("--url", dest="url", action="store", default="http://localhost:9000",
                     help="URL for application to root")
    parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001",
                     help="The ip address for the dns server to use for the tests")
    parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.",
                     help="The zone name that will be used for testing")
    parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.",
                     help="The name of the key used to sign updates for the zone")
    parser.addoption("--dns-key", dest="dns_key", action="store", default="nzisn+4G2ldMn0q1CV3vsg==",
                     help="The tsig key")

    # optional
    parser.addoption("--basic-auth", dest="basic_auth_creds",
                     help="Basic auth credentials in 'user:pass' format")
    parser.addoption("--basic-auth-realm", dest="basic_auth_realm",
                     help="Basic auth realm to use with credentials supplied by \"-b\"")
    parser.addoption("--iauth-creds", dest="iauth_creds",
                     help="Intermediary auth (codebig style) in 'key:secret' format")
    parser.addoption("--oauth-creds", dest="oauth_creds",
                     help="OAuth credentials in consumer:secret format")
    parser.addoption("--environment", dest="cim_env", action="store", default="test",
                     help="CIM_ENV that we are testing against.")
    parser.addoption("--teardown", dest="teardown", action="store", default="True",
                     help="True | False - Whether to teardown the test fixture, or leave it for another run")


def pytest_configure(config):
    """
    Loads the test context since we are no longer using run.py
    """

    # Monkey patch ssl so we do not verify ssl certs
    import ssl
    try:
        _create_unverified_https_context = ssl._create_unverified_context
    except AttributeError:
        # Legacy Python that doesn't verify HTTPS certificates by default
        pass
    else:
        # Handle target environment that doesn't support HTTPS verification
        ssl._create_default_https_context = _create_unverified_https_context

    url = config.getoption("url", default="http://localhost:9000/")
    if not url.endswith('/'):
        url += '/'

    import sys
    sys.dont_write_bytecode = True

    VinylDNSTestContext.configure(config.getoption("dns_ip"),
                                  config.getoption("dns_zone"),
                                  config.getoption("dns_key_name"),
                                  config.getoption("dns_key"),
                                  config.getoption("url"),
                                  config.getoption("teardown"))

    from shared_zone_test_context import SharedZoneTestContext
    if not hasattr(config, 'workerinput'):
        print 'Master, standing up the test fixture...'
        # use the fixture file if it exists
        if os.path.isfile('tmp.out'):
            print 'Fixture file found, assuming the fixture file'
            SharedZoneTestContext('tmp.out')
        else:
            print 'No fixture file found, loading a new test fixture'
            ctx = SharedZoneTestContext()
            ctx.out_fixture_file("tmp.out")
    else:
        print 'This is a worker'


def pytest_unconfigure(config):
    # this attribute is only set on workers
    print 'Master exiting...'
    if not hasattr(config, 'workerinput') and VinylDNSTestContext.teardown:
        print 'Master cleaning up...'
        from shared_zone_test_context import SharedZoneTestContext
        ctx = SharedZoneTestContext('tmp.out')
        ctx.tear_down()
        os.remove('tmp.out')
    else:
        print 'Worker exiting...'


def pytest_report_header(config):
    """
    Overrides the test result header like we do in pyfunc test
    """
    header = "Testing against CIM_ENV " + config.getoption("cim_env")
    header += "\r\nURL: " + config.getoption("url")
    header += "\r\nRunning from directory " + os.getcwd()
    header += '\r\nTest shim directory ' + os.path.dirname(__file__)
    header += "\r\nDNS IP: " + config.getoption("dns_ip")
    return header
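For reference, a minimal sketch of driving the functional tests through pytest with the options registered above; the marker and test directory mirror the run-tests.py call earlier in this diff, the option values are just the defaults from pytest_addoption, and -n0 assumes pytest-xdist is available:

    # Illustrative only: same options the run-tests.py wrapper passes, invoked via pytest directly.
    import pytest

    pytest.main([
        "live_tests", "-n0", "-v", "-m", "serial",
        "--url=http://localhost:9000",
        "--dns-ip=127.0.0.1:19001",
        "--teardown=True",
    ])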
@ -1,156 +0,0 @@
from hamcrest import *
from utils import *


@pytest.mark.serial
@pytest.mark.manual_batch_review
def test_approve_pending_batch_change_success(shared_zone_test_context):
    """
    Test approving a batch change succeeds for a support user
    """
    client = shared_zone_test_context.ok_vinyldns_client
    approver = shared_zone_test_context.support_user_client
    batch_change_input = {
        "changes": [
            get_change_A_AAAA_json("test-approve-success.not.loaded.", address="4.3.2.1"),
            get_change_A_AAAA_json("needs-review.not.loaded.", address="4.3.2.1"),
            get_change_A_AAAA_json("zone-name-flagged-for-manual-review.zone.requires.review.")
        ],
        "ownerGroupId": shared_zone_test_context.ok_group['id']
    }

    to_delete = []
    to_disconnect = None
    try:
        result = client.create_batch_change(batch_change_input, status=202)
        get_batch = client.get_batch_change(result['id'])
        assert_that(get_batch['status'], is_('PendingReview'))
        assert_that(get_batch['approvalStatus'], is_('PendingReview'))
        assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
        assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
        assert_that(get_batch['changes'][1]['status'], is_('NeedsReview'))
        assert_that(get_batch['changes'][1]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview'))
        assert_that(get_batch['changes'][2]['status'], is_('NeedsReview'))
        assert_that(get_batch['changes'][2]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview'))

        # need to create the zone so the change can succeed
        zone = {
            'name': 'not.loaded.',
            'email': 'test@test.com',
            'adminGroupId': shared_zone_test_context.ok_group['id'],
            'backendId': 'func-test-backend',
            'shared': True
        }
        zone_create = approver.create_zone(zone, status=202)
        to_disconnect = zone_create['zone']
        approver.wait_until_zone_active(to_disconnect['id'])

        approved = approver.approve_batch_change(result['id'], status=202)
        completed_batch = client.wait_until_batch_change_completed(approved)
        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]

        assert_that(completed_batch['status'], is_('Complete'))
        for change in completed_batch['changes']:
            assert_that(change['status'], is_('Complete'))
            assert_that(len(change['validationErrors']), is_(0))
        assert_that(completed_batch['approvalStatus'], is_('ManuallyApproved'))
        assert_that(completed_batch['reviewerId'], is_('support-user-id'))
        assert_that(completed_batch['reviewerUserName'], is_('support-user'))
        assert_that(completed_batch, has_key('reviewTimestamp'))
        assert_that(get_batch, not(has_key('cancelledTimestamp')))
    finally:
        clear_zoneid_rsid_tuple_list(to_delete, client)
        if to_disconnect:
            approver.abandon_zones(to_disconnect['id'], status=202)


@pytest.mark.manual_batch_review
def test_approve_pending_batch_change_fails_if_there_are_still_errors(shared_zone_test_context):
    """
    Test approving a batch change fails if there are still errors
    """
    client = shared_zone_test_context.ok_vinyldns_client
    approver = shared_zone_test_context.support_user_client
    batch_change_input = {
        "changes": [
            get_change_A_AAAA_json("needs-review.nonexistent.", address="4.3.2.1"),
            get_change_A_AAAA_json("zone.does.not.exist.")
        ],
        "ownerGroupId": shared_zone_test_context.ok_group['id']
    }
    complete_rs = None

    try:
        result = client.create_batch_change(batch_change_input, status=202)
        get_batch = client.get_batch_change(result['id'])
        assert_that(get_batch['status'], is_('PendingReview'))
        assert_that(get_batch['approvalStatus'], is_('PendingReview'))
        assert_that(get_batch['changes'][0]['status'], is_('NeedsReview'))
        assert_that(get_batch['changes'][0]['validationErrors'][0]['errorType'], is_('RecordRequiresManualReview'))
        assert_that(get_batch['changes'][1]['status'], is_('NeedsReview'))
        assert_that(get_batch['changes'][1]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))

        approval_response = approver.approve_batch_change(result['id'], status=400)
        assert_that((approval_response[0]['errors'][0]), contains_string('Zone Discovery Failed'))
        assert_that((approval_response[1]['errors'][0]), contains_string('Zone Discovery Failed'))

        updated_batch = client.get_batch_change(result['id'], status=200)
        assert_that(updated_batch['status'], is_('PendingReview'))
        assert_that(updated_batch['approvalStatus'], is_('PendingReview'))
        assert_that(updated_batch, not(has_key('reviewerId')))
        assert_that(updated_batch, not(has_key('reviewerUserName')))
        assert_that(updated_batch, not(has_key('reviewTimestamp')))
        assert_that(updated_batch, not(has_key('cancelledTimestamp')))
        assert_that(updated_batch['changes'][0]['status'], is_('NeedsReview'))
        assert_that(updated_batch['changes'][0]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
        assert_that(updated_batch['changes'][1]['status'], is_('NeedsReview'))
        assert_that(updated_batch['changes'][1]['validationErrors'][0]['errorType'], is_('ZoneDiscoveryError'))
    finally:
        if complete_rs:
            delete_result = client.delete_recordset(complete_rs['zoneId'], complete_rs['id'], status=202)
            client.wait_until_recordset_change_status(delete_result, 'Complete')


@pytest.mark.manual_batch_review
def test_approve_batch_change_with_invalid_batch_change_id_fails(shared_zone_test_context):
    """
    Test approving a batch change with invalid batch change ID
    """

    client = shared_zone_test_context.ok_vinyldns_client

    error = client.approve_batch_change("some-id", status=404)
    assert_that(error, is_("Batch change with id some-id cannot be found"))


@pytest.mark.manual_batch_review
def test_approve_batch_change_with_comments_exceeding_max_length_fails(shared_zone_test_context):
    """
    Test approving a batch change with comments exceeding 1024 characters fails
    """

    client = shared_zone_test_context.ok_vinyldns_client
    approve_batch_change_input = {
        "reviewComment": "a"*1025
    }
    errors = client.approve_batch_change("some-id", approve_batch_change_input, status=400)['errors']
    assert_that(errors, contains_inanyorder("Comment length must not exceed 1024 characters."))


@pytest.mark.manual_batch_review
def test_approve_batch_change_fails_with_forbidden_error_for_non_system_admins(shared_zone_test_context):
    """
    Test approving a batch change if the reviewer is not a super user or support user
    """
    client = shared_zone_test_context.ok_vinyldns_client
    batch_change_input = {
        "changes": [
            get_change_A_AAAA_json("no-owner-group-id.ok.", address="4.3.2.1")
        ]
    }
    to_delete = []

    try:
        result = client.create_batch_change(batch_change_input, status=202)
        completed_batch = client.wait_until_batch_change_completed(result)
        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
        error = client.approve_batch_change(completed_batch['id'], status=403)
        assert_that(error, is_("User does not have access to item " + completed_batch['id']))
    finally:
        clear_zoneid_rsid_tuple_list(to_delete, client)
@ -1,7 +0,0 @@
import pytest


@pytest.fixture(scope="session")
def shared_zone_test_context(request):
    from shared_zone_test_context import SharedZoneTestContext
    return SharedZoneTestContext("tmp.out")
@ -1,106 +0,0 @@
import time
from hamcrest import *
from utils import *
from vinyldns_context import VinylDNSTestContext
from vinyldns_python import VinylDNSClient


class ListBatchChangeSummariesTestContext():
    def __init__(self, shared_zone_test_context):
        # Note: this fixture is designed so it will load summaries instead of creating them
        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listBatchSummariesAccessKey',
                                     'listBatchSummariesSecretKey')
        self.completed_changes = []
        self.to_delete = None

        acl_rule = generate_acl_rule('Write', userId='list-batch-summaries-id')
        add_ok_acl_rules(shared_zone_test_context, [acl_rule])

        initial_db_check = self.client.list_batch_change_summaries(status=200)
        self.group = self.client.get_group('list-summaries-group', status=200)

        batch_change_input_one = {
            "comments": "first",
            "changes": [
                get_change_CNAME_json("test-first.ok.", cname="one.")
            ]
        }

        batch_change_input_two = {
            "comments": "second",
            "changes": [
                get_change_CNAME_json("test-second.ok.", cname="two.")
            ]
        }

        batch_change_input_three = {
            "comments": "last",
            "changes": [
                get_change_CNAME_json("test-last.ok.", cname="three.")
            ]
        }

        batch_change_inputs = [batch_change_input_one, batch_change_input_two, batch_change_input_three]

        record_set_list = []
        self.completed_changes = []

        if len(initial_db_check['batchChanges']) == 0:
            print "\r\n!!! CREATING NEW SUMMARIES"
            # make some batch changes
            for input in batch_change_inputs:
                change = self.client.create_batch_change(input, status=202)

                if 'Review' not in change['status']:
                    completed = self.client.wait_until_batch_change_completed(change)
                    assert_that(completed["comments"], equal_to(input["comments"]))
                    record_set_list += [(change['zoneId'], change['recordSetId']) for change in completed['changes']]

                # sleep for consistent ordering of timestamps, must be at least one second apart
                time.sleep(1)

            self.completed_changes = self.client.list_batch_change_summaries(status=200)['batchChanges']
            assert_that(len(self.completed_changes), equal_to(len(batch_change_inputs)))
        else:
            print "\r\n!!! USING EXISTING SUMMARIES"
            self.completed_changes = initial_db_check['batchChanges']
        self.to_delete = set(record_set_list)

    def tear_down(self, shared_zone_test_context):
        for result_rs in self.to_delete:
            delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(result_rs[0], result_rs[1],
                                                                                         status=202)
            shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete')
        clear_ok_acl_rules(shared_zone_test_context)

    def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False,
                                                   max_items=100, approval_status=False):
        # validate fields
        if next_id:
            assert_that(summaries_page, has_key('nextId'))
        else:
            assert_that(summaries_page, is_not(has_key('nextId')))
        if start_from:
            assert_that(summaries_page['startFrom'], is_(start_from))
        else:
            assert_that(summaries_page, is_not(has_key('startFrom')))
        if approval_status:
            assert_that(summaries_page, has_key('approvalStatus'))
        else:
            assert_that(summaries_page, is_not(has_key('approvalStatus')))
        assert_that(summaries_page['maxItems'], is_(max_items))

        # validate actual page
        list_batch_change_summaries = summaries_page['batchChanges']
        assert_that(list_batch_change_summaries, has_length(size))

        for i, summary in enumerate(list_batch_change_summaries):
            assert_that(summary["userId"], equal_to("list-batch-summaries-id"))
            assert_that(summary["userName"], equal_to("list-batch-summaries-user"))
            assert_that(summary["comments"], equal_to(self.completed_changes[i + start_from]["comments"]))
            assert_that(summary["createdTimestamp"],
                        equal_to(self.completed_changes[i + start_from]["createdTimestamp"]))
            assert_that(summary["totalChanges"], equal_to(self.completed_changes[i + start_from]["totalChanges"]))
            assert_that(summary["status"], equal_to(self.completed_changes[i + start_from]["status"]))
            assert_that(summary["id"], equal_to(self.completed_changes[i + start_from]["id"]))
            assert_that(summary, is_not(has_key("reviewerId")))
@ -1,35 +0,0 @@
from hamcrest import *
from utils import *
from vinyldns_context import VinylDNSTestContext
from vinyldns_python import VinylDNSClient


class ListGroupsTestContext(object):
    def __init__(self):
        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, access_key='listGroupAccessKey',
                                     secret_key='listGroupSecretKey')
        self.support_user_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'supportUserAccessKey',
                                                  'supportUserSecretKey')

    def build(self):
        try:
            for runner in range(0, 50):
                new_group = {
                    'name': "test-list-my-groups-{0:0>3}".format(runner),
                    'email': 'test@test.com',
                    'members': [{'id': 'list-group-user'}],
                    'admins': [{'id': 'list-group-user'}]
                }
                self.client.create_group(new_group, status=200)

        except:
            # teardown if there was any issue in setup
            try:
                self.tear_down()
            except:
                pass
            raise

    def tear_down(self):
        clear_zones(self.client)
        clear_groups(self.client)
@ -1,89 +0,0 @@
from hamcrest import *
from utils import *
from vinyldns_context import VinylDNSTestContext
from vinyldns_python import VinylDNSClient


class ListRecordSetsTestContext(object):
    def __init__(self):
        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listRecordsAccessKey', 'listRecordsSecretKey')
        self.zone = None
        self.all_records = []
        self.group = None
        get_zone = self.client.get_zone_by_name('list-records.', status=(200, 404))
        if get_zone and 'zone' in get_zone:
            self.zone = get_zone['zone']
            self.all_records = self.client.list_recordsets_by_zone(self.zone['id'])['recordSets']
        my_groups = self.client.list_my_groups(group_name_filter='list-records-group')
        if my_groups and 'groups' in my_groups and len(my_groups['groups']) > 0:
            self.group = my_groups['groups'][0]

    def build(self):
        # Only call this if the context needs to be built
        self.tear_down()
        group = {
            'name': 'list-records-group',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'list-records-user'}],
            'admins': [{'id': 'list-records-user'}]
        }
        self.group = self.client.create_group(group, status=200)
        zone_change = self.client.create_zone(
            {
                'name': 'list-records.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': self.group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)
        self.client.wait_until_zone_active(zone_change[u'zone'][u'id'])
        self.zone = zone_change[u'zone']
        self.all_records = self.client.list_recordsets_by_zone(self.zone['id'])['recordSets']

    def tear_down(self):
        clear_zones(self.client)
        clear_groups(self.client)

    def check_recordsets_page_accuracy(self, list_results_page, size, offset, nextId=False, startFrom=False, maxItems=100, recordTypeFilter=False, nameSort="ASC"):
        # validate fields
        if nextId:
            assert_that(list_results_page, has_key('nextId'))
        else:
            assert_that(list_results_page, is_not(has_key('nextId')))
        if startFrom:
            assert_that(list_results_page['startFrom'], is_(startFrom))
        else:
            assert_that(list_results_page, is_not(has_key('startFrom')))
        if recordTypeFilter:
            assert_that(list_results_page, has_key('recordTypeFilter'))
        else:
            assert_that(list_results_page, is_not(has_key('recordTypeFilter')))
        assert_that(list_results_page['maxItems'], is_(maxItems))
        assert_that(list_results_page['nameSort'], is_(nameSort))

        # validate actual page
        list_results_recordsets_page = list_results_page['recordSets']
        assert_that(list_results_recordsets_page, has_length(size))
        for i in range(len(list_results_recordsets_page)):
            assert_that(list_results_recordsets_page[i]['name'], is_(self.all_records[i+offset]['name']))
            verify_recordset(list_results_recordsets_page[i], self.all_records[i+offset])
            assert_that(list_results_recordsets_page[i]['accessLevel'], is_('Delete'))

    def check_recordsets_parameters(self, list_results_page, nextId=False, startFrom=False, maxItems=100, recordTypeFilter=False, nameSort="ASC"):
        # validate fields
        if nextId:
            assert_that(list_results_page, has_key('nextId'))
        else:
            assert_that(list_results_page, is_not(has_key('nextId')))
        if startFrom:
            assert_that(list_results_page['startFrom'], is_(startFrom))
        else:
            assert_that(list_results_page, is_not(has_key('startFrom')))
        if recordTypeFilter:
            assert_that(list_results_page, has_key('recordTypeFilter'))
        else:
            assert_that(list_results_page, is_not(has_key('recordTypeFilter')))
        assert_that(list_results_page['maxItems'], is_(maxItems))
        assert_that(list_results_page['nameSort'], is_(nameSort))
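A minimal sketch of how a test might drive the helper removed above; it only uses calls that appear in that file and assumes the 'list-records.' fixture zone has already been built:

    # Illustrative only: fetch one unpaged listing and validate it with the context's checker.
    ctx = ListRecordSetsTestContext()
    page = ctx.client.list_recordsets_by_zone(ctx.zone['id'])
    ctx.check_recordsets_page_accuracy(page, size=len(ctx.all_records), offset=0)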
@ -1,78 +0,0 @@
from hamcrest import *
from utils import *
from vinyldns_context import VinylDNSTestContext
from vinyldns_python import VinylDNSClient


class ListZonesTestContext(object):
    def __init__(self):
        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listZonesAccessKey', 'listZonesSecretKey')

    def build(self):
        self.tear_down()
        group = {
            'name': 'list-zones-group',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'list-zones-user'}],
            'admins': [{'id': 'list-zones-user'}]
        }
        list_zones_group = self.client.create_group(group, status=200)
        search_zone_1_change = self.client.create_zone(
            {
                'name': 'list-zones-test-searched-1.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': list_zones_group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)

        search_zone_2_change = self.client.create_zone(
            {
                'name': 'list-zones-test-searched-2.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': list_zones_group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)

        search_zone_3_change = self.client.create_zone(
            {
                'name': 'list-zones-test-searched-3.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': list_zones_group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)

        non_search_zone_1_change = self.client.create_zone(
            {
                'name': 'list-zones-test-unfiltered-1.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': list_zones_group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)

        non_search_zone_2_change = self.client.create_zone(
            {
                'name': 'list-zones-test-unfiltered-2.',
                'email': 'test@test.com',
                'shared': False,
                'adminGroupId': list_zones_group['id'],
                'isTest': True,
                'backendId': 'func-test-backend'
            }, status=202)

        zone_changes = [search_zone_1_change, search_zone_2_change, search_zone_3_change, non_search_zone_1_change,
                        non_search_zone_2_change]
        for change in zone_changes:
            self.client.wait_until_zone_active(change[u'zone'][u'id'])

    def tear_down(self):
        clear_zones(self.client)
        clear_groups(self.client)
@ -1,240 +0,0 @@
import json

from hamcrest import *


def test_create_group_success(shared_zone_test_context):
    """
    Tests that creating a group works
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None

    try:
        new_group = {
            'name': 'test-create-group-success',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'ok'}],
            'admins': [{'id': 'ok'}]
        }
        result = client.create_group(new_group, status=200)

        assert_that(result['name'], is_(new_group['name']))
        assert_that(result['email'], is_(new_group['email']))
        assert_that(result['description'], is_(new_group['description']))
        assert_that(result['status'], is_('Active'))
        assert_that(result['created'], not_none())
        assert_that(result['id'], not_none())
        assert_that(result['members'], has_length(1))
        assert_that(result['members'][0]['id'], is_('ok'))
        assert_that(result['admins'], has_length(1))
        assert_that(result['admins'][0]['id'], is_('ok'))

    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))


def test_creator_is_an_admin(shared_zone_test_context):
    """
    Tests that the creator is an admin
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None

    try:
        new_group = {
            'name': 'test-create-group-success',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'ok'}],
            'admins': []
        }
        result = client.create_group(new_group, status=200)

        assert_that(result['name'], is_(new_group['name']))
        assert_that(result['email'], is_(new_group['email']))
        assert_that(result['description'], is_(new_group['description']))
        assert_that(result['status'], is_('Active'))
        assert_that(result['created'], not_none())
        assert_that(result['id'], not_none())
        assert_that(result['members'], has_length(1))
        assert_that(result['members'][0]['id'], is_('ok'))
        assert_that(result['admins'], has_length(1))
        assert_that(result['admins'][0]['id'], is_('ok'))

    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))


def test_create_group_without_name(shared_zone_test_context):
    """
    Tests that creating a group without a name fails
    """
    client = shared_zone_test_context.ok_vinyldns_client

    new_group = {
        'email': 'test@test.com',
        'description': 'this is a description',
        'members': [{'id': 'ok'}],
        'admins': [{'id': 'ok'}]
    }
    errors = client.create_group(new_group, status=400)['errors']
    assert_that(errors[0], is_("Missing Group.name"))


def test_create_group_without_email(shared_zone_test_context):
    """
    Tests that creating a group without an email fails
    """
    client = shared_zone_test_context.ok_vinyldns_client

    new_group = {
        'name': 'without-email',
        'description': 'this is a description',
        'members': [{'id': 'ok'}],
        'admins': [{'id': 'ok'}]
    }
    errors = client.create_group(new_group, status=400)['errors']
    assert_that(errors[0], is_("Missing Group.email"))


def test_create_group_without_name_or_email(shared_zone_test_context):
    """
    Tests that creating a group without name or an email fails
    """
    client = shared_zone_test_context.ok_vinyldns_client

    new_group = {
        'description': 'this is a description',
        'members': [{'id': 'ok'}],
        'admins': [{'id': 'ok'}]
    }
    errors = client.create_group(new_group, status=400)['errors']
    assert_that(errors, has_length(2))
    assert_that(errors, contains_inanyorder(
        "Missing Group.name",
        "Missing Group.email"
    ))


def test_create_group_without_members_or_admins(shared_zone_test_context):
    """
    Tests that creating a group without members or admins fails
    """
    client = shared_zone_test_context.ok_vinyldns_client

    new_group = {
        'name': 'some-group-name',
        'email': 'test@test.com',
        'description': 'this is a description'
    }
    errors = client.create_group(new_group, status=400)['errors']
    assert_that(errors, has_length(2))
    assert_that(errors, contains_inanyorder(
        "Missing Group.members",
        "Missing Group.admins"
    ))


def test_create_group_adds_admins_as_members(shared_zone_test_context):
    """
    Tests that creating a group adds admins as members
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None
    try:
        new_group = {
            'name': 'test-create-group-add-admins-as-members',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [],
            'admins': [{'id': 'ok'}]
        }
        result = client.create_group(new_group, status=200)

        assert_that(result['name'], is_(new_group['name']))
        assert_that(result['email'], is_(new_group['email']))
        assert_that(result['description'], is_(new_group['description']))
        assert_that(result['status'], is_('Active'))
        assert_that(result['created'], not_none())
        assert_that(result['id'], not_none())
        assert_that(result['members'][0]['id'], is_('ok'))
        assert_that(result['admins'][0]['id'], is_('ok'))
    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))


def test_create_group_duplicate(shared_zone_test_context):
    """
    Tests that creating a group that has already been created fails
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None
    try:
        new_group = {
            'name': 'test-create-group-duplicate',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'ok'}],
            'admins': [{'id': 'ok'}]
        }

        result = client.create_group(new_group, status=200)
        client.create_group(new_group, status=409)

    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))


def test_create_group_no_members(shared_zone_test_context):
    """
    Tests that creating a group that has no members adds current user as a member and an admin
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None

    try:
        new_group = {
            'name': 'test-create-group-no-members',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [],
            'admins': []
        }

        result = client.create_group(new_group, status=200)
        assert_that(result['members'][0]['id'], is_('ok'))
        assert_that(result['admins'][0]['id'], is_('ok'))
    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))


def test_create_group_adds_admins_to_member_list(shared_zone_test_context):
    """
    Tests that creating a group adds admins to member list
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result = None

    try:
        new_group = {
            'name': 'test-create-group-add-admins-to-members',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [{'id': 'ok'}],
            'admins': [{'id': 'dummy'}]
        }

        result = client.create_group(new_group, status=200)
        assert_that(map(lambda x: x['id'], result['members']), contains('ok', 'dummy'))
        assert_that(result['admins'][0]['id'], is_('dummy'))
    finally:
        if result:
            client.delete_group(result['id'], status=(200, 404))
@ -1,143 +0,0 @@
import pytest
import uuid
import json

from hamcrest import *
from vinyldns_python import VinylDNSClient
from vinyldns_context import VinylDNSTestContext


def test_delete_group_success(shared_zone_test_context):
    """
    Tests that we can delete a group that has been created
    """

    client = shared_zone_test_context.ok_vinyldns_client
    saved_group = None
    try:
        new_group = {
            'name': 'test-delete-group-success',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [ { 'id': 'ok'} ],
            'admins': [ { 'id': 'ok'} ]
        }
        saved_group = client.create_group(new_group, status=200)
        result = client.delete_group(saved_group['id'], status=200)
        assert_that(result['status'], is_('Deleted'))
    finally:
        if result:
            client.delete_group(saved_group['id'], status=(200,404))


def test_delete_group_not_found(shared_zone_test_context):
    """
    Tests that deleting a group that does not exist returns a 404
    """
    client = shared_zone_test_context.ok_vinyldns_client
    client.delete_group('doesntexist', status=404)


def test_delete_group_that_is_already_deleted(shared_zone_test_context):
    """
    Tests that deleting a group that is already deleted
    """

    client = shared_zone_test_context.ok_vinyldns_client
    saved_group = None

    try:
        new_group = {
            'name': 'test-delete-group-already',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [ { 'id': 'ok'} ],
            'admins': [ { 'id': 'ok'} ]
        }
        saved_group = client.create_group(new_group, status=200)

        client.delete_group(saved_group['id'], status=200)
        client.delete_group(saved_group['id'], status=404)

    finally:
        if saved_group:
            client.delete_group(saved_group['id'], status=(200,404))


def test_delete_admin_group(shared_zone_test_context):
    """
    Tests that we cannot delete a group that is the admin of a zone
    """
    client = shared_zone_test_context.ok_vinyldns_client
    result_group = None
    result_zone = None

    try:
        # Create group
        new_group = {
            'name': 'test-delete-group-already',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [ { 'id': 'ok'} ],
            'admins': [ { 'id': 'ok'} ]
        }

        result_group = client.create_group(new_group, status=200)
        print result_group

        # Create zone with that group ID as admin
        zone = {
            'name': 'one-time.',
            'email': 'test@test.com',
            'adminGroupId': result_group['id'],
            'connection': {
                'name': 'vinyldns.',
                'keyName': VinylDNSTestContext.dns_key_name,
                'key': VinylDNSTestContext.dns_key,
                'primaryServer': VinylDNSTestContext.dns_ip
            },
            'transferConnection': {
                'name': 'vinyldns.',
                'keyName': VinylDNSTestContext.dns_key_name,
                'key': VinylDNSTestContext.dns_key,
                'primaryServer': VinylDNSTestContext.dns_ip
            }
        }

        result = client.create_zone(zone, status=202)
        result_zone = result['zone']
        client.wait_until_zone_active(result[u'zone'][u'id'])

        client.delete_group(result_group['id'], status=400)

        # Delete zone
        client.delete_zone(result_zone['id'], status=202)
        client.wait_until_zone_deleted(result_zone['id'])

        # Should now be able to delete group
        client.delete_group(result_group['id'], status=200)
    finally:
        if result_zone:
            client.delete_zone(result_zone['id'], status=(202,404))
        if result_group:
            client.delete_group(result_group['id'], status=(200,404))


def test_delete_group_not_authorized(shared_zone_test_context):
    """
    Tests that only the admins can delete a zone
    """
    ok_client = shared_zone_test_context.ok_vinyldns_client
    not_admin_client = shared_zone_test_context.dummy_vinyldns_client
    try:
        new_group = {
            'name': 'test-delete-group-not-authorized',
            'email': 'test@test.com',
            'description': 'this is a description',
            'members': [ { 'id': 'ok'} ],
            'admins': [ { 'id': 'ok'} ]
        }
        saved_group = ok_client.create_group(new_group, status=200)
        not_admin_client.delete_group(saved_group['id'], status=403)
    finally:
        if saved_group:
            ok_client.delete_group(saved_group['id'], status=(200,404))
Some files were not shown because too many files have changed in this diff