Getting Started

Table of Contents

  • Project Structure
  • Developer Requirements
  • Docker
  • Validating everything
  • Starting the API server locally
  • Starting the Portal locally
  • Configuration
  • Testing
  • Handy Scripts

Project Structure

Make sure that you have the requirements installed before proceeding.

The main codebase is a multi-module Scala project. To start working with it, run sbt from the root directory. Most of the code can be found in the modules directory. The following modules are present (an example sbt session follows the list):

  • root - the parent project; running tasks here runs them against all sub-modules
  • api - the engine behind VinylDNS. It exposes the REST API that all clients interact with.
  • core - contains code applicable across modules
  • portal - the web user interface for VinylDNS
  • docs - the API Documentation for VinylDNS
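
For example, a quick sbt session that lists the sub-modules and switches between them (projects and project are standard sbt commands, not VinylDNS-specific):
$ sbt
> projects
> project api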

VinylDNS API

The API is the RESTful API for interacting with VinylDNS. The code is found in modules/api. The following technologies are used:

  • Akka HTTP - Used primarily for REST and HTTP calls. We migrated code from Spray.io, so Akka HTTP was a rather seamless upgrade
  • FS2 - Used for backend change processing off of message queues. FS2 has back-pressure built in, and gives us tools like throttling and concurrency.
  • Cats Effect - We are currently migrating away from Future as our primary type and towards cats effect IO. Hopefully, one day, all the things will be using IO.
  • Cats - Used for functional programming. There is presently a hybrid of some things in scalaz and others in cats. We are migrating away from scalaz, so prefer cats when building new code.
  • PureConfig - For loading configuration values. We are currently migrating to PureConfig everywhere; not all configuration is loaded through it yet.

The API has the following dependencies:

  • MySQL - the SQL database that houses zone data
  • DynamoDB - where all of the other data is stored
  • SQS - for managing concurrent updates and enabling high-availability
  • Bind9 - for testing integration with a real DNS system

The API Code

The API code can be found in modules/api

  • functional_test - contains the python black box / regression tests
  • src/it - integration tests
  • src/main - the main source code
  • src/test - unit tests
  • src/universal - items that are packaged in the docker image for the VinylDNS API

The package structure for the source code follows:

  • vinyldns.api.domain - contains the core front-end logic. This includes things like the application services, repository interfaces, domain model, validations, and business rules.
  • vinyldns.api.engine - the back-end processing engine. This is where we process commands including record changes, zone changes, and zone syncs.
  • vinyldns.api.protobuf - marshalling and unmarshalling to and from protobuf to types in our system
  • vinyldns.api.repository - repository implementations live here
  • vinyldns.api.route - http endpoints

VinylDNS Portal

The Portal project (found in modules/portal) is the user interface for VinylDNS.

The portal is mostly a shim around the API; most actions in the user interface are translated into API calls.

The features that the Portal provides that are not in the API include:

  • Authentication against LDAP
  • Creation of users - when a user logs in for the first time, VinylDNS automatically creates a user for them in the database with their LDAP information.

Developer Requirements

  • sbt
  • Java 8
  • Python 2.7
  • virtualenv
  • docker
  • wget
  • Protobuf 2.6.1
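
A quick way to check what you already have installed, assuming each tool is on your PATH (exact version flags can vary slightly between installs):
$ java -version
$ python --version
$ virtualenv --version
$ docker --version
$ wget --version
$ protoc --version
$ sbt sbtVersion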

Installing Protobuf on a Mac

We use the sbt-protobuf plugin (https://github.com/sbt/sbt-protobuf), currently pinned to v0.5.2, which only supports protobuf versions up to v2.6.1.

Run protoc --version; if it is not 2.6.1, then:

  1. Note that on Mac OS, brew install protobuf will install a version too new for this project. If you have protobuf installed through brew, run brew uninstall protobuf first
  2. To install protobuf v2.6.1, go to https://github.com/google/protobuf/releases/tag/v2.6.1, and download protobuf-2.6.1.tar.gz
  3. Run the following commands to extract the tar, cd into it, and configure/install:
$ cd ~/Downloads; tar -zxvf protobuf-2.6.1.tar.gz; cd protobuf-2.6.1
$ ./configure
$ make
$ make check
$ sudo make install

  4. Finally, run protoc --version to confirm you are on v2.6.1

Docker

Install the latest version of docker. Docker must be running in order to work with VinylDNS on your machine, so be sure to start it before moving further.
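
A simple sanity check: if docker info succeeds, the daemon is running.
$ docker info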

How to use the Docker Image

Starting a vinyldns-api server instance

VinylDNS depends on several backing services, including mysql, sqs, dynamodb, and a DNS server. Their settings can be passed in as environment variables, or you can override the config file with your own settings.

Environment variables

  1. MYSQL_ADDRESS - the IP address of the mysql server; defaults to vinyldns-mysql assuming a docker compose setup
  2. MYSQL_PORT - the port of the mysql server; defaults to 3306
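
For example, a minimal sketch of passing these settings when starting the container directly with docker run; the image name vinyldns/api and the address 172.17.0.2 below are illustrative assumptions, not part of this guide:
$ docker run -d -p 9000:9000 \
    -e MYSQL_ADDRESS=172.17.0.2 \
    -e MYSQL_PORT=3306 \
    vinyldns/api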

Volume Mounts

vinyldns exposes volumes that allow the user to customize the runtime. Those mounts include:

  • /opt/docker/lib_extra - place here additional jar files that need to be loaded into the classpath when the application starts up. This is used for "plugins" that are proprietary or not part of the standard build. All jar files here will be placed on the class path.
  • /opt/docker/conf - place an application.conf file here with your own custom settings. This can be easier than passing in environment variables.
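
A sketch of using both mounts, again assuming an image named vinyldns/api and local my-plugins/ and my-conf/ directories that you provide yourself:
$ docker run -d -p 9000:9000 \
    -v $(pwd)/my-plugins:/opt/docker/lib_extra \
    -v $(pwd)/my-conf:/opt/docker/conf \
    vinyldns/api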

Ports

vinyldns only exposes port 9000 for HTTP access to all endpoints

Starting a vinyldns installation locally in docker

There is a handy docker-compose file for spinning up the production docker image on your local machine, found under docker/docker-compose-build.yml

From the root directory run...

> docker-compose -f ./docker/docker-compose-build.yml up -d

This will start up all the dependencies as well as the api server. Once the api server is running, you can verify it is up by running curl -v http://localhost:9000/status

To stop the local setup, run ./bin/stop-all-docker-containers.sh from the project root.

Validating everything

VinylDNS comes with a build script ./build.sh that validates, verifies, and runs functional tests. Note: this takes a while to run and is typically only necessary if you want to simulate the same process that runs on the build servers.
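
To run it, from the project root:
$ ./build.sh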

When functional tests run, you will see a lot of output intermingled together across the various containers. You can view only the output of the functional tests at target/vinyldns-functest.log. If you want to see the docker log output from any one container, you can view them after the tests complete at:

  • target/vinyldns-api.log - the api server logs
  • target/vinyldns-bind9.log - the bind9 DNS server logs
  • target/vinyldns-dynamodb.log - the DynamoDB server logs
  • target/vinyldns-elasticmq.log - the ElasticMQ (SQS) server logs
  • target/vinyldns-functest.log - the output of running the functional tests
  • target/vinyldns-mysql.log - the MySQL server logs

When the func tests complete, the entire docker setup will be automatically torn down.

Starting the API server locally

To start the API for integration, functional, or portal testing, start up sbt by running sbt from the root directory, and then (a full example session follows this list):

  • project api to change the sbt project to the api
  • dockerComposeUp to spin up the dependencies on your machine.
  • reStart to start up the API server
  • Wait until you see the message VINYLDNS SERVER STARTED SUCCESSFULLY before working with the server
  • To stop the VinylDNS server, run reStop from the api project
  • To stop the dependent docker containers, run dockerComposeStop from the api project
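
Putting those steps together, a typical session looks like this (wait for the VINYLDNS SERVER STARTED SUCCESSFULLY message before running anything against the server):
$ sbt
> project api
> dockerComposeUp
> reStart
> reStop
> dockerComposeStop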

Starting the Portal locally

To run the portal locally, you first have to start up the VinylDNS API Server (see instructions above). Once that is done, in the same sbt session or a different one, go to project portal and then execute ;preparePortal; run.
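
For example, in a fresh sbt session with the API already running:
$ sbt
> project portal
> ;preparePortal; run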

Testing the portal against your own LDAP directory

Often, it is valuable to test locally against your own LDAP directory. This is possible to do; just take care when following these steps not to accidentally check in secrets or your own environment information in future PRs.

  1. Create a file modules/portal/conf/local.conf. This file is added to .gitignore so it should not be committed
  2. Configure your own LDAP settings in local.conf. See the LDAP section of modules/portal/conf/application.conf for the expected format. Be sure to set portal.test_login = false in that file to override the test setting
  3. If you need SSL certs, you will need to create a java keystore that holds your SSL certificates. The portal only reads from the trust store, so you do not need to pass in the password to the app.
  4. Put the trust store in modules/portal/private directory. It is also added to .gitignore to prevent you from accidentally committing it.
  5. Start sbt in a separate terminal by running sbt -Djavax.net.ssl.trustStore="modules/portal/private/trustStore.jks"
  6. Go to project portal and type ;preparePortal;run to start up the portal
  7. You can now login using your own LDAP repository going to http://localhost:9001/login
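
Putting steps 5 and 6 together, the session looks like:
$ sbt -Djavax.net.ssl.trustStore="modules/portal/private/trustStore.jks"
> project portal
> ;preparePortal;run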

Configuration

Configuration of the application is done using Typesafe Config.

  • reference.conf contains the default configuration values.
  • application.conf contains environment-specific overrides of the defaults.
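
For local experimentation you can also point the application at an alternate file using the standard Typesafe Config config.file system property; this is generic Typesafe Config behavior rather than a VinylDNS-specific setting, and depending on how the process is forked the property may need to reach the application JVM:
$ sbt -Dconfig.file=/path/to/local-application.conf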

Testing

Unit Tests

  1. First, start up your scala build tool: sbt. I usually do a clean immediately after starting.
  2. (Optionally) Go to the project you want to work on, for example project api for the api; project portal for the portal.
  3. Run all unit tests by just running test
  4. Run an individual unit test by running testOnly *MySpec
  5. If you are working on a unit test and production code at the same time, use ~, which automatically compiles in the background for you: ~testOnly *MySpec
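
Putting the steps together, a typical unit-test session (using the hypothetical *MySpec name from above):
$ sbt
> project api
> test
> testOnly *MySpec
> ~testOnly *MySpec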

Integration Tests

Integration tests are used to test integration with real dependent services. We use docker to spin up those backend services for integration test development.

  1. Integration tests currently exist only in the api module. Go to the module in sbt: project api
  2. Type dockerComposeUp to start up dependent background services
  3. Run all integration tests by typing it:test.
  4. Run an individual integration test by typing it:testOnly *MyIntegrationSpec
  5. You can background compile as well if working on a single spec by using ~it:testOnly *MyIntegrationSpec
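
For example, using the hypothetical *MyIntegrationSpec name from above:
$ sbt
> project api
> dockerComposeUp
> it:test
> it:testOnly *MyIntegrationSpec
> dockerComposeStop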

Functional Tests

When adding new features, you will often need to write new functional tests that black box / regression test the API. We have over 350 (and growing) automated regression tests. The API functional tests are written in Python and live under modules/api/functional_test.

To run functional tests, make sure that you have started the api server (directions above). Then outside of sbt, cd modules/api/functional_test.
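
For example, with the API already running locally, leave sbt and change into the functional test directory; the Python tooling there is presumably why Python 2.7 and virtualenv appear in the developer requirements:
$ cd modules/api/functional_test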

Managing Test Zone Files

When functional tests are run, we spin up several docker containers. One of the docker containers is a Bind9 DNS server. If you need to add or modify the test DNS zone files, you can find them in docker/bind9/zones

Handy Scripts

Start up a complete local API server

bin/docker-up-api-server.sh - this will build VinylDNS (if not already built) and then start up an api server and all of its dependencies

The following ports and services are available:

  • mysql - 3306
  • dynamodb - 19000
  • bind9 - 19001
  • sqs - 9324
  • api server (the main VinylDNS backend app) - 9000

To kill the environment, run bin/stop-all-docker-containers.sh
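
For example, a full bring-up, status check, and tear-down:
$ bin/docker-up-api-server.sh
$ curl -v http://localhost:9000/status
$ bin/stop-all-docker-containers.sh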

Kill all docker containers

bin/stop-all-docker-containers.sh - sometimes, you can have orphaned docker containers hanging around. Run this script to tear everything down. Note: it will stop ALL docker containers on the current machine!

Start up a DNS server

bin/docker-up-dns-server.sh - fires up a DNS server. Sometimes, especially when developing func tests, you want to quickly see how new test zones / records behave without having to fire up an entire environment. This script fires up only the dns server with our test zones. The DNS server is accessible locally on port 19001.
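
For example, bring the server up and query it with dig; some-test-zone.example. is a placeholder, the real zone names are in docker/bind9/zones:
$ bin/docker-up-dns-server.sh
$ dig @127.0.0.1 -p 19001 some-test-zone.example. SOA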

Publish the API docker image

bin/docker-publish-api.sh - publishes the API docker image. You must be logged in to the repository you are publishing to (via docker login), or create a file in ~/.ivy/.dockerCredentials containing your credentials, following the format defined in https://www.scala-sbt.org/1.x/docs/Publishing.html
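
For example:
$ docker login
$ bin/docker-publish-api.sh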