diff --git a/AUTHORS.md b/AUTHORS.md new file mode 100644 index 000000000..cb91fc143 --- /dev/null +++ b/AUTHORS.md @@ -0,0 +1,45 @@ +# Authors + +This project would not be possible without the generous contributions of many people. +Thank you! If you have contributed in any way, but do not see your name here, please open a PR to add yourself! + +## Maintainers +- Paul Cleary +- Nima Eskandary +- Michael Ly +- Rebecca Star +- Britney Wright + +## Tool Maintainers +- Mike Ball: vinyldns-cli, vinyldns-terraform +- Nathan Pierce: vinyldns-ruby + +## DNS SMEs +- Joe Crowe +- David Back +- Hong Ye + +## Contributors +- Tommy Barker +- Robert Barrimond +- Charles Bitter +- Maulon Byron +- Peter Cline +- Kemar Cockburn +- Luke Cori +- Jearvon Dharrie +- Daniel Jin +- Krista Khare +- Patrick Lee +- Sheree Liu +- Deepak Mohanakrishnan +- Joshulyne Park +- Sriram Ramakrishnan +- Khalid Reid +- Trent Schmidt +- Ghafar Shah +- Jess Stodola +- Jim Wakeman +- Fei Wan +- Peter Willis +- Andrew Wang diff --git a/CODE_OF_CONDUCT.md b/CODE_OF_CONDUCT.md new file mode 100644 index 000000000..519e24176 --- /dev/null +++ b/CODE_OF_CONDUCT.md @@ -0,0 +1,46 @@ +# VinylDNS Code of Conduct + +## Our Pledge + +In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation. + +## Our Standards + +Examples of behavior that contributes to creating a positive environment include: + +* Using welcoming and inclusive language +* Being respectful of differing viewpoints and experiences +* Gracefully accepting constructive criticism +* Focusing on what is best for the community +* Showing empathy towards other community members + +Examples of unacceptable behavior by participants include: + +* The use of sexualized language or imagery and unwelcome sexual attention or advances +* Trolling, insulting/derogatory comments, and personal or political attacks +* Public or private harassment +* Publishing others' private information, such as a physical or electronic address, without explicit permission +* Other conduct which could reasonably be considered inappropriate in a professional setting + +## Our Responsibilities + +Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior. + +Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful. + +## Scope + +This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers. 
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at vinyldns-core@googlegroups.com. The project team will review and investigate all complaints, and will respond in a way that it deems appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
+
+Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4, available at [http://contributor-covenant.org/version/1/4][version]
+
+[homepage]: http://contributor-covenant.org
+[version]: http://contributor-covenant.org/version/1/4/
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
new file mode 100644
index 000000000..f7fa6333c
--- /dev/null
+++ b/CONTRIBUTING.md
@@ -0,0 +1,173 @@
+# Contributing to VinylDNS
+The following are a set of guidelines for contributing to VinylDNS and its associated repositories.
+
+## Table of Contents
+- [Code of Conduct](#code-of-conduct)
+- [Issues](#issues)
+- [Making Contributions](#making-contributions)
+- [Style Guide](#style-guide)
+- [Testing](#testing)
+- [License Header Check](#license-header-check)
+- [Release Management](#release-management)
+
+## Code of Conduct
+This project and everyone participating in it are governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By
+participating, you agree to this Code. Please report any violations of the Code of Conduct to vinyldns-core@googlegroups.com.
+
+## Issues
+If you would like to contribute to VinylDNS, you can look through `beginner` and `help-wanted` issues. We keep a list
+of these issues around to encourage participation in building the platform. In the issue list, you can choose "Labels" and
+select a specific label to narrow down the issues to review.
+
+* **Beginner issues**: only require a few lines of code to complete, and are rather isolated to one or two files. A good way
+to get comfortable changing and testing the code, and to meet everyone!
+* **Help wanted issues**: these are more involved than beginner issues; they are items that tend to come near the top of our backlog but are not necessarily in the current development stream.
+
+Besides those issues, you can sort the issue list by number of comments to find one that may be of interest. You do
+_not_ have to limit yourself to _only_ "beginner" or "help-wanted" issues.
+
+Before choosing an issue, see if anyone is assigned or has indicated they are working on it (either in a comment or via a PR).
+If someone is already working on it, you can help by reviewing their PR or asking where they are at; it doesn't make sense to duplicate
+work that is already in progress.
+
+## Making Contributions
+### Submitting a Code Contribution
+We follow the standard *GitHub Flow* for taking code contributions. The following is the process typically followed:
+
+1. Create a fork of the repository that you want to contribute code to
+1. Clone your forked repository to your local machine
+1. On your local machine, add a remote to the "main" repository (we call this "upstream") by running
+`git remote add upstream https://github.com/vinyldns/vinyldns.git`. Note: you can also use `ssh` instead of `https`
+1. Create a local branch for your work with `git checkout -b your-user-name/user-branch-name`. Prefix the branch name with your GitHub
+user name.
+1. Begin working on your local branch
+1. Make sure you run all builds before posting a PR! It's faster to run everything locally rather than waiting for
+the build server to complete its job. See [DEVELOPER_GUIDE.md](DEVELOPER_GUIDE.md) for information on local development
+1. When you are ready to contribute your code, run `git push origin your-user-name/user-branch-name` to push your changes
+to your _own fork_.
+1. Go to the [VinylDNS main repository](https://github.com/vinyldns/vinyldns.git) (or whatever repo you are contributing to)
+and you will see your change waiting and a link to "Create a PR". Click the link to create a PR.
+1. You will receive comments on your PR. Use the PR as a dialog on your changes.
+
+### Commit Messages
+* Limit the first line to 72 characters or fewer.
+* Use the present tense ("Add validation" not "Added validation").
+* Use the imperative mood ("Move database call" not "Moves database call").
+* Reference issues and other pull requests liberally after the first line. Use [GitHub Auto Linking](https://help.github.com/articles/autolinked-references-and-urls/)
+to link your PR to other issues. _Note: This is essential, otherwise we may not know what issue a PR is created for._
+* Use markdown syntax as much as you want
+
+### Modifying your Pull Requests
+Oftentimes, you will need to make revisions to the PRs that you submit. This is part of the standard process of code
+review. There are different ways that you can make revisions, but the following process is pretty standard.
+
+1. Sync with upstream first. `git checkout master && git fetch upstream && git rebase upstream/master && git push origin master`
+1. Check out your branch locally: `git checkout your-user-name/user-branch-name`
+1. Sync your branch with the latest `master`: `git rebase master`. Note: If you have merge conflicts, you will have to resolve them
+1. Revise your PR, making the changes recommended in the comments / code review
+1. When all tests pass, `git push origin your-user-name/user-branch-name` to revise your commit. GitHub automatically
+recognizes the update and will re-run verification on your PR!
+
+### Merging your Pull Request
+Once your PR is approved, one of the maintainers will merge your request for you. If you are a maintainer, you can
+merge your PR once you have the approval of at least 2 other maintainers.
+
+## Style Guide
+### Python Style Guide
+* Use snake case for everything except classes. `this_is_snake_case`; `thisIsNotSnakeCaseDoNotDoThis`
+
+## Testing
+For specific steps to run the tests see the [Testing](BUILDING.md#testing) section of the Building guide.
+
+### Python for Testing
+We use [pytest](https://docs.pytest.org/en/latest/) for python tests. It is helpful to browse the documentation
+so that you are familiar with pytest and how our functional tests operate.
+
+We also use [PyHamcrest](https://pyhamcrest.readthedocs.io/en/release-1.8/) for matchers in order to write easy
+to read tests. Please browse that documentation as well so that you are familiar with the different matchers
+for PyHamcrest. There aren't a lot, so it should be quick.
+
+Want to become a super star?
[Write custom matchers!](https://pyhamcrest.readthedocs.io/en/release-1.8/custom_matchers/)
+
+### Python Setup
+We use python for our functional tests exclusively in this project. You can find all python code under the
+`functional_test` directory.
+
+In that directory are a few important files for you to be familiar with:
+
+* vinyl_client.py - this provides the interface to the VinylDNS API. It handles signing the request for you, as well
+as building and executing the requests, and giving you back valid responses. For all new API endpoints, there should
+be a corresponding function in the vinyl_client
+* utils.py - provides general use functions that can be used anywhere in your tests. Feel free to contribute new
+functions here when you see repetition in the code
+
+Functional tests run on every build, and are designed to work _in every environment_. That means locally, in docker,
+and in production environments.
+
+The functional tests that we run live in the `functional_test/live_tests` directory. In there, we have directories / modules
+for different areas of the application.
+
+* membership - for managing groups and users
+* recordsets - for managing record sets
+* zones - for managing zones
+* internal - for internal endpoints (not intended for public consumption)
+* batch - for managing batch updates
+
+### Functional Test Context
+Our func tests use pytest contexts. There is a main test context that lives in `shared_zone_test_context.py`
+that creates and tears down a shared test context used by many functional tests. The
+beauty of pytest is that it will ensure that the test context is stood up exactly once, then all individual tests
+that use the context are called using that same context.
+
+The shared test context sets up several things that can be reused:
+
+1. An ok user and group
+1. A dummy user and group - a separate user and group helpful for testing access controls and authorization
+1. An ok zone accessible only by the ok user and ok group
+1. A dummy zone accessible only by the dummy user and dummy group
+1. An IPv6 reverse zone
+1. A normal IPv4 reverse zone
+1. A classless IPv4 reverse zone
+1. A parent zone that has child zones - used for testing NS record management and zone delegations
+
+### Really Important Test Context Rules!
+
+1. Try to use the `shared_zone_test_context` whenever possible! This reduces the time
+it takes to run functional tests (which is already measured in minutes).
+1. Limit changes to users, groups, and zones in the shared test context, as doing so could impact downstream tests
+1. If you do modify any entities in the shared zone context, roll those back when your function completes!
+
+## License Header Check
+
+### API
+VinylDNS is configured with [sbt-header](https://github.com/sbt/sbt-header). All existing scala files have the appropriate
+header. You can check for headers in `sbt` with:
+
+```bash
+> ;headerCheck;test:headerCheck;it:headerCheck
+```
+
+If you add a new file, you can add the appropriate header in `sbt` with:
+```bash
+> ;headerCreate;test:headerCreate;it:headerCreate
+```
+
+### Portal
+You can check for headers in `sbt` with:
+```
+project portal
+;headerCheck;test:headerCheck;checkJsHeaders
+```
+
+You can create headers in `sbt` with:
+```
+project portal
+;headerCreate;test:headerCreate;createJsHeaders
+```
+
+## Release Management
+As an overview, we release on a regular schedule roughly once per month. At any time, you can see the following releases scheduled using Milestones in GitHub.
+
+* - for example, 0.9.8.
This constitutes the current work that is in-flight +* - for example, 0.9.9. These are the issues pegged for the _next_ release to be worked on +* Backlog - These are the issues designated to be worked on in the not too distant future. diff --git a/DEVELOPER_GUIDE.md b/DEVELOPER_GUIDE.md new file mode 100644 index 000000000..0622c2da4 --- /dev/null +++ b/DEVELOPER_GUIDE.md @@ -0,0 +1,251 @@ +# Getting Started + +## Table of Contents +- [Project Structure](#project-structure) +- [Developer Requirements](#developer-requirements) +- [Docker](#docker-setup) +- [Configuration](#configuration) +- [Starting the API Server Locally](#starting-the-api-server-locally) +- [Starting the Portal Locally](#starting-the-portal-locally) +- [Testing](#testing) +- [Handy Scripts](#handy-scripts) + +## Project Structure +Make sure that you have the requirements installed before proceeding. + +The main codebase is a multi-module Scala project with multiple sub-modules. To start working with the project, +from the root directory run `sbt`. Most of the code can be found in the `modules` directory. +The following modules are present: + +* `root` - this is the parent project, if you run tasks here, it will run against all sub-modules +* `api` - the engine behind VinylDNS. Has the REST API that all things interact with. +* `core` - contains code applicable across modules +* `portal` - the web user interface for VinylDNS +* `docs` - the API Documentation for VinylDNS + +### VinylDNS API +The API is the RESTful API for interacting with VinylDNS. The code is found in `modules/api`. The following technologies are used: + +* [Akka HTTP](https://doc.akka.io/docs/akka-http/current/) - Used primarily for REST and HTTP calls. We migrated +code from Spray.io, so Akka HTTP was a rather seamless upgrade +* [FS2](https://functional-streams-for-scala.github.io/fs2/) - Used for backend change processing off of message queues. +FS2 has back-pressure built in, and gives us tools like throttling and concurrency. +* [Cats Effect](https://typelevel.org/cats-effect/) - We are currently migrating away from `Future` as our primary type +and towards cats effect IO. Hopefully, one day, all the things will be using IO. +* [Cats](https://typelevel.org/cats) - Used for functional programming. There is presently a hybrid of somethings +scalaz and other things cats. We are migrating away from scalaz, so when building new code prefer cats if possible. +* [PureConfig](https://pureconfig.github.io/) - For loading configuration values. We are currently migrating to +use PureConfig everywhere. Not all the places use it yet. + +The API has the following dependencies: +* MySQL - the SQL database that houses zone data +* DynamoDB - where all of the other data is stored +* SQS - for managing concurrent updates and enabling high-availability +* Bind9 - for testing integration with a real DNS system + +#### The API Code +The API code can be found in `modules/api` + +* `functional_test` - contains the python black box / regression tests +* `src/it` - integration tests +* `src/main` - the main source code +* `src/test` - unit tests +* `src/universal` - items that are packaged in the docker image for the VinylDNS API + +The package structure for the source code follows: + +* `vinyldns.api.domain` - contains the core front-end logic. This includes things like the application services, +repository interfaces, domain model, validations, and business rules. +* `vinyldns.api.engine` - the back-end processing engine. 
This is where we process commands including record changes,
+zone changes, and zone syncs.
+* `vinyldns.api.protobuf` - marshalling and unmarshalling between protobuf and the types in our system
+* `vinyldns.api.repository` - repository implementations live here
+* `vinyldns.api.route` - http endpoints
+
+### VinylDNS Portal
+The Portal project (found in `modules/portal`) is the user interface for VinylDNS. The project is built
+using:
+* [Play Framework](https://www.playframework.com/documentation/2.6.x/Home)
+* [AngularJS](https://angularjs.org/)
+
+The portal is _mostly_ a shim around the API. Most actions in the user interface are translated into API calls.
+
+The features that the Portal provides that are not in the API include:
+* Authentication against LDAP
+* Creation of users - when a user logs in for the first time, VinylDNS automatically creates a user for them in the
+database with their LDAP information.
+
+## Developer Requirements
+- sbt
+- Java 8
+- Python 2.7
+- virtualenv
+- docker
+- wget
+- Protobuf 2.6.1
+
+### Installing Protobuf on a Mac
+Protobuf support comes from the sbt plugin at `https://github.com/sbt/sbt-protobuf`; we currently have it set to v0.5.2, which can only support protobuf up to v2.6.1.
+
+Run `protoc --version`; if it is not 2.6.1, then:
+
+1. Note that on Mac OS, `brew install protobuf` will install a version too new to use with this project. If you have protobuf installed through brew, run `brew uninstall protobuf`
+1. To install protobuf v2.6.1, go to https://github.com/google/protobuf/releases/tag/v2.6.1, and download `protobuf-2.6.1.tar.gz`
+1. Run the following commands to extract the tar, cd into it, and configure/install:
+    ```
+    $ cd ~/Downloads; tar -zxvf protobuf-2.6.1.tar.gz; cd protobuf-2.6.1
+    $ ./configure
+    $ make
+    $ make check
+    $ sudo make install
+
+    ```
+1. Finally, run `protoc --version` to confirm you are on v2.6.1
+
+## Docker
+Be sure to install the latest version of [docker](https://docs.docker.com/). You must have docker running in order to work with VinylDNS on your machine.
+Be sure to start it up if it is not running before moving further.
+
+### How to use the Docker Image
+#### Starting a vinyldns-api server instance
+VinylDNS depends on several services, including MySQL, SQS, DynamoDB, and a DNS server. These can be passed in as
+environment variables, or you can override the config file with your own settings.
+
+#### Environment variables
+1. `MYSQL_ADDRESS` - the IP address of the mysql server; defaults to `vinyldns-mysql` assuming a docker compose setup
+1. `MYSQL_PORT` - the port of the mysql server; defaults to 3306
+
+#### Volume Mounts
+vinyldns exposes volumes that allow the user to customize the runtime. Those mounts include:
+
+* `/opt/docker/lib_extra` - place here additional jar files that need to be loaded into the classpath when the application starts up.
+This is used for "plugins" that are proprietary or not part of the standard build. All jar files here will be placed on the class path.
+* `/opt/docker/conf` - place an `application.conf` file here with your own custom settings. This can be easier than passing in environment
+variables.
+
+#### Ports
+vinyldns only exposes port 9000 for HTTP access to all endpoints.
+
+#### Starting a vinyldns installation locally in docker
+There is a handy docker-compose file for spinning up the production docker image on your local machine under `docker/docker-compose-build.yml`.
+
+From the root directory run...
+ +``` +> docker-compose -f ./docker/docker-compose-build.yml up -d +``` + +This will startup all the dependencies as well as the api server. Once the api server is running, you can verify it is +up by running the following `curl -v http://localhost:9000/status` + +To stop the local setup, run `./bin/stop-all-docker-containers.sh` from the project root. + +#### Validating everything +VinylDNS comes with a build script `./build.sh` that validates, verifies, and runs functional tests. Note: This +takes a while to run, and typically is only necessary if you want to simulate the same process that runs on the build +servers + +When functional tests run, you will see a lot of output intermingled together across the various containers. You can view only the output +of the functional tests at `target/vinyldns-functest.log`. If you want to see the docker log output from any one +container, you can view them after the tests complete at: + +* `target/vinyldns-api.log` - the api server logs +* `target/vinyldns-bind9.log` - the bind9 DNS server logs +* `target/vinyldns-dynamodb.log` - the DynamoDB server logs +* `target/vinyldns-elasticmq.log` - the ElasticMQ (SQS) server logs +* `target/vinyldns-functest.log` - the output of running the functional tests +* `target/vinyldns-mysql.log` - the MySQL server logs + +When the func tests complete, the entire docker setup will be automatically torn down. + +## Starting the API server locally +To start the API for integration, functional, or portal testing. Start up sbt by running `sbt` from the root directory. +* `project api` to change the sbt project to the api +* `dockerComposeUp` to spin up the dependencies on your machine. +* `reStart` to start up the API server +* Wait until you see the message `VINYLDNS SERVER STARTED SUCCESSFULLY` before working with the server +* To stop the VinylDNS server, run `reStop` from the api project +* To stop the dependent docker containers, run `dockerComposeStop` from the api project + +## Starting the Portal locally +To run the portal locally, you _first_ have to start up the VinylDNS API Server (see instructions above). Once +that is done, in the same `sbt` session or a different one, go to `project portal` and then execute `;preparePortal; run`. + +### Testing the portal against your own LDAP directory +Often, it is valuable to test locally hitting your own LDAP directory. This is possible to do, just take care when +following these steps as to not accidentally check in secrets or your own environment information in future PRs. + +1. Create a file `modules/portal/conf/local.conf`. This file is added to `.gitignore` so it should not be committed +1. Configure your own LDAP settings in local.conf. See the LDAP section of `modules/portal/conf/application.conf` for the +expected format. Be sure to set `portal.test_login = false` in that file to override the test setting +1. If you need SSL certs, you will need to create a java keystore that holds your SSL certificates. The portal only +_reads_ from the trust store, so you do not need to pass in the password to the app. +1. Put the trust store in `modules/portal/private` directory. It is also added to .gitignore to prevent you from +accidentally committing it. +1. Start `sbt` in a separate terminal by running `sbt -Djavax.net.ssl.trustStore="modules/portal/private/trustStore.jks"` +1. Go to `project portal` and type `;preparePortal;run` to start up the portal +1. 
You can now login using your own LDAP repository going to http://localhost:9001/login + +## Configuration +Configuration of the application is done using [Typesafe Config](https://github.com/typesafehub/config). + +* `reference.conf` contains the _default_ configuration values. +* `application.conf` contains environment specific overrides of the defaults + +## Testing +### Unit Tests +1. First, start up your scala build tool: `sbt`. I usually do a *clean* immediately after starting. +1. (Optionally) Go to the project you want to work on, for example `project api` for the api; `project portal` for the portal. +1. Run _all_ unit tests by just running `test` +1. Run an individual unit test by running `testOnly *MySpec` +1. If you are working on a unit test and production code at the same time, use `~` that automatically background compiles for you! +`~testOnly *MySpec` + +### Integration Tests +Integration tests are used to test integration with _real_ dependent services. We use docker to spin up those +backend services for integration test development. + +1. Integration tests are currently only in the `api` module. Go to the module in sbt `project api` +1. Type `dockerComposeUp` to start up dependent background services +1. Run all integration tests by typing `it:test`. +1. Run an individual integration test by typing `it:testOnly *MyIntegrationSpec` +1. You can background compile as well if working on a single spec by using `~it:testOnly *MyIntegrationSpec` + +### Functional Tests +When adding new features, you will often need to write new functional tests that black box / regression test the +API. We have over 350 (and growing) automated regression tests. The API functional tests are written in Python +and live under `modules/api/functional_test`. + +To run functional tests, make sure that you have started the api server (directions above). Then outside of sbt, `cd modules/api/functional_test`. + +### Managing Test Zone Files +When functional tests are run, we spin up several docker containers. One of the docker containers is a Bind9 DNS +server. If you need to add or modify the test DNS zone files, you can find them in +`docker/bind9/zones` + +## Handy Scripts +### Start up a complete local API server +`bin/docker-up-api-server.sh` - this will build vinyl (if not built) and then start up an api server and all dependencies + +The following ports and services are available: + +- mysql - 3306 +- dynamodb - 19000 +- bind9 - 19001 +- sqs - 9324 +- api server (main vinyl backend app) - 9000 + +To kill the environment, run `bin/stop-all-docker-containers.sh` + +### Kill all docker containers +`bin/stop-all-docker-containers` - sometimes, you can have orphaned docker containers hanging out. Run this +script to tear everything down. Note: It will stop ALL docker containers on the current machine! + +### Start up a DNS server +`bin/docker-up-dns-server.sh` - fires up a DNS server. Sometimes, especially when developing func tests, you want +to quickly see how new test zones / records behave without having to fire up an entire environment. This script +fires up _only_ the dns server with our test zones. The DNS server is accessible locally on port 19001. + +### Publish the API docker image +`bin/docker-publish-api.sh` - publishes the API docker image. 
You must be logged into the repo you are publishing to +using `docker login`, or create a file in `~/.ivy/.dockerCredentials` that has your credentials in it following the format defined in https://www.scala-sbt.org/1.x/docs/Publishing.html diff --git a/LICENSE b/LICENSE new file mode 100644 index 000000000..29064f28b --- /dev/null +++ b/LICENSE @@ -0,0 +1,202 @@ + Apache License + Version 2.0, January 2004 + http://www.apache.org/licenses/ + + TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION + + 1. Definitions. + + "License" shall mean the terms and conditions for use, reproduction, + and distribution as defined by Sections 1 through 9 of this document. + + "Licensor" shall mean the copyright owner or entity authorized by + the copyright owner that is granting the License. + + "Legal Entity" shall mean the union of the acting entity and all + other entities that control, are controlled by, or are under common + control with that entity. For the purposes of this definition, + "control" means (i) the power, direct or indirect, to cause the + direction or management of such entity, whether by contract or + otherwise, or (ii) ownership of fifty percent (50%) or more of the + outstanding shares, or (iii) beneficial ownership of such entity. + + "You" (or "Your") shall mean an individual or Legal Entity + exercising permissions granted by this License. + + "Source" form shall mean the preferred form for making modifications, + including but not limited to software source code, documentation + source, and configuration files. + + "Object" form shall mean any form resulting from mechanical + transformation or translation of a Source form, including but + not limited to compiled object code, generated documentation, + and conversions to other media types. + + "Work" shall mean the work of authorship, whether in Source or + Object form, made available under the License, as indicated by a + copyright notice that is included in or attached to the work + (an example is provided in the Appendix below). + + "Derivative Works" shall mean any work, whether in Source or Object + form, that is based on (or derived from) the Work and for which the + editorial revisions, annotations, elaborations, or other modifications + represent, as a whole, an original work of authorship. For the purposes + of this License, Derivative Works shall not include works that remain + separable from, or merely link (or bind by name) to the interfaces of, + the Work and Derivative Works thereof. + + "Contribution" shall mean any work of authorship, including + the original version of the Work and any modifications or additions + to that Work or Derivative Works thereof, that is intentionally + submitted to Licensor for inclusion in the Work by the copyright owner + or by an individual or Legal Entity authorized to submit on behalf of + the copyright owner. For the purposes of this definition, "submitted" + means any form of electronic, verbal, or written communication sent + to the Licensor or its representatives, including but not limited to + communication on electronic mailing lists, source code control systems, + and issue tracking systems that are managed by, or on behalf of, the + Licensor for the purpose of discussing and improving the Work, but + excluding communication that is conspicuously marked or otherwise + designated in writing by the copyright owner as "Not a Contribution." 
+ + "Contributor" shall mean Licensor and any individual or Legal Entity + on behalf of whom a Contribution has been received by Licensor and + subsequently incorporated within the Work. + + 2. Grant of Copyright License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + copyright license to reproduce, prepare Derivative Works of, + publicly display, publicly perform, sublicense, and distribute the + Work and such Derivative Works in Source or Object form. + + 3. Grant of Patent License. Subject to the terms and conditions of + this License, each Contributor hereby grants to You a perpetual, + worldwide, non-exclusive, no-charge, royalty-free, irrevocable + (except as stated in this section) patent license to make, have made, + use, offer to sell, sell, import, and otherwise transfer the Work, + where such license applies only to those patent claims licensable + by such Contributor that are necessarily infringed by their + Contribution(s) alone or by combination of their Contribution(s) + with the Work to which such Contribution(s) was submitted. If You + institute patent litigation against any entity (including a + cross-claim or counterclaim in a lawsuit) alleging that the Work + or a Contribution incorporated within the Work constitutes direct + or contributory patent infringement, then any patent licenses + granted to You under this License for that Work shall terminate + as of the date such litigation is filed. + + 4. Redistribution. You may reproduce and distribute copies of the + Work or Derivative Works thereof in any medium, with or without + modifications, and in Source or Object form, provided that You + meet the following conditions: + + (a) You must give any other recipients of the Work or + Derivative Works a copy of this License; and + + (b) You must cause any modified files to carry prominent notices + stating that You changed the files; and + + (c) You must retain, in the Source form of any Derivative Works + that You distribute, all copyright, patent, trademark, and + attribution notices from the Source form of the Work, + excluding those notices that do not pertain to any part of + the Derivative Works; and + + (d) If the Work includes a "NOTICE" text file as part of its + distribution, then any Derivative Works that You distribute must + include a readable copy of the attribution notices contained + within such NOTICE file, excluding those notices that do not + pertain to any part of the Derivative Works, in at least one + of the following places: within a NOTICE text file distributed + as part of the Derivative Works; within the Source form or + documentation, if provided along with the Derivative Works; or, + within a display generated by the Derivative Works, if and + wherever such third-party notices normally appear. The contents + of the NOTICE file are for informational purposes only and + do not modify the License. You may add Your own attribution + notices within Derivative Works that You distribute, alongside + or as an addendum to the NOTICE text from the Work, provided + that such additional attribution notices cannot be construed + as modifying the License. 
+ + You may add Your own copyright statement to Your modifications and + may provide additional or different license terms and conditions + for use, reproduction, or distribution of Your modifications, or + for any such Derivative Works as a whole, provided Your use, + reproduction, and distribution of the Work otherwise complies with + the conditions stated in this License. + + 5. Submission of Contributions. Unless You explicitly state otherwise, + any Contribution intentionally submitted for inclusion in the Work + by You to the Licensor shall be under the terms and conditions of + this License, without any additional terms or conditions. + Notwithstanding the above, nothing herein shall supersede or modify + the terms of any separate license agreement you may have executed + with Licensor regarding such Contributions. + + 6. Trademarks. This License does not grant permission to use the trade + names, trademarks, service marks, or product names of the Licensor, + except as required for reasonable and customary use in describing the + origin of the Work and reproducing the content of the NOTICE file. + + 7. Disclaimer of Warranty. Unless required by applicable law or + agreed to in writing, Licensor provides the Work (and each + Contributor provides its Contributions) on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or + implied, including, without limitation, any warranties or conditions + of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A + PARTICULAR PURPOSE. You are solely responsible for determining the + appropriateness of using or redistributing the Work and assume any + risks associated with Your exercise of permissions under this License. + + 8. Limitation of Liability. In no event and under no legal theory, + whether in tort (including negligence), contract, or otherwise, + unless required by applicable law (such as deliberate and grossly + negligent acts) or agreed to in writing, shall any Contributor be + liable to You for damages, including any direct, indirect, special, + incidental, or consequential damages of any character arising as a + result of this License or out of the use or inability to use the + Work (including but not limited to damages for loss of goodwill, + work stoppage, computer failure or malfunction, or any and all + other commercial damages or losses), even if such Contributor + has been advised of the possibility of such damages. + + 9. Accepting Warranty or Additional Liability. While redistributing + the Work or Derivative Works thereof, You may choose to offer, + and charge a fee for, acceptance of support, warranty, indemnity, + or other liability obligations and/or rights consistent with this + License. However, in accepting such obligations, You may act only + on Your own behalf and on Your sole responsibility, not on behalf + of any other Contributor, and only if You agree to indemnify, + defend, and hold each Contributor harmless for any liability + incurred by, or claims asserted against, such Contributor by reason + of your accepting any such warranty or additional liability. + + END OF TERMS AND CONDITIONS + + APPENDIX: How to apply the Apache License to your work. + + To apply the Apache License to your work, attach the following + boilerplate notice, with the fields enclosed by brackets "{}" + replaced with your own identifying information. (Don't include + the brackets!) The text should be enclosed in the appropriate + comment syntax for the file format. 
We also recommend that a + file or class name and description of purpose be included on the + same "printed page" as the copyright notice for easier + identification within third-party archives. + + Copyright 2018 Comcast Cable Communications Management, LLC + + Licensed under the Apache License, Version 2.0 (the "License"); + you may not use this file except in compliance with the License. + You may obtain a copy of the License at + + http://www.apache.org/licenses/LICENSE-2.0 + + Unless required by applicable law or agreed to in writing, software + distributed under the License is distributed on an "AS IS" BASIS, + WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + See the License for the specific language governing permissions and + limitations under the License. + diff --git a/README.md b/README.md index c8a7d54d3..90f9d5222 100644 --- a/README.md +++ b/README.md @@ -1,2 +1,96 @@ -# vinyldns -DNS Management System +[![Join the chat at https://gitter.im/vinyldns/Lobby](https://badges.gitter.im/vinyldns/vinyldns.svg)](https://gitter.im/vinyldns/Lobby) + +# VinylDNS + +VinylDNS is a vendor agnostic front-end for managing self-service DNS across your DNS systems. +The platform provides fine-grained access controls, auditing of all changes, a self-service user interface, +secure REST based API, and integration with infrastructure automation tools like Ansible and Terraform. +It is designed to integrate with your existing DNS infrastructure, and provides extensibility to fit your installation. + +Currently, VinylDNS supports: +* Connecting to existing DNS Zones +* Creating, updating, deleting DNS Records +* Working with forward and reverse zones +* Working with IP4 and IP6 records +* Governing access with fine-grained controls at the record and zone level +* Bulk updating of DNS records across zones + +VinylDNS helps secure DNS management via: +* AWS Sig4 signing of all messages to ensure that the message that was sent was not altered in transit +* Throttling of DNS updates to rate limit concurrent updates against your DNS systems +* Encrypting user secrets and TSIG keys at rest and in-transit +* Recording every change made to DNS records and zones + +Integration is simple with first-class language support including: +* java +* ruby +* python +* go-lang + +## Table of Contents +- [Roadmap](#roadmap) +- [Code of Conduct](#code-of-conduct) +- [Developer Guide](#developer-guide) +- [Project Layout](#project-layout) +- [Contributing](#contributing) +- [Contact](#contact) +- [Maintainers and Contributors](#maintainers-and-contributors) +- [Credits](#credits) + +## Roadmap +See [ROADMAP.md](ROADMAP.md) for the future plans for VinylDNS. + +## Code of Conduct +This project and everyone participating in it are governed by the [VinylDNS Code Of Conduct](CODE_OF_CONDUCT.md). By +participating, you agree to this Code. Please report any violations to the code of conduct to vinyldns-core@googlegroups.com. + +## Developer Guide +### Requirements +- sbt +- Java 8 +- Python 2.7 +- virtualenv +- docker +- wget +- Protobuf 2.6.1 + +See [DEVELOPER_GUIDE.md](DEVELOPER_GUIDE.md) for instructions on setting up VinylDNS locally. + +## Project Layout +* [API](modules/api): the API is the main engine for all of VinylDNS. This is the most active area of the codebase, as everything else typically just funnels through +the API. More detail on the API can be provided below. +* [Portal](modules/portal): The portal is a user interface wrapper around the API. 
Most of the business rules, logic, and processing can be found in the API. The +_only_ feature in the portal not found in the API is creation of users and user authentication. +* [Documentation](modules/docs): The documentation is primarily in support of the API. + +For more details see the [project structure](DEVELOPER_GUIDE.md#project-structure) in the Developer Guide. + +## Contributing +See the [Contributing Guide](CONTRIBUTING.md). + +## Contact +- [Gitter](https://gitter.im/vinyldns/Lobby) +- [Mailing List](https://groups.google.com/forum/#!forum/vinyldns) +- If you have any security concerns please contact the maintainers directly vinyldns-core@googlegroups.com + +## Maintainers and Contributors +The current maintainers (people who can merge pull requests) are: +- Paul Cleary +- Michael Ly +- Rebecca Star +- Britney Wright + +See [AUTHORS.md](AUTHORS.md) for the full list of contributors to VinylDNS. + +## Credits +VinylDNS would not be possible without the help of many other pieces of open source software. Thank you open source world! + +Initial development of DynamoDBHelper done by [Roland Kuhn](https://github.com/rkuhn) from https://github.com/akka/akka-persistence-dynamodb/blob/8d7495821faef754d97759f0d3d35ed18fc17cc7/src/main/scala/akka/persistence/dynamodb/journal/DynamoDBHelper.scala + +Given the Apache 2.0 license of VinylDNS, we specifically want to call out the following libraries and their corresponding licenses shown below. +- [logback-classic](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html) +- [logback-core](https://github.com/qos-ch/logback) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html) +- [h2 database](http://h2database.com) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/) +- [pureconfig](https://github.com/pureconfig/pureconfig) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/) +- [pureconfig-macros](https://github.com/pureconfig/pureconfig) - [Mozilla Public License, version 2.0](https://www.mozilla.org/MPL/2.0/) +- [junit](https://junit.org/junit4/) - [Eclipse Public License 1.0](https://www.eclipse.org/legal/epl-v10.html) diff --git a/ROADMAP.md b/ROADMAP.md new file mode 100644 index 000000000..a2fee1658 --- /dev/null +++ b/ROADMAP.md @@ -0,0 +1,29 @@ +# Roadmap + +The Roadmap captures the plans for VinylDNS. There are a few high-level features that are planned for active development: + +1. DNS SEC - There is no first-class support for DNS SEC. That feature set is being defined. +1. Shared Zones - IP space and large common zones are cumbersome to manage using fine-grained ACL rules. Shared zones +enable self-service management of records via a record ownership model for access controls. Record ownership assigns +a group as the owner of the record to restrict who can modify that record. +1. Zone Management - Presently VinylDNS _connects to existing zones_ for management. Zone Management will allow users +to create and manage zones in the authoritative systems themselves. +1. Record meta data - VinylDNS will allow the "tagging" of DNS records with arbitrary key-value pairs + +In addition to large feature initiatives, we will be looking to improve how VinylDNS is operated. 
The current +installation requires the following components: + +* At least one VinylDNS API server +* At least one VinylDNS portal server +* AWS DynamoDB +* MySQL Database +* AWS SQS Message Queues + +We would like to: +* Run entirely in a single database without MySQL. This may be necessary as the query requirements of VinylDNS are +exceeding the capabilities of DynamoDB. +* Support alternative message queues, for example RabbitMQ +* Support additional databases, including PostgreSQL and MongoDB +* Support additional languages +* Support additional automation tools +* A new user interface (the existing portal is built using AngularJS, there are new and better ways to UI these days) diff --git a/bin/add-license-headers.sh b/bin/add-license-headers.sh new file mode 100755 index 000000000..7cac5e516 --- /dev/null +++ b/bin/add-license-headers.sh @@ -0,0 +1,113 @@ +#!/usr/bin/env bash + +TARGET_DIR="" +FIND_COMMAND_OPTS="" +SUPPORTED_FILE_TYPE="*" # Default is to choose all file types + +EXIT_CODE=0 + +# Global flags +CHECK_ONLY_FLAG=0 +HELP_FLAG=0 +VERBOSE_FLAG=0 + +LICENSE_TEXT="/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the \"License\"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an \"AS IS\" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */" + +print_usage () { + echo -e "Description: Prepends license/header text by recursively traversing for files.\n" + echo -e "Usage: add-license-headers.sh [-d= | --directory=] [-f= | --file-type=] [-h | --help] [-v | --verbose]\n" + echo -e "\t-c|--check-only \tEnables check-only mode. Does not actually modify files, but will return an error (exit code 1) if there are files that are missing headers." + echo -e "\t-d|--directory \tSet target directory to recursively search for files to prepend headers." + echo -e "\t-f|--file-type \tAdd case-insensitive, supported file type (eg. txt). Supports single call. If no file types are specified, all will be included by default." + echo -e "\t-v|--verbose \tEnable verbose mode." + echo -e "\t-h|--help \tPrint out usage information." +} + +# Process command line arguments + +for i in "$@" +do + case "$i" in + -c|--check-only) + CHECK_ONLY_FLAG=1 + shift + ;; + + -d=*|--directory=*) + TARGET_DIR="${i#*=}" + shift + ;; + + -f=*|--file-types=*) + SUPPORTED_FILE_TYPE="*.${i#*=}" + shift + ;; + + -v|--verbose) + VERBOSE_FLAG=1 + shift + ;; + + -h|--help) + HELP_FLAG=1 + shift + ;; + esac +done + +# Perform preliminary checks + +if [ "$HELP_FLAG" = 1 ]; then + print_usage + exit "$EXIT_CODE" +fi + +if [ -z "$TARGET_DIR" ]; then + echo "Target directory is required but not specified." + EXIT_CODE=1 +elif [ ! -d "$TARGET_DIR" ]; then + echo "Specified target directory \"$TARGET_DIR\" does not exist." + EXIT_CODE=1 +fi + +if [ "$EXIT_CODE" = 1 ]; then + echo "" + print_usage + echo "Aborting program due to errors." + exit "$EXIT_CODE" +fi + +LICENSE_TEXT_LINES=$(echo "$LICENSE_TEXT" | awk '{print NR}' | tail -1) + +for file in $(find "$TARGET_DIR" -type f -iname "$SUPPORTED_FILE_TYPE"); do + if [ ! 
-d "$file" ]; then + STARTING_TEXT=$(head -n "$LICENSE_TEXT_LINES" "$file") + if [[ "$STARTING_TEXT" != "$LICENSE_TEXT" ]]; then + if [ "$CHECK_ONLY_FLAG" = 1 ]; then + EXIT_CODE=1 + echo "$file" + else + if [ "$VERBOSE_FLAG" = 1 ]; then + echo "$file" + fi + $(printf '0i\n'"$LICENSE_TEXT"'\n\n.\nwq\n' | ed -s "$file") + fi + fi + fi +done + +exit "$EXIT_CODE" diff --git a/bin/build.sh b/bin/build.sh new file mode 100755 index 000000000..eec65e38c --- /dev/null +++ b/bin/build.sh @@ -0,0 +1,34 @@ +#!/usr/bin/env bash +DIR=$( cd $(dirname $0) ; pwd -P ) + +echo "Verifying code..." +#${DIR}/verify.sh + +#step_result=$? +step_result=0 +if [ ${step_result} != 0 ] +then + echo "Failed to verify build!!!" + exit ${step_result} +fi + +echo "Func testing the api..." +${DIR}/func-test-api.sh + +step_result=$? +if [ ${step_result} != 0 ] +then + echo "Failed API func tests!!!" + exit ${step_result} +fi + +echo "Func testing the portal..." +${DIR}/func-test-portal.sh +step_result=$? +if [ ${step_result} != 0 ] +then + echo "Failed Portal func tests!!!" + exit ${step_result} +fi + +exit 0 diff --git a/bin/docker-publish-api.sh b/bin/docker-publish-api.sh new file mode 100755 index 000000000..80d1282d3 --- /dev/null +++ b/bin/docker-publish-api.sh @@ -0,0 +1,10 @@ +#!/usr/bin/env bash +DIR=$( cd $(dirname $0) ; pwd -P ) + +cd $DIR/../ + +echo "Publishing docker image..." +sbt clean docker:publish +publish_result=$? +cd $DIR +exit ${publish_result} diff --git a/bin/docker-up-api-server.sh b/bin/docker-up-api-server.sh new file mode 100755 index 000000000..80885b1df --- /dev/null +++ b/bin/docker-up-api-server.sh @@ -0,0 +1,52 @@ +#!/bin/bash +###################################################################### +# Copies the contents of `docker` into target/scala-2.12 +# to start up dependent services via docker compose. Once +# dependent services are started up, the fat jar built by sbt assembly +# is loaded into a docker container. The api will be available +# by default on port 9000 +###################################################################### + +DIR=$( cd $(dirname $0) ; pwd -P ) +WORK_DIR=$DIR/../target/scala-2.12 +mkdir -p $WORK_DIR + +echo "Copy all docker to the target directory so we can start up properly and the docker context is small..." +cp -af $DIR/../docker $WORK_DIR/ + +echo "Copy the vinyldns.jar to the api docker folder so it is in context..." +if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then + echo "vinyldns jar not found, building..." + cd $DIR/../ + sbt api/clean api/assembly + cd $DIR +fi +cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api + +echo "Starting api server and all dependencies in the background..." +docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker up --build -d api + +VINYL_URL="http://localhost:9000" +echo "Waiting for API to be ready at ${VINYL_URL} ..." +DATA="" +RETRY=40 +while [ $RETRY -gt 0 ] +do + DATA=$(wget -O - -q -t 1 "${VINYL_URL}/ping") + if [ $? -eq 0 ] + then + echo "Succeeded in connecting to VINYL!" 
+ break + else + echo "Retrying Again" >&2 + + let RETRY-=1 + sleep 1 + + if [ $RETRY -eq 0 ] + then + echo "Exceeded retries waiting for VINYL to be ready, failing" + exit 1 + fi + fi +done diff --git a/bin/docker-up-dns-server.sh b/bin/docker-up-dns-server.sh new file mode 100755 index 000000000..864a47d0a --- /dev/null +++ b/bin/docker-up-dns-server.sh @@ -0,0 +1,5 @@ +#!/bin/bash +DIR=$( cd $(dirname $0) ; pwd -P ) + +echo "Starting ONLY the bind9 server. To start an api server use the api server script" +docker-compose -f $DIR/../docker/docker-compose-func-test.yml --project-directory $DIR/../docker up -d bind9 diff --git a/bin/func-test-api.sh b/bin/func-test-api.sh new file mode 100755 index 000000000..13a2a86a3 --- /dev/null +++ b/bin/func-test-api.sh @@ -0,0 +1,52 @@ +#!/bin/bash +###################################################################### +# Copies the contents of `docker` into target/scala-2.12 +# to start up dependent services via docker compose. Once +# dependent services are started up, the fat jar built by sbt assembly +# is loaded into a docker container. Finally, the func tests run inside +# another docker container +# At the end, we grab all the logs and place them in the target +# directory +###################################################################### + +DIR=$( cd $(dirname $0) ; pwd -P ) +WORK_DIR=$DIR/../target/scala-2.12 +mkdir -p $WORK_DIR + +echo "Cleaning up unused networks..." +docker network prune -f + +echo "Copy all docker to the target directory so we can start up properly and the docker context is small..." +cp -af $DIR/../docker $WORK_DIR/ + +echo "Copy over the functional tests as well as those that are run in a container..." +mkdir -p $WORK_DIR/functest +rsync -av --exclude='.virtualenv' $DIR/../modules/api/functional_test $WORK_DIR/docker/functest + +echo "Copy the vinyldns.jar to the api docker folder so it is in context..." +if [[ ! -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar ]]; then + echo "vinyldns jar not found, building..." + cd $DIR/../ + sbt api/clean api/assembly + cd $DIR +fi +cp -f $DIR/../modules/api/target/scala-2.12/vinyldns.jar $WORK_DIR/docker/api + +echo "Staring docker environment and running func tests..." +docker-compose -f $WORK_DIR/docker/docker-compose-func-test.yml --project-directory $WORK_DIR/docker --log-level ERROR up --build --exit-code-from functest +test_result=$? + +echo "Grabbing the logs..." + +docker logs vinyldns-api > $DIR/../target/vinyldns-api.log 2>/dev/null +docker logs vinyldns-bind9 > $DIR/../target/vinyldns-bind9.log 2>/dev/null +docker logs vinyldns-mysql > $DIR/../target/vinyldns-mysql.log 2>/dev/null +docker logs vinyldns-elasticmq > $DIR/../target/vinyldns-elasticmq.log 2>/dev/null +docker logs vinyldns-dynamodb > $DIR/../target/vinyldns-dynamodb.log 2>/dev/null +docker logs vinyldns-functest > $DIR/../target/vinyldns-functest.log 2>/dev/null + +echo "Cleaning up docker containers..." 
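+# kill and remove all docker containers on this machine (see stop-all-docker-containers.sh) so the next run starts clean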
+$DIR/./stop-all-docker-containers.sh + +echo "Func tests returned result: ${test_result}" +exit ${test_result} diff --git a/bin/func-test-portal.sh b/bin/func-test-portal.sh new file mode 100755 index 000000000..816828a2f --- /dev/null +++ b/bin/func-test-portal.sh @@ -0,0 +1,46 @@ +#!/bin/bash +###################################################################### +# Runs e2e tests against the portal +###################################################################### + +DIR=$( cd $(dirname $0) ; pwd -P ) +WORK_DIR=$DIR/../modules/portal + +function check_for() { + which $1 >/dev/null 2>&1 + EXIT_CODE=$? + if [ ${EXIT_CODE} != 0 ] + then + echo "$1 is not installed" + exit ${EXIT_CODE} + fi +} + +cd $WORK_DIR +check_for python +check_for npm + +# if the program exits before this has been captured then there must have been an error +EXIT_CODE=1 + +# javascript code generate +npm install +grunt default + +TEST_SUITES=('grunt unit') + +for TEST in "${TEST_SUITES[@]}" +do + echo "##### Running test: [$TEST]" + $TEST + EXIT_CODE=$? + echo "##### Test [$TEST] ended with status [$EXIT_CODE]" + if [ ${EXIT_CODE} != 0 ] + then + cd - + exit ${EXIT_CODE} + fi +done + +cd - +exit 0 diff --git a/bin/stop-all-docker-containers.sh b/bin/stop-all-docker-containers.sh new file mode 100755 index 000000000..39304daeb --- /dev/null +++ b/bin/stop-all-docker-containers.sh @@ -0,0 +1,6 @@ +#!/bin/bash +echo "Shutting down docker" + +docker kill $(docker ps -a -q) || echo "No docker containers to kill" +docker rm -v $(docker ps -a -q) || echo "No docker volumes to remove" +docker network prune -f diff --git a/bin/verify.sh b/bin/verify.sh new file mode 100755 index 000000000..57dfca875 --- /dev/null +++ b/bin/verify.sh @@ -0,0 +1,21 @@ +#!/bin/bash +echo 'Running tests...' + +echo 'Stopping any docker containers...' +./bin/stop-all-docker-containers.sh + +echo 'Starting up docker for integration testing and running unit and integration tests on all modules...' +sbt ";validate;verify" +verify_result=$? + +echo 'Stopping any docker containers...' +./bin/stop-all-docker-containers.sh + +if [ ${verify_result} -eq 0 ] +then + echo 'Verify successful!' + exit 0 +else + echo 'Verify failed!' 
+ exit 1 +fi diff --git a/build.sbt b/build.sbt new file mode 100644 index 000000000..f3670f864 --- /dev/null +++ b/build.sbt @@ -0,0 +1,299 @@ +import sbtprotobuf.{ProtobufPlugin => PB} +import Resolvers._ +import Dependencies._ +import CompilerOptions._ +import com.typesafe.sbt.packager.docker._ +import scoverage.ScoverageKeys.{coverageFailOnMinimum, coverageMinimum} +import org.scalafmt.sbt.ScalafmtPlugin._ +import microsites._ + +resolvers ++= additionalResolvers + +lazy val IntegrationTest = config("it") extend(Test) + +// Needed because we want scalastyle for integration tests which is not first class +val codeStyleIntegrationTest = taskKey[Unit]("enforce code style then integration test") +def scalaStyleIntegrationTest: Seq[Def.Setting[_]] = { + inConfig(IntegrationTest)(ScalastylePlugin.rawScalastyleSettings()) ++ + Seq( + scalastyleConfig in IntegrationTest := root.base / "scalastyle-test-config.xml", + scalastyleTarget in IntegrationTest := target.value / "scalastyle-it-results.xml", + scalastyleFailOnError in IntegrationTest := (scalastyleFailOnError in scalastyle).value, + (scalastyleFailOnWarning in IntegrationTest) := (scalastyleFailOnWarning in scalastyle).value, + scalastyleSources in IntegrationTest := (unmanagedSourceDirectories in IntegrationTest).value, + codeStyleIntegrationTest := scalastyle.in(IntegrationTest).toTask("").value + ) +} + +// Create a default Scala style task to run with tests +lazy val testScalastyle = taskKey[Unit]("testScalastyle") +def scalaStyleTest: Seq[Def.Setting[_]] = Seq( + (scalastyleConfig in Test) := baseDirectory.value / ".." / ".." / "scalastyle-test-config.xml", + scalastyleTarget in Test := target.value / "scalastyle-test-results.xml", + scalastyleFailOnError in Test := (scalastyleFailOnError in scalastyle).value, + (scalastyleFailOnWarning in Test) := (scalastyleFailOnWarning in scalastyle).value, + scalastyleSources in Test := (unmanagedSourceDirectories in Test).value, + testScalastyle := scalastyle.in(Test).toTask("").value +) + +lazy val compileScalastyle = taskKey[Unit]("compileScalastyle") +def scalaStyleCompile: Seq[Def.Setting[_]] = Seq( + compileScalastyle := scalastyle.in(Compile).toTask("").value +) + +def scalaStyleSettings: Seq[Def.Setting[_]] = scalaStyleCompile ++ scalaStyleTest ++ scalaStyleIntegrationTest + +// settings that should be inherited by all projects +lazy val sharedSettings = Seq( + organization := "vinyldns", + version := "0.8.0-SNAPSHOT", + scalaVersion := "2.12.6", + organizationName := "Comcast Cable Communications Management, LLC", + startYear := Some(2018), + licenses += ("Apache-2.0", new URL("https://www.apache.org/licenses/LICENSE-2.0.txt")), + scalacOptions += "-target:jvm-1.8", + scalacOptions ++= scalacOptionsByV(scalaVersion.value), + // Use wart remover to eliminate code badness + wartremoverErrors ++= Seq( + Wart.ArrayEquals, + Wart.EitherProjectionPartial, + Wart.IsInstanceOf, + Wart.JavaConversions, + Wart.Return, + Wart.LeakingSealed, + Wart.ExplicitImplicitTypes + ), + + // scala format + scalafmtOnCompile := true, + scalafmtOnCompile in IntegrationTest := true +) + +lazy val testSettings = Seq( + parallelExecution in Test := false, + parallelExecution in IntegrationTest := false, + fork in IntegrationTest := false, + testOptions in Test += Tests.Argument("-oD"), + logBuffered in Test := false +) + +lazy val apiSettings = Seq( + name := "api", + libraryDependencies ++= compileDependencies ++ testDependencies, + mainClass := Some("vinyldns.api.Boot"), + javaOptions in reStart += 
"-Dlogback.configurationFile=test/logback.xml", + coverageMinimum := 85, + coverageFailOnMinimum := true, + coverageHighlighting := true, + coverageExcludedPackages := ".*Boot.*" +) + +lazy val apiAssemblySettings = Seq( + assemblyJarName in assembly := "vinyldns.jar", + test in assembly := {}, + mainClass in assembly := Some("vinyldns.api.Boot"), + mainClass in reStart := Some("vinyldns.api.Boot"), + // there are some odd things from dnsjava including update.java and dig.java that we don't use + assemblyMergeStrategy in assembly := { + case "update.class"| "dig.class" => MergeStrategy.discard + case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "index.js") => MergeStrategy.discard + case PathList("scala", "tools", "nsc", "doc", "html", "resource", "lib", "template.js") => MergeStrategy.discard + case x => + val oldStrategy = (assemblyMergeStrategy in assembly).value + oldStrategy(x) + } +) + +lazy val apiDockerSettings = Seq( + dockerBaseImage := "openjdk:8u171-jdk", + dockerUsername := Some("vinyldns"), + packageName in Docker := "api", + dockerExposedPorts := Seq(9000), + dockerEntrypoint := Seq("/opt/docker/bin/boot"), + dockerExposedVolumes := Seq("/opt/docker/lib_extra"), // mount extra libs to the classpath + dockerExposedVolumes := Seq("/opt/docker/conf"), // mount extra config to the classpath + + // add extra libs to class path via mount + scriptClasspath in bashScriptDefines ~= (cp => cp :+ "/opt/docker/lib_extra/*"), + + // adds config file to mount + bashScriptExtraDefines += """addJava "-Dconfig.file=${app_home}/../conf/application.conf"""", + bashScriptExtraDefines += """addJava "-Dlogback.configurationFile=${app_home}/../conf/logback.xml"""", // adds logback + bashScriptExtraDefines += "(cd ${app_home} && ./wait-for-dependencies.sh && cd -)", + credentials in Docker := Seq(Credentials(Path.userHome / ".iv2" / ".dockerCredentials")), + dockerCommands ++= Seq( + Cmd("USER", "root"), // switch to root so we can install netcat + ExecCmd("RUN", "apt-get", "update"), + ExecCmd("RUN", "apt-get", "install", "-y", "netcat-openbsd"), + Cmd("USER", "daemon") // switch back to the daemon user + ), + composeFile := baseDirectory.value.getAbsolutePath + "/../../docker/docker-compose.yml" +) + +lazy val noPublishSettings = Seq( + publish := {}, + publishLocal := {}, + publishArtifact := false +) + +lazy val apiPublishSettings = Seq( + publishArtifact := false, + publishLocal := (publishLocal in Docker).value, + publish := (publish in Docker).value +) + +lazy val pbSettings = Seq( + version in ProtobufConfig := "2.6.1" +) + +lazy val allApiSettings = Revolver.settings ++ Defaults.itSettings ++ + apiSettings ++ + sharedSettings ++ + apiAssemblySettings ++ + testSettings ++ + apiPublishSettings ++ + apiDockerSettings ++ + pbSettings ++ + scalaStyleSettings + +lazy val api = (project in file("modules/api")) + .enablePlugins(JavaAppPackaging, DockerComposePlugin, AutomateHeaderPlugin, ProtobufPlugin) + .configs(IntegrationTest) + .settings(allApiSettings) + .settings(headerSettings(IntegrationTest)) + .settings(inConfig(IntegrationTest)(scalafmtConfigSettings)) + .dependsOn(core) + +lazy val root = (project in file(".")).enablePlugins(AutomateHeaderPlugin, ProtobufPlugin) + .configs(IntegrationTest) + .settings(headerSettings(IntegrationTest)) + .settings(sharedSettings) + .settings( + inConfig(IntegrationTest)(scalafmtConfigSettings), + (scalastyleConfig in Test) := baseDirectory.value / "scalastyle-test-config.xml", + (scalastyleConfig in IntegrationTest) := 
baseDirectory.value / "scalastyle-test-config.xml" + ) + .aggregate(core, api, portal) + +lazy val coreBuildSettings = Seq( + name := "core", + + // do not use unused params as NoOpCrypto ignores its constructor, we should provide a way + // to write a crypto plugin so that we fall back to a noarg constructor + scalacOptions ++= scalacOptionsByV(scalaVersion.value).filterNot(_ == "-Ywarn-unused:params") +) + +lazy val corePublishSettings = Seq( + publishMavenStyle := true, + publishArtifact in Test := false, + pomIncludeRepository := { _ => false }, + autoAPIMappings := true, + credentials += Credentials(Path.userHome / ".ivy2" / ".credentials"), + publish in Docker := {}, + mainClass := None +) + +lazy val core = (project in file("modules/core")).enablePlugins(AutomateHeaderPlugin) + .settings(sharedSettings) + .settings(coreBuildSettings) + .settings(corePublishSettings) + .settings(testSettings) + .settings(libraryDependencies ++= coreDependencies) + .settings(scalaStyleCompile ++ scalaStyleTest) + .settings( + coverageMinimum := 85, + coverageFailOnMinimum := true, + coverageHighlighting := true + ) + +val preparePortal = TaskKey[Unit]("preparePortal", "Runs NPM to prepare portal for start") +val checkJsHeaders = TaskKey[Unit]("checkJsHeaders", "Runs script to check for APL 2.0 license headers") +val createJsHeaders = TaskKey[Unit]("createJsHeaders", "Runs script to prepend APL 2.0 license headers to files") + +lazy val portal = (project in file("modules/portal")).enablePlugins(PlayScala, AutomateHeaderPlugin) + .settings(sharedSettings) + .settings(testSettings) + .settings(noPublishSettings) + .settings( + name := "portal", + libraryDependencies ++= portalDependencies, + routesGenerator := InjectedRoutesGenerator, + coverageMinimum := 75, + coverageExcludedPackages := ";views.html.*;router.*", + javaOptions in Test += "-Dconfig.file=conf/application-test.conf", + javaOptions in run += "-Dhttp.port=9001 -Dconfig.file=modules/portal/conf/application.conf", + + // adds an extra classpath to the portal loading so we can externalize jars, make sure to create the lib_extra + // directory and lay down any dependencies that are required when deploying + scriptClasspath in bashScriptDefines ~= (cp => cp :+ "lib_extra/*"), + mainClass in reStart := None, + + // we need to filter out unused for the portal as the play framework needs a lot of unused things + scalacOptions ~= { opts => opts.filterNot(p => p.contains("unused")) }, + + // runs our prepare portal process + preparePortal := { + import scala.sys.process._ + "./modules/portal/prepare-portal.sh" ! + }, + + checkJsHeaders := { + import scala.sys.process._ + "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js -c" ! + }, + + createJsHeaders := { + import scala.sys.process._ + "./bin/add-license-headers.sh -d=modules/portal/public/lib -f=js" ! 
+ }, + + // change the name of the output to portal.zip + packageName in Universal := "portal" + ) + .dependsOn(core) + +lazy val docSettings = Seq( + git.remoteRepo := "https://github.com/vinyldns/vinyldns", + micrositeGithubOwner := "VinylDNS", + micrositeGithubRepo := "vinyldns", + micrositeName := "VinylDNS", + micrositeDescription := "DNS Management Platform", + micrositeAuthor := "VinylDNS", + micrositeHomepage := "http://vinyldns.io", + micrositeDocumentationUrl := "/apidocs", + micrositeGitterChannelUrl := "vinyldns/Lobby", + micrositeShareOnSocial := false, + micrositeExtraMdFiles := Map( + file("CONTRIBUTING.md") -> ExtraMdFileConfig( + "contributing.md", + "page", + Map("title" -> "Contributing", "section" -> "contributing", "position" -> "2") + ) + ), + ghpagesNoJekyll := false, + fork in tut := true +) + +lazy val docs = (project in file("modules/docs")).enablePlugins(MicrositesPlugin) + .settings(docSettings) + +// Validate runs static checks and compile to make sure we can go +addCommandAlias("validate-api", + ";project api; clean; headerCheck; test:headerCheck; it:headerCheck; scalastyle; test:scalastyle; " + + "it:scalastyle; compile; test:compile; it:compile") +addCommandAlias("validate-core", + ";project core; clean; headerCheck; test:headerCheck; scalastyle; test:scalastyle; compile; test:compile") +addCommandAlias("validate-portal", + ";project portal; clean; headerCheck; test:headerCheck; compile; test:compile; createJsHeaders; checkJsHeaders") +addCommandAlias("validate", ";validate-core;validate-api;validate-portal") + +// Verify runs all tests and code coverage +addCommandAlias("verify", + ";project api;dockerComposeUp;project root;coverage;test;it:test;coverageReport;coverageAggregate;project api;dockerComposeStop") + +// Build the artifacts for release +addCommandAlias("build-api", ";project api;clean;assembly") +addCommandAlias("build-portal", ";project portal;clean;preparePortal;dist") +addCommandAlias("build", ";build-api;build-portal") + + diff --git a/docker/api/.dockerignore b/docker/api/.dockerignore new file mode 100644 index 000000000..f4ed141ba --- /dev/null +++ b/docker/api/.dockerignore @@ -0,0 +1,5 @@ +.DS_Store +.dockerignore +.git +.gitignore +classes \ No newline at end of file diff --git a/docker/api/Dockerfile b/docker/api/Dockerfile new file mode 100644 index 000000000..10690951c --- /dev/null +++ b/docker/api/Dockerfile @@ -0,0 +1,18 @@ +FROM openjdk:8u171-jdk-stretch + +RUN apt-get update && apt-get install -y netcat-openbsd + +# install the jar onto the server, asserts this Dockerfile is copied to target/scala-2.12 after a build +COPY vinyldns.jar /app/vinyldns-server.jar +COPY run.sh /app/run.sh +RUN chmod a+x /app/run.sh + +COPY docker.conf /app/docker.conf + +EXPOSE 9000 +EXPOSE 2551 + +# set the entry point for the container to start vinyl, specify the config resource +ENTRYPOINT ["/app/run.sh"] + + diff --git a/docker/api/docker.conf b/docker/api/docker.conf new file mode 100644 index 000000000..7356b006c --- /dev/null +++ b/docker/api/docker.conf @@ -0,0 +1,194 @@ +################################################################################################################ +# This configuration is only used by docker. Environment variables are required in order to start +# up a docker cluster appropriately, so most of the values are passed in here. Defaults assume a local docker compose +# for vinyldns running. 
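Throughout the configuration below, each setting is declared with a baked-in default and then immediately re-assigned from an optional environment variable (`key = ${?SOME_VAR}`); in HOCON the second assignment only takes effect when that variable is set. A one-line Python sketch of the same precedence rule, purely illustrative, using a variable and default taken from the queue settings below:

```python
import os

# Equivalent of `queue-url = "http://vinyldns-elasticmq:9324/queue/vinyldns"`
# followed by `queue-url = ${?SQS_QUEUE_URL}`: the env var wins only when it is defined.
queue_url = os.environ.get("SQS_QUEUE_URL", "http://vinyldns-elasticmq:9324/queue/vinyldns")
```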
+# SQS_ENDPOINT is the SQS endpoint +# SQS_QUEUE_URL is the full URL to the SQS queue +# SQS_REGION is the service region where the SQS queue lives (e.g. us-east-1) +# AWS_ACCESS_KEY is the AWS access key +# AWS_SECRET_ACCESS_KEY is the AWS secret access key +# JDBC_MIGRATION_URL - the URL for migations in the SQL database +# JDBC_URL - the full URL to the SQL database +# JDBC_USER - the SQL database user +# JDBC_PASSWORD - the SQL database password +# DYNAMODB_ENDPOINT - the endpoint for DynamoDB +# DEFAULT_DNS_ADDRESS - the server (and port if not 53) of the default DNS server +# DEFAULT_DNS_KEY_NAME - the default key name used to connect to the default DNS server +# DEFAULT_DNS_KEY_SECRET - the default key secret used to connect to the default DNS server +################################################################################################################ +vinyldns { + sqs { + embedded = false + access-key = "x" + access-key = ${?AWS_ACCESS_KEY} + + secret-key = "x" + secret-key = ${?AWS_SECRET_ACCESS_KEY} + signing-region = "x" + signing-region = ${?SQS_REGION} + service-endpoint = "http://vinyldns-elasticmq:9324/" + service-endpoint = ${?SQS_ENDPOINT} + queue-url = "http://vinyldns-elasticmq:9324/queue/vinyldns" + queue-url = ${?SQS_QUEUE_URL} + } + + rest { + host = "0.0.0.0" + port = 9000 + } + + sync-delay = 10000 + + monitoring { + logging-interval = 120s + } + + crypto { + type = "vinyldns.core.crypto.NoOpCrypto" + } + + # default settings point to the setup from docker compose + db { + name = "vinyldns" + local-mode = false # for docker only so we initialize the db every time + default { + driver = "org.mariadb.jdbc.Driver" + migrationUrl = "jdbc:mariadb://vinyldns-mysql:3306/?user=root&password=pass" + migrationUrl = ${?JDBC_MIGRATION_URL} + url = "jdbc:mariadb://vinyldns-mysql:3306/vinyldns?user=root&password=pass" + url = ${?JDBC_URL} + user = "root" + user = ${?JDBC_USER} + password = "pass" + password = ${?JDBC_PASSWORD} + poolInitialSize = 10 + poolMaxSize = 20 + connectionTimeoutMillis = 1000 + maxLifeTime = 600000 + } + } + + # default settings point to the docker compose setup + dynamo { + key = "x" + key = ${?AWS_ACCESS_KEY} + secret = "x" + secret = ${?AWS_SECRET_ACCESS_KEY} + endpoint = "http://vinyldns-dynamodb:8000" + endpoint = ${?DYNAMODB_ENDPOINT} + } + + zoneChanges { + dynamo { + tableName = "zoneChange" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + recordSet { + dynamo { + tableName = "recordSet" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + recordChange { + dynamo { + tableName = "recordChange" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + users { + dynamo { + tableName = "users" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + groups { + dynamo { + tableName = "groups" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + groupChanges { + dynamo { + tableName = "groupChanges" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + membership { + dynamo { + tableName = "membership" + provisionedReads = 30 + provisionedWrites = 30 + } + } + + defaultZoneConnection { + name = "vinyldns." + keyName = "vinyldns." + keyName = ${?DEFAULT_DNS_KEY_NAME} + + key = "nzisn+4G2ldMn0q1CV3vsg==" + key = ${?DEFAULT_DNS_KEY_SECRET} + + primaryServer = "vinyldns-bind9" + primaryServer = ${?DEFAULT_DNS_ADDRESS} + } + + defaultTransferConnection { + name = "vinyldns." + keyName = "vinyldns." 
+    keyName = ${?DEFAULT_DNS_KEY_NAME}
+
+    key = "nzisn+4G2ldMn0q1CV3vsg=="
+    key = ${?DEFAULT_DNS_KEY_SECRET}
+
+    primaryServer = "vinyldns-bind9"
+    primaryServer = ${?DEFAULT_DNS_ADDRESS}
+  }
+
+  batch-change-limit = 20
+
+  # log prometheus metrics to logger factory
+  metrics {
+    log-to-console = false
+  }
+}
+
+akka {
+  loglevel = "INFO"
+  loggers = ["akka.event.slf4j.Slf4jLogger"]
+  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
+  logger-startup-timeout = 30s
+
+  actor {
+    provider = "akka.actor.LocalActorRefProvider"
+  }
+}
+
+akka.http {
+  server {
+    # The time period within which the TCP binding process must be completed.
+    # Set to `infinite` to disable.
+    bind-timeout = 5s
+
+    # Show verbose error messages back to the client
+    verbose-error-messages = on
+  }
+
+  parsing {
+    # Spray doesn't like the AWS4 headers
+    illegal-header-warnings = on
+  }
+}
diff --git a/docker/api/run.sh b/docker/api/run.sh
new file mode 100644
index 000000000..39e827f53
--- /dev/null
+++ b/docker/api/run.sh
@@ -0,0 +1,53 @@
+#!/usr/bin/env bash
+
+# gets the docker-ized ip address, sets it to an environment variable
+export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'`
+
+export DYNAMO_ADDRESS="vinyldns-dynamodb"
+export DYNAMO_PORT=8000
+export JOURNAL_HOST="vinyldns-dynamodb"
+export JOURNAL_PORT=8000
+export MYSQL_ADDRESS="vinyldns-mysql"
+export MYSQL_PORT=3306
+export JDBC_USER=root
+export JDBC_PASSWORD=pass
+export DNS_ADDRESS="vinyldns-bind9"
+export DYNAMO_KEY="local"
+export DYNAMO_SECRET="local"
+export DYNAMO_TABLE_PREFIX=""
+export ELASTICMQ_ADDRESS="vinyldns-elasticmq"
+export DYNAMO_ENDPOINT="http://${DYNAMO_ADDRESS}:${DYNAMO_PORT}"
+export JDBC_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/vinyldns?user=${JDBC_USER}&password=${JDBC_PASSWORD}"
+export JDBC_MIGRATION_URL="jdbc:mariadb://${MYSQL_ADDRESS}:${MYSQL_PORT}/?user=${JDBC_USER}&password=${JDBC_PASSWORD}"
+
+# wait until mysql is ready...
+echo 'Waiting for MYSQL to be ready...'
+DATA=""
+RETRY=30
+while [ $RETRY -gt 0 ]
+do
+    DATA=$(nc -vzw1 vinyldns-mysql 3306)
+    if [ $? -eq 0 ]
+    then
+        break
+    else
+        echo "Retrying Again" >&2
+
+        let RETRY-=1
+        sleep .5
+
+        if [ $RETRY -eq 0 ]
+        then
+            echo "Exceeded retries waiting for MYSQL to be ready, failing"
+            exit 1
+        fi
+    fi
+done
+
+echo "Running migrations..."
+java -Dconfig.resource=db-migrations.conf -cp /app/vinyldns-server.jar db.migration.MigrationRunner
+
+echo "Starting up Vinyl..."
+sleep 2
+java -Djava.net.preferIPv4Stack=true -Dconfig.file=/app/docker.conf -Dakka.loglevel=INFO -Dlogback.configurationFile=test/logback.xml -jar /app/vinyldns-server.jar vinyldns.api.Boot
+
diff --git a/docker/bind9/etc/named.conf.local b/docker/bind9/etc/named.conf.local
new file mode 100755
index 000000000..da5063266
--- /dev/null
+++ b/docker/bind9/etc/named.conf.local
@@ -0,0 +1,160 @@
+//
+// Do any local configuration here
+//
+
+// Consider adding the 1918 zones here, if they are not used in your
+// organization
+//include "/etc/bind/zones.rfc1918";
+
+key "vinyldns."
{ + algorithm hmac-md5; + secret "nzisn+4G2ldMn0q1CV3vsg=="; +}; + +// Consider adding the 1918 zones here, if they are not used in your +// organization +//include "/etc/bind/zones.rfc1918"; +zone "vinyldns" { + type master; + file "/var/bind/vinyldns.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "old-vinyldns2" { + type master; + file "/var/bind/old-vinyldns2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "old-vinyldns3" { + type master; + file "/var/bind/old-vinyldns3.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "old-shared" { + type master; + file "/var/bind/old-shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "dummy" { + type master; + file "/var/bind/dummy.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "ok" { + type master; + file "/var/bind/ok.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "shared" { + type master; + file "/var/bind/shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "system-test" { + type master; + file "/var/bind/system-test.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "system-test-history" { + type master; + file "/var/bind/system-test-history.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "30.172.in-addr.arpa" { + type master; + file "/var/bind/30.172.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "2.0.192.in-addr.arpa" { + type master; + file "/var/bind/2.0.192.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "192/30.2.0.192.in-addr.arpa" { + type master; + file "/var/bind/192^30.2.0.192.in-addr.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa" { + type master; + file "/var/bind/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa"; + allow-update { key "vinyldns."; }; + }; + +zone "one-time" { + type master; + file "/var/bind/one-time.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "sync-test" { + type master; + file "/var/bind/sync-test.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "invalid-zone" { + type master; + file "/var/bind/invalid-zone.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-1" { + type master; + file "/var/bind/list-zones-test-searched-1.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-2" { + type master; + file "/var/bind/list-zones-test-searched-2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-searched-3" { + type master; + file "/var/bind/list-zones-test-searched-3.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-unfiltered-1" { + type master; + file "/var/bind/list-zones-test-unfiltered-1.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "list-zones-test-unfiltered-2" { + type master; + file "/var/bind/list-zones-test-unfiltered-2.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "one-time-shared" { + type master; + file "/var/bind/one-time-shared.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "parent.com" { + type master; + file "/var/bind/parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + +zone "child.parent.com" { + type master; + file "/var/bind/child.parent.com.hosts"; + allow-update { key "vinyldns."; }; + }; + diff --git a/docker/bind9/zones/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa b/docker/bind9/zones/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa new file mode 100755 index 000000000..f7842ea63 --- /dev/null +++ b/docker/bind9/zones/1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa @@ -0,0 +1,10 @@ +$ttl 38400 
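Each zone in this file permits dynamic updates signed with the `vinyldns.` TSIG key declared above; this is how records end up in the local bind9 container during testing. A minimal sketch of such a signed update using dnspython (dnspython is not part of this change; the 19001 port comes from the compose files' bind9 mapping, and the record name is made up):

```python
import dns.query
import dns.rcode
import dns.tsigkeyring
import dns.update

# Key name and secret copied from the `key "vinyldns."` stanza above (hmac-md5).
keyring = dns.tsigkeyring.from_text({"vinyldns.": "nzisn+4G2ldMn0q1CV3vsg=="})

update = dns.update.Update("ok.", keyring=keyring,
                           keyalgorithm="hmac-md5.sig-alg.reg.int")
update.add("tsig-demo", 300, "A", "10.1.1.1")  # hypothetical record

# 19001/tcp is the host port the docker-compose files map to bind9's port 53.
response = dns.query.tcp(update, "127.0.0.1", port=19001)
print(dns.rcode.to_text(response.rcode()))  # expect NOERROR
```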
+1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa. IN NS 172.17.42.1. +4.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR www.vinyldns. +5.2.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0 IN PTR mail.vinyldns. diff --git a/docker/bind9/zones/192^30.2.0.192.in-addr.arpa b/docker/bind9/zones/192^30.2.0.192.in-addr.arpa new file mode 100644 index 000000000..89b539aca --- /dev/null +++ b/docker/bind9/zones/192^30.2.0.192.in-addr.arpa @@ -0,0 +1,11 @@ +$ttl 38400 +192/30.2.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +192/30.2.0.192.in-addr.arpa. IN NS 172.17.42.1. +192 IN PTR portal.vinyldns. +194 IN PTR mail.vinyldns. +195 IN PTR test.vinyldns. diff --git a/docker/bind9/zones/2.0.192.in-addr.arpa b/docker/bind9/zones/2.0.192.in-addr.arpa new file mode 100644 index 000000000..9f4d04e35 --- /dev/null +++ b/docker/bind9/zones/2.0.192.in-addr.arpa @@ -0,0 +1,13 @@ +$ttl 38400 +2.0.192.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +2.0.192.in-addr.arpa. IN NS 172.17.42.1. +192/30 IN NS 172.17.42.1. +192 IN CNAME 192.192/30.2.0.192.in-addr.arpa. +193 IN CNAME 193.192/30.2.0.192.in-addr.arpa. +194 IN CNAME 194.192/30.2.0.192.in-addr.arpa. +195 IN CNAME 195.192/30.2.0.192.in-addr.arpa. diff --git a/docker/bind9/zones/30.172.in-addr.arpa b/docker/bind9/zones/30.172.in-addr.arpa new file mode 100755 index 000000000..dda5a3dd4 --- /dev/null +++ b/docker/bind9/zones/30.172.in-addr.arpa @@ -0,0 +1,10 @@ +$ttl 38400 +30.172.in-addr.arpa. IN SOA 172.17.42.1. admin.vinyldns.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +30.172.in-addr.arpa. IN NS 172.17.42.1. +24.0 IN PTR www.vinyl. +25.0 IN PTR mail.vinyl. diff --git a/docker/bind9/zones/child.parent.com.hosts b/docker/bind9/zones/child.parent.com.hosts new file mode 100644 index 000000000..a74630542 --- /dev/null +++ b/docker/bind9/zones/child.parent.com.hosts @@ -0,0 +1,9 @@ +$ttl 38400 +$ORIGIN child.parent.com. +@ IN SOA ns1.parent.com. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +@ IN NS ns1.parent.com. diff --git a/docker/bind9/zones/dummy.hosts b/docker/bind9/zones/dummy.hosts new file mode 100644 index 000000000..d742b4da0 --- /dev/null +++ b/docker/bind9/zones/dummy.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +dummy. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +dummy. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/invalid-zone.hosts b/docker/bind9/zones/invalid-zone.hosts new file mode 100644 index 000000000..47eae6943 --- /dev/null +++ b/docker/bind9/zones/invalid-zone.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +invalid-zone. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +invalid-zone. IN NS 172.17.42.1. +invalid-zone. IN NS not-approved.thing.com. +invalid.child.invalid-zone. IN NS 172.17.42.1. +dotted.host.invalid-zone. 
IN A 1.2.3.4 +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/list-zones-test-searched-1.hosts b/docker/bind9/zones/list-zones-test-searched-1.hosts new file mode 100644 index 000000000..c2cf966f7 --- /dev/null +++ b/docker/bind9/zones/list-zones-test-searched-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-searched-2.hosts b/docker/bind9/zones/list-zones-test-searched-2.hosts new file mode 100644 index 000000000..b531d2a19 --- /dev/null +++ b/docker/bind9/zones/list-zones-test-searched-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-searched-3.hosts b/docker/bind9/zones/list-zones-test-searched-3.hosts new file mode 100644 index 000000000..33e76e90f --- /dev/null +++ b/docker/bind9/zones/list-zones-test-searched-3.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-searched-3. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-searched-3. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-unfiltered-1.hosts b/docker/bind9/zones/list-zones-test-unfiltered-1.hosts new file mode 100755 index 000000000..9205eec0d --- /dev/null +++ b/docker/bind9/zones/list-zones-test-unfiltered-1.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-1. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-1. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/list-zones-test-unfiltered-2.hosts b/docker/bind9/zones/list-zones-test-unfiltered-2.hosts new file mode 100755 index 000000000..dfdb66493 --- /dev/null +++ b/docker/bind9/zones/list-zones-test-unfiltered-2.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +list-zones-test-unfiltered-2. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +list-zones-test-unfiltered-2. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/ok.hosts b/docker/bind9/zones/ok.hosts new file mode 100755 index 000000000..8c0a604d3 --- /dev/null +++ b/docker/bind9/zones/ok.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +ok. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +ok. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/old-shared.hosts b/docker/bind9/zones/old-shared.hosts new file mode 100755 index 000000000..a7c06b6d1 --- /dev/null +++ b/docker/bind9/zones/old-shared.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-shared. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-shared. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/old-vinyldns2.hosts b/docker/bind9/zones/old-vinyldns2.hosts new file mode 100755 index 000000000..5fdc55ce9 --- /dev/null +++ b/docker/bind9/zones/old-vinyldns2.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns2. IN SOA 172.17.42.1. admin.test.com. 
( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns2. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/old-vinyldns3.hosts b/docker/bind9/zones/old-vinyldns3.hosts new file mode 100755 index 000000000..5d514886a --- /dev/null +++ b/docker/bind9/zones/old-vinyldns3.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +old-vinyldns3. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +old-vinyldns3. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/one-time-shared.hosts b/docker/bind9/zones/one-time-shared.hosts new file mode 100755 index 000000000..654f01557 --- /dev/null +++ b/docker/bind9/zones/one-time-shared.hosts @@ -0,0 +1,8 @@ +$ttl 38400 +one-time-shared. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time-shared. IN NS 172.17.42.1. diff --git a/docker/bind9/zones/one-time.hosts b/docker/bind9/zones/one-time.hosts new file mode 100755 index 000000000..df072413e --- /dev/null +++ b/docker/bind9/zones/one-time.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +one-time. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +one-time. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/parent.com.hosts b/docker/bind9/zones/parent.com.hosts new file mode 100755 index 000000000..c3dc749f6 --- /dev/null +++ b/docker/bind9/zones/parent.com.hosts @@ -0,0 +1,15 @@ +$ttl 38400 +$ORIGIN parent.com. +@ IN SOA ns1.parent.com. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +parent.com. IN NS ns1.parent.com. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +already-exists IN A 6.6.6.6 +ns1 IN A 172.17.42.1 diff --git a/docker/bind9/zones/shared.hosts b/docker/bind9/zones/shared.hosts new file mode 100755 index 000000000..81d0f9fea --- /dev/null +++ b/docker/bind9/zones/shared.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +shared. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +shared. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/sync-test.hosts b/docker/bind9/zones/sync-test.hosts new file mode 100755 index 000000000..72024b633 --- /dev/null +++ b/docker/bind9/zones/sync-test.hosts @@ -0,0 +1,17 @@ +$ttl 38400 +sync-test. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +sync-test. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 +fqdn.sync-test. IN A 7.7.7.7 +_sip._tcp IN SRV 10 60 5060 foo.sync-test. +existing.dotted IN A 9.9.9.9 diff --git a/docker/bind9/zones/system-test-history.hosts b/docker/bind9/zones/system-test-history.hosts new file mode 100755 index 000000000..1408efda6 --- /dev/null +++ b/docker/bind9/zones/system-test-history.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +system-test-history. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test-history. IN NS 172.17.42.1. 
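The reverse zones above (30.172.in-addr.arpa, 2.0.192.in-addr.arpa, the classless 192/30 delegation, and 1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa) are the targets of the PTR changes in the functional tests further down, which refer to records by plain IP address. The PTR owner names those zones expect can be derived with the standard library (Python 3 shown; a sketch only):

```python
import ipaddress

# 192.0.2.193 sits inside the classless 192/30 delegation defined above.
print(ipaddress.ip_address("192.0.2.193").reverse_pointer)
# 193.2.0.192.in-addr.arpa

# IPv6 addresses expand to the nibble labels used by the ip6.arpa zone file.
print(ipaddress.ip_address("fd69:27cc:fe91::60").reverse_pointer)
# 0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa
```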
+jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/system-test.hosts b/docker/bind9/zones/system-test.hosts new file mode 100755 index 000000000..02f493ffc --- /dev/null +++ b/docker/bind9/zones/system-test.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +system-test. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +system-test. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/bind9/zones/vinyldns.hosts b/docker/bind9/zones/vinyldns.hosts new file mode 100644 index 000000000..905211823 --- /dev/null +++ b/docker/bind9/zones/vinyldns.hosts @@ -0,0 +1,14 @@ +$ttl 38400 +vinyldns. IN SOA 172.17.42.1. admin.test.com. ( + 1439234395 + 10800 + 3600 + 604800 + 38400 ) +vinyldns. IN NS 172.17.42.1. +jenkins IN A 10.1.1.1 +foo IN A 2.2.2.2 +test IN A 3.3.3.3 +test IN A 4.4.4.4 +@ IN A 5.5.5.5 +already-exists IN A 6.6.6.6 diff --git a/docker/docker-compose-build.yml b/docker/docker-compose-build.yml new file mode 100644 index 000000000..975a21f7e --- /dev/null +++ b/docker/docker-compose-build.yml @@ -0,0 +1,48 @@ +version: "3.0" +services: + mysql: + image: "mysql:5.7" + container_name: "vinyldns-mysql" + environment: + - MYSQL_ROOT_PASSWORD=pass # do not use quotes around the environment variables!!! + - MYSQL_ROOT_HOST=% # this is required as mysql is currently locked down to localhost + ports: + - "3306:3306" + + dynamodb: + image: "cnadiminti/dynamodb-local:2017-02-16" + container_name: "vinyldns-dynamodb" + ports: + - "19000:8000" + command: "--sharedDb --inMemory" + + bind9: + image: "vinyldns/bind9:0.0.1" + container_name: "vinyldns-bind9" + ports: + - "19001:53/udp" + - "19001:53" + volumes: + - ./bind9/etc:/var/cache/bind/config + - ./bind9/zones:/var/cache/bind/zones + + elasticmq: + image: s12v/elasticmq:0.13.8 + container_name: "vinyldns-elasticmq" + ports: + - "9324:9324" + volumes: + - ./elasticmq/custom.conf:/etc/elasticmq/elasticmq.conf + + api: + image: vinyldns/api:0.1 # the version of the docker container we want to pull + environment: + - REST_PORT=9000 + container_name: "vinyldns-api" + ports: + - "9000:9000" + depends_on: + - mysql + - bind9 + - elasticmq + - dynamodb diff --git a/docker/docker-compose-func-test.yml b/docker/docker-compose-func-test.yml new file mode 100644 index 000000000..110de9885 --- /dev/null +++ b/docker/docker-compose-func-test.yml @@ -0,0 +1,56 @@ +version: "3.0" +services: + mysql: + image: "mysql:5.7" + container_name: "vinyldns-mysql" + environment: + - MYSQL_ROOT_PASSWORD=pass # do not use quotes around the environment variables!!! + - MYSQL_ROOT_HOST=% # this is required as mysql is currently locked down to localhost + ports: + - "3306:3306" + + dynamodb: + image: "cnadiminti/dynamodb-local:2017-02-16" + container_name: "vinyldns-dynamodb" + ports: + - "19000:8000" + + bind9: + image: "vinyldns/bind9:0.0.1" + container_name: "vinyldns-bind9" + volumes: + - ./bind9/etc:/var/cache/bind/config + - ./bind9/zones:/var/cache/bind/zones + ports: + - "19001:53/tcp" + - "19001:53/udp" + + elasticmq: + image: s12v/elasticmq:0.13.8 + container_name: "vinyldns-elasticmq" + ports: + - "9324:9324" + volumes: + - ./elasticmq/custom.conf:/etc/elasticmq/elasticmq.conf + + # this file is copied into the target directory to get the jar! won't run in place as is! 
+ api: + build: + context: api + environment: + - REST_PORT=9000 + container_name: "vinyldns-api" + ports: + - "9000:9000" + depends_on: + - mysql + - bind9 + - elasticmq + - dynamodb + + functest: + build: + context: functest + container_name: "vinyldns-functest" + depends_on: + - api diff --git a/docker/docker-compose.yml b/docker/docker-compose.yml new file mode 100644 index 000000000..15015e87e --- /dev/null +++ b/docker/docker-compose.yml @@ -0,0 +1,31 @@ +version: "3.0" +services: + mysql: + image: "mysql:5.7" + environment: + - MYSQL_ROOT_PASSWORD=pass # do not use quotes around the environment variables!!! + - MYSQL_ROOT_HOST=% # this is required as mysql is currently locked down to localhost + ports: + - "3306:3306" + + dynamodb: + image: "cnadiminti/dynamodb-local:2017-02-16" + ports: + - "19000:8000" + command: "--sharedDb --inMemory" + + bind9: + image: "vinyldns/bind9:0.0.1" + ports: + - "19001:53/udp" + - "19001:53" + volumes: + - ./bind9/etc:/var/cache/bind/config + - ./bind9/zones:/var/cache/bind/zones + + elasticmq: + image: s12v/elasticmq:0.13.8 + ports: + - "9324:9324" + volumes: + - ./elasticmq/custom.conf:/etc/elasticmq/elasticmq.conf diff --git a/docker/elasticmq/Dockerfile b/docker/elasticmq/Dockerfile new file mode 100644 index 000000000..a9f515970 --- /dev/null +++ b/docker/elasticmq/Dockerfile @@ -0,0 +1,10 @@ +FROM alpine:3.2 +FROM anapsix/alpine-java:8_server-jre + +EXPOSE 9324 + +COPY run.sh /elasticmq/run.sh +COPY custom.conf /elasticmq/custom.conf +COPY elasticmq-server-0.13.2.jar /elasticmq/server.jar + +ENTRYPOINT ["/elasticmq/run.sh"] diff --git a/docker/elasticmq/custom.conf b/docker/elasticmq/custom.conf new file mode 100644 index 000000000..d0546d48a --- /dev/null +++ b/docker/elasticmq/custom.conf @@ -0,0 +1,32 @@ +include classpath("application.conf") + +node-address { + protocol = http + host = "localhost" + host = ${?APP_HOST} + port = 9324 + context-path = "" +} + +rest-sqs { + enabled = true + bind-port = 9324 + bind-hostname = "0.0.0.0" + // Possible values: relaxed, strict + sqs-limits = relaxed +} + +queues { + vinyldns { + defaultVisibilityTimeout = 10 seconds + receiveMessageWait = 0 seconds + } + vinyldns-bind9 { + defaultVisibilityTimeout = 10 seconds + receiveMessageWait = 0 seconds + } + vinyldns-zones { + defaultVisibilityTimeout = 10 seconds + receiveMessageWait = 0 seconds + } +} diff --git a/docker/elasticmq/run.sh b/docker/elasticmq/run.sh new file mode 100755 index 000000000..f498d5992 --- /dev/null +++ b/docker/elasticmq/run.sh @@ -0,0 +1,8 @@ +#!/usr/bin/env bash + +# gets the docker-ized ip address, sets it to an environment variable +export APP_HOST=`ip addr show eth0 | grep 'inet ' | awk '{print $2}' | cut -f1 -d'/'` + +echo "APP HOST = ${APP_HOST}" + +java -Djava.net.preferIPv4Stack=true -Dconfig.file=/elasticmq/custom.conf -jar /elasticmq/server.jar diff --git a/docker/functest/Dockerfile b/docker/functest/Dockerfile new file mode 100644 index 000000000..0ee4715e8 --- /dev/null +++ b/docker/functest/Dockerfile @@ -0,0 +1,20 @@ +FROM python:2.7.15-stretch + +# Install dns utils so we can run dig +RUN apt-get update && apt-get install dnsutils -y + +# The run script is what actually runs our func tests +COPY run.sh /app/run.sh +RUN chmod a+x /app/run.sh + +COPY run-tests.py /app/run-tests.py +RUN chmod a+x /app/run-tests.py + +# Copy over the functional test directory, this must have been copied into the build context previous to this building! 
+ADD functional_test /app + +# Install our func test requirements +RUN pip install --index-url https://pypi.python.org/simple/ -r /app/requirements.txt + +# set the entry point for the container to start vinyl, specify the config resource +ENTRYPOINT ["/app/run.sh"] diff --git a/docker/functest/run-tests.py b/docker/functest/run-tests.py new file mode 100644 index 000000000..1d270a6d5 --- /dev/null +++ b/docker/functest/run-tests.py @@ -0,0 +1,18 @@ +#!/usr/bin/env python +import os +import sys + +basedir = os.path.dirname(os.path.realpath(__file__)) + +report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports') +if not os.path.exists(report_dir): + os.system('mkdir -p ' + report_dir) + +import pytest + +result = 1 +result = pytest.main(list(sys.argv[1:])) + +sys.exit(result) + + diff --git a/docker/functest/run.sh b/docker/functest/run.sh new file mode 100644 index 000000000..56f7baa66 --- /dev/null +++ b/docker/functest/run.sh @@ -0,0 +1,31 @@ +#!/usr/bin/env bash + +VINYLDNS_URL="http://vinyldns-api:9000" +echo "Waiting for API to be ready at ${VINYLDNS_URL} ..." +DATA="" +RETRY=40 +while [ $RETRY -gt 0 ] +do + DATA=$(wget -O - -q -t 1 "${VINYLDNS_URL}/ping") + if [ $? -eq 0 ] + then + break + else + echo "Retrying Again" >&2 + + let RETRY-=1 + sleep 1 + + if [ $RETRY -eq 0 ] + then + echo "Exceeded retries waiting for VINYLDNS to be ready, failing" + exit 1 + fi + fi +done + +DNS_IP=$(dig +short vinyldns-bind9) +echo "Running live tests against ${VINYLDNS_URL} and DNS server ${DNS_IP}" + +cd /app +./run-tests.py live_tests -v --url=${VINYLDNS_URL} --dns-ip=${DNS_IP} diff --git a/img/vinyldns-full-logo-light.png b/img/vinyldns-full-logo-light.png new file mode 100644 index 000000000..cba2b103d Binary files /dev/null and b/img/vinyldns-full-logo-light.png differ diff --git a/modules/api/functional_test/bootstrap.sh b/modules/api/functional_test/bootstrap.sh new file mode 100755 index 000000000..0c43fe630 --- /dev/null +++ b/modules/api/functional_test/bootstrap.sh @@ -0,0 +1,12 @@ +#!/bin/bash -e + +if [ ! -d "./.virtualenv" ]; then + echo "Creating virtualenv..." + virtualenv --clear --python="$(which python2.7)" ./.virtualenv +fi + +if ! diff ./requirements.txt ./.virtualenv/requirements.txt &> /dev/null; then + echo "Installing dependencies..." 
+ .virtualenv/bin/python ./.virtualenv/bin/pip install --index-url https://pypi.python.org/simple/ -r ./requirements.txt + cp ./requirements.txt ./.virtualenv/ +fi diff --git a/modules/api/functional_test/boto_request_signer.py b/modules/api/functional_test/boto_request_signer.py new file mode 100644 index 000000000..b25326a09 --- /dev/null +++ b/modules/api/functional_test/boto_request_signer.py @@ -0,0 +1,81 @@ +import logging + +from datetime import datetime +from hashlib import sha256 + +from boto.dynamodb2.layer1 import DynamoDBConnection + +import requests.compat as urlparse + +logger = logging.getLogger(__name__) + +__all__ = [u'BotoRequestSigner'] + + +class BotoRequestSigner(object): + + def __init__(self, index_url, access_key, secret_access_key): + url = urlparse.urlparse(index_url) + self.boto_connection = DynamoDBConnection( + host = url.hostname, + port = url.port, + aws_access_key_id = access_key, + aws_secret_access_key = secret_access_key, + is_secure = False) + + @staticmethod + def canonical_date(headers): + """Derive canonical date (ISO 8601 string) from headers if possible, + or synthesize it if no usable header exists.""" + iso_format = u'%Y%m%dT%H%M%SZ' + http_format = u'%a, %d %b %Y %H:%M:%S GMT' + + def try_parse(date_string, format): + if date_string is None: + return None + try: + return datetime.strptime(date_string, format) + except ValueError: + return None + + amz_date = try_parse(headers.get(u'X-Amz-Date'), iso_format) + http_date = try_parse(headers.get(u'Date'), http_format) + fallback_date = datetime.utcnow() + + date = next(d for d in [amz_date, http_date, fallback_date] if d is not None) + return date.strftime(iso_format) + + def build_auth_header(self, method, path, headers, body, params=None): + """Construct an Authorization header, using boto.""" + + request = self.boto_connection.build_base_http_request( + method=method, + path=path, + auth_path=path, + headers=headers, + data=body, + params=params or {}) + + auth_handler = self.boto_connection._auth_handler + + timestamp = BotoRequestSigner.canonical_date(headers) + request.timestamp = timestamp[0:8] + + request.region_name = u'us-east-1' + request.service_name = u'VinylDNS' + + credential_scope = u'/'.join([request.timestamp, request.region_name, request.service_name, u'aws4_request']) + + canonical_request = auth_handler.canonical_request(request) + hashed_request = sha256(canonical_request.encode(u'utf-8')).hexdigest() + + string_to_sign = u'\n'.join([u'AWS4-HMAC-SHA256', timestamp, credential_scope, hashed_request]) + signature = auth_handler.signature(request, string_to_sign) + headers_to_sign = auth_handler.headers_to_sign(request) + + auth_header = u','.join([ + u'AWS4-HMAC-SHA256 Credential=%s' % auth_handler.scope(request), + u'SignedHeaders=%s' % auth_handler.signed_headers(headers_to_sign), + u'Signature=%s' % signature]) + + return auth_header diff --git a/modules/api/functional_test/conftest.py b/modules/api/functional_test/conftest.py new file mode 100644 index 000000000..c75f0eb53 --- /dev/null +++ b/modules/api/functional_test/conftest.py @@ -0,0 +1,77 @@ +import os +import pytest +import boto.dynamodb2 +from boto.dynamodb2.table import Table +from boto.dynamodb2.fields import HashKey +from boto.dynamodb2.fields import GlobalAllIndex + +from vinyldns_context import VinylDNSTestContext + +def pytest_addoption(parser): + """ + Adds additional options that we can parse when we run the tests, stores them in the parser / py.test context + """ + parser.addoption("--url", dest="url", 
action="store", default="http://localhost:9000", + help="URL for application to root") + parser.addoption("--dns-ip", dest="dns_ip", action="store", default="127.0.0.1:19001", + help="The ip address for the dns server to use for the tests") + parser.addoption("--dns-zone", dest="dns_zone", action="store", default="vinyldns.", + help="The zone name that will be used for testing") + parser.addoption("--dns-key-name", dest="dns_key_name", action="store", default="vinyldns.", + help="The name of the key used to sign updates for the zone") + parser.addoption("--dns-key", dest="dns_key", action="store", default="nzisn+4G2ldMn0q1CV3vsg==", + help="The tsig key") + + # optional + parser.addoption("--basic-auth", dest="basic_auth_creds", + help="Basic auth credentials in 'user:pass' format") + parser.addoption("--basic-auth-realm", dest="basic_auth_realm", + help="Basic auth realm to use with credentials supplied by \"-b\"") + parser.addoption("--iauth-creds", dest="iauth_creds", + help="Intermediary auth (codebig style) in 'key:secret' format") + parser.addoption("--oauth-creds", dest="oauth_creds", + help="OAuth credentials in consumer:secret format") + parser.addoption("--environment", dest="cim_env", action="store", default="test", + help="CIM_ENV that we are testing against.") + parser.addoption("--log-level", dest="logging_level", + help="logging level should be CRITICAL, ERROR, WARNING, INFO or DEBUG") + + +def pytest_configure(config): + """ + Loads the test context since we are no longer using run.py + """ + + # Monkey patch ssl so we do not verify ssl certs + import ssl + try: + _create_unverified_https_context = ssl._create_unverified_context + except AttributeError: + # Legacy Python that doesn't verify HTTPS certificates by default + pass + else: + # Handle target environment that doesn't support HTTPS verification + ssl._create_default_https_context = _create_unverified_https_context + + url = config.getoption("url", default="http://localhost:9000/") + if not url.endswith('/'): + url += '/' + + import sys + sys.dont_write_bytecode = True + + VinylDNSTestContext.configure(config.getoption("dns_ip"), + config.getoption("dns_zone"), + config.getoption("dns_key_name"), + config.getoption("dns_key"), + config.getoption("url")) + +def pytest_report_header(config): + """ + Overrides the test result header like we do in pyfunc test + """ + header = "Testing against CIM_ENV " + config.getoption("cim_env") + header += "\r\nURL: " + config.getoption("url") + header += "\r\nRunning from directory " + os.getcwd() + header += '\r\nTest shim directory ' + os.path.dirname(__file__) + return header diff --git a/modules/api/functional_test/live_tests/batch/create_batch_change_test.py b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py new file mode 100644 index 000000000..e83efee01 --- /dev/null +++ b/modules/api/functional_test/live_tests/batch/create_batch_change_test.py @@ -0,0 +1,2306 @@ +from hamcrest import * +from utils import * + +def does_not_contain(x): + is_not(contains(x)) + +def validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data): + assert_that(input_json['changeType'], is_(change_type)) + assert_that(input_json['inputName'], is_(input_name)) + assert_that(input_json['type'], is_(record_type)) + assert_that(record_type, is_in(['A', 'AAAA', 'CNAME', 'PTR', 'TXT', 'MX'])) + if change_type=="Add": + assert_that(input_json['ttl'], is_(ttl)) + if record_type in ["A", "AAAA"]: + 
assert_that(input_json['record']['address'], is_(record_data)) + elif record_type=="CNAME": + assert_that(input_json['record']['cname'], is_(record_data)) + elif record_type=="PTR": + assert_that(input_json['record']['ptrdname'], is_(record_data)) + elif record_type=="TXT": + assert_that(input_json['record']['text'], is_(record_data)) + elif record_type=="MX": + assert_that(input_json['record']['preference'], is_(record_data['preference'])) + assert_that(input_json['record']['exchange'], is_(record_data['exchange'])) + return + +def assert_failed_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A", ttl=200, record_data="1.1.1.1", error_messages=[]): + validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data) + assert_error(input_json, error_messages) + return + +def assert_successful_change_in_error_response(input_json, change_type="Add", input_name="fqdn.", record_type="A", ttl=200, record_data="1.1.1.1"): + validate_change_error_response_basics(input_json, change_type, input_name, record_type, ttl, record_data) + assert_that('errors' in input_json, is_(False)) + return + +def assert_change_success_response_values(changes_json, zone, index, record_name, input_name, record_data, ttl=200, record_type="A", change_type="Add"): + assert_that(changes_json[index]['zoneId'], is_(zone['id'])) + assert_that(changes_json[index]['zoneName'], is_(zone['name'])) + assert_that(changes_json[index]['recordName'], is_(record_name)) + assert_that(changes_json[index]['inputName'], is_(input_name)) + if change_type=="Add": + assert_that(changes_json[index]['ttl'], is_(ttl)) + assert_that(changes_json[index]['type'], is_(record_type)) + assert_that(changes_json[index]['id'], is_not(none())) + assert_that(changes_json[index]['changeType'], is_(change_type)) + assert_that(record_type, is_in(['A', 'AAAA', 'CNAME', 'PTR', 'TXT', 'MX'])) + if record_type in ["A", "AAAA"] and change_type=="Add": + assert_that(changes_json[index]['record']['address'], is_(record_data)) + elif record_type=="CNAME" and change_type=="Add": + assert_that(changes_json[index]['record']['cname'], is_(record_data)) + elif record_type=="PTR" and change_type=="Add": + assert_that(changes_json[index]['record']['ptrdname'], is_(record_data)) + elif record_type=="TXT" and change_type=="Add": + assert_that(changes_json[index]['record']['text'], is_(record_data)) + elif record_type=="MX" and change_type=="Add": + assert_that(changes_json[index]['record']['preference'], is_(record_data['preference'])) + assert_that(changes_json[index]['record']['exchange'], is_(record_data['exchange'])) + return + +def assert_error(input_json, error_messages): + for error in error_messages: + assert_that(input_json['errors'], has_item(error)) + assert_that(len(input_json['errors']), is_(len(error_messages))) + + +def test_create_batch_change_with_adds_success(shared_zone_test_context): + """ + Test successfully creating a batch change with adds + """ + client = shared_zone_test_context.ok_vinyldns_client + parent_zone = shared_zone_test_context.parent_zone + ok_zone = shared_zone_test_context.ok_zone + classless_delegation_zone = shared_zone_test_context.classless_zone_delegation_zone + classless_base_zone = shared_zone_test_context.classless_base_zone + ip6_reverse_zone = shared_zone_test_context.ip6_reverse_zone + + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("parent.com.", address="4.5.6.7"), + 
get_change_A_AAAA_json("parent.com", address="4.5.6.7"), + get_change_A_AAAA_json("ok.", record_type="AAAA", address="fd69:27cc:fe91::60"), + get_change_A_AAAA_json("relative.parent.com.", address="1.1.1.1"), + get_change_A_AAAA_json("relative.parent.com", address="2.2.2.2"), + get_change_CNAME_json("cname.parent.com", cname="nice.parent.com"), + get_change_CNAME_json("2cname.parent.com", cname="nice.parent.com"), + get_change_CNAME_json("4.2.0.192.in-addr.arpa.", cname="4.4/30.2.0.192.in-addr.arpa."), + get_change_PTR_json("192.0.2.193", ptrdname="www.vinyldns"), + get_change_PTR_json("192.0.2.44"), + get_change_PTR_json("fd69:27cc:fe91::60", ptrdname="www.vinyldns"), + get_change_TXT_json("txt.ok."), + get_change_TXT_json("ok."), + get_change_TXT_json("txt-unique-characters.ok.", text='a\\\\`=` =\\"Cat\\"\nattr=val'), + get_change_TXT_json("txt.2.0.192.in-addr.arpa."), + get_change_MX_json("mx.ok.", preference=0), + get_change_MX_json("mx.ok.", preference=65535), + get_change_MX_json("ok.", preference=1000, exchange="bar.foo.") + ] + } + + to_delete = [] + try: + result = client.create_batch_change(batch_change_input, status=202) + completed_batch = client.wait_until_batch_change_completed(result) + record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS + + ## validate initial response + assert_that(result['comments'], is_("this is optional")) + assert_that(result['userName'], is_("ok")) + assert_that(result['userId'], is_("ok")) + assert_that(result['id'], is_not(none())) + assert_that(completed_batch['status'], is_("Complete")) + + assert_change_success_response_values(result['changes'], zone=parent_zone, index=0, record_name="parent.com.", + input_name="parent.com.", record_data="4.5.6.7") + assert_change_success_response_values(result['changes'], zone=parent_zone, index=1, record_name="parent.com.", + input_name="parent.com.", record_data="4.5.6.7") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=2, record_name="ok.", + input_name="ok.", record_data="fd69:27cc:fe91::60", record_type="AAAA") + assert_change_success_response_values(result['changes'], zone=parent_zone, index=3, record_name="relative", + input_name="relative.parent.com.", record_data="1.1.1.1") + assert_change_success_response_values(result['changes'], zone=parent_zone, index=4, record_name="relative", + input_name="relative.parent.com.", record_data="2.2.2.2"), + assert_change_success_response_values(result['changes'], zone=parent_zone, index=5, record_name="cname", + input_name="cname.parent.com.", record_data="nice.parent.com.", record_type="CNAME") + assert_change_success_response_values(result['changes'], zone=parent_zone, index=6, record_name="2cname", + input_name="2cname.parent.com.", record_data="nice.parent.com.", record_type="CNAME") + assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=7, record_name="4", + input_name="4.2.0.192.in-addr.arpa.", record_data="4.4/30.2.0.192.in-addr.arpa.", record_type="CNAME") + assert_change_success_response_values(result['changes'], zone=classless_delegation_zone, index=8, record_name="193", + input_name="192.0.2.193", record_data="www.vinyldns.", record_type="PTR") + assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=9, record_name="44", + input_name="192.0.2.44", record_data="test.com.", record_type="PTR") + 
assert_change_success_response_values(result['changes'], zone=ip6_reverse_zone, index=10, record_name="0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", + input_name="fd69:27cc:fe91::60", record_data="www.vinyldns.", record_type="PTR") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=11, record_name="txt", + input_name="txt.ok.", record_data="test", record_type="TXT") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=12, record_name="ok.", + input_name="ok.", record_data="test", record_type="TXT") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=13, record_name="txt-unique-characters", + input_name="txt-unique-characters.ok.", record_data='a\\\\`=` =\\"Cat\\"\nattr=val', record_type="TXT") + assert_change_success_response_values(result['changes'], zone=classless_base_zone, index=14, record_name="txt", + input_name="txt.2.0.192.in-addr.arpa.", record_data="test", record_type="TXT") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=15, record_name="mx", + input_name="mx.ok.", record_data={'preference': 0, 'exchange': 'foo.bar.'}, record_type="MX") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=16, record_name="mx", + input_name="mx.ok.", record_data={'preference': 65535, 'exchange': 'foo.bar.'}, record_type="MX") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=17, record_name="ok.", + input_name="ok.", record_data={'preference': 1000, 'exchange': 'bar.foo.'}, record_type="MX") + + completed_status = [change['status'] == 'Complete' for change in completed_batch['changes']] + assert_that(all(completed_status), is_(True)) + + ## get all the recordsets created by this batch, validate + rs1 = client.get_recordset(record_set_list[0][0], record_set_list[0][1])['recordSet'] + expected1 = {'name': 'parent.com.', + 'zoneId': parent_zone['id'], + 'type': 'A', + 'ttl': 200, + 'records': [{'address': '4.5.6.7'}]} + verify_recordset(rs1, expected1) + + rs2 = client.get_recordset(record_set_list[1][0], record_set_list[1][1])['recordSet'] + assert_that(rs2, is_(rs1)) # duplicate entry, should get same thing + + rs3 = client.get_recordset(record_set_list[2][0], record_set_list[2][1])['recordSet'] + expected3 = {'name': 'ok.', + 'zoneId': ok_zone['id'], + 'type': 'AAAA', + 'ttl': 200, + 'records': [{'address': 'fd69:27cc:fe91::60'}]} + verify_recordset(rs3, expected3) + + rs4 = client.get_recordset(record_set_list[3][0], record_set_list[3][1])['recordSet'] + expected4 = {'name': 'relative', + 'zoneId': parent_zone['id'], + 'type': 'A', + 'ttl': 200, + 'records': [{'address': '1.1.1.1'}, {'address': '2.2.2.2'}]} + verify_recordset(rs4, expected4) + + rs5 = client.get_recordset(record_set_list[5][0], record_set_list[5][1])['recordSet'] + expected5 = {'name': 'cname', + 'zoneId': parent_zone['id'], + 'type': 'CNAME', + 'ttl': 200, + 'records': [{'cname': 'nice.parent.com.'}]} + verify_recordset(rs5, expected5) + + rs6 = client.get_recordset(record_set_list[6][0], record_set_list[6][1])['recordSet'] + expected6 = {'name': '2cname', + 'zoneId': parent_zone['id'], + 'type': 'CNAME', + 'ttl': 200, + 'records': [{'cname': 'nice.parent.com.'}]} + verify_recordset(rs6, expected6) + + rs7 = client.get_recordset(record_set_list[7][0], record_set_list[7][1])['recordSet'] + expected7 = {'name': '4', + 'zoneId': classless_base_zone['id'], + 'type': 'CNAME', + 'ttl': 200, + 'records': [{'cname': '4.4/30.2.0.192.in-addr.arpa.'}]} + verify_recordset(rs7, 
expected7) + + rs8 = client.get_recordset(record_set_list[8][0], record_set_list[8][1])['recordSet'] + expected8 = {'name': '193', + 'zoneId': classless_delegation_zone['id'], + 'type': 'PTR', + 'ttl': 200, + 'records': [{'ptrdname': 'www.vinyldns.'}]} + verify_recordset(rs8, expected8) + + rs9 = client.get_recordset(record_set_list[9][0], record_set_list[9][1])['recordSet'] + expected9 = {'name': '44', + 'zoneId': classless_base_zone['id'], + 'type': 'PTR', + 'ttl': 200, + 'records': [{'ptrdname': 'test.com.'}]} + verify_recordset(rs9, expected9) + + rs10 = client.get_recordset(record_set_list[10][0], record_set_list[10][1])['recordSet'] + expected10 = {'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', + 'zoneId': ip6_reverse_zone['id'], + 'type': 'PTR', + 'ttl': 200, + 'records': [{'ptrdname': 'www.vinyldns.'}]} + verify_recordset(rs10, expected10) + + rs11 = client.get_recordset(record_set_list[11][0], record_set_list[11][1])['recordSet'] + expected11 = {'name': 'txt', + 'zoneId': ok_zone['id'], + 'type': 'TXT', + 'ttl': 200, + 'records': [{'text': 'test'}]} + verify_recordset(rs11, expected11) + + rs12 = client.get_recordset(record_set_list[12][0], record_set_list[12][1])['recordSet'] + expected12 = {'name': 'ok.', + 'zoneId': ok_zone['id'], + 'type': 'TXT', + 'ttl': 200, + 'records': [{'text': 'test'}]} + verify_recordset(rs12, expected12) + + rs13 = client.get_recordset(record_set_list[13][0], record_set_list[13][1])['recordSet'] + expected13 = {'name': 'txt-unique-characters', + 'zoneId': ok_zone['id'], + 'type': 'TXT', + 'ttl': 200, + 'records': [{'text': 'a\\\\`=` =\\"Cat\\"\nattr=val'}]} + verify_recordset(rs13, expected13) + + rs14 = client.get_recordset(record_set_list[14][0], record_set_list[14][1])['recordSet'] + expected14 = {'name': 'txt', + 'zoneId': classless_base_zone['id'], + 'type': 'TXT', + 'ttl': 200, + 'records': [{'text': 'test'}]} + verify_recordset(rs14, expected14) + + rs15 = client.get_recordset(record_set_list[15][0], record_set_list[15][1])['recordSet'] + expected15 = {'name': 'mx', + 'zoneId': ok_zone['id'], + 'type': 'MX', + 'ttl': 200, + 'records': [{'preference': 0, 'exchange': 'foo.bar.'}, {'preference': 65535, 'exchange': 'foo.bar.'}]} + verify_recordset(rs15, expected15) + + rs16 = client.get_recordset(record_set_list[17][0], record_set_list[17][1])['recordSet'] + expected16 = {'name': 'ok.', + 'zoneId': ok_zone['id'], + 'type': 'MX', + 'ttl': 200, + 'records': [{'preference': 1000, 'exchange': 'bar.foo.'}]} + verify_recordset(rs16, expected16) + + finally: + clear_zoneid_rsid_tuple_list(to_delete, client) + + +def test_create_batch_change_with_updates_deletes_success(shared_zone_test_context): + """ + Test successfully creating a batch change with updates and deletes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + dummy_zone = shared_zone_test_context.dummy_zone + ok_zone = shared_zone_test_context.ok_zone + classless_zone_delegation_zone = shared_zone_test_context.classless_zone_delegation_zone + + ok_zone_acl = generate_acl_rule('Delete', groupId=shared_zone_test_context.dummy_group['id'], recordMask='.*', recordTypes=['CNAME']) + classless_zone_delegation_zone_acl = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordTypes=['PTR']) + + rs_delete_dummy = get_recordset_json(dummy_zone, "delete", "AAAA", [{"address": "1:2:3:4:5:6:7:8"}]) + rs_update_dummy = get_recordset_json(dummy_zone, "update", "A", [{"address": "1.2.3.4"}]) + 
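+    # the recordsets above and below are created before the batch runs; in the batch itself an update is expressed as a DeleteRecordSet plus an Add for the same record name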
rs_delete_ok = get_recordset_json(ok_zone, "delete", "CNAME", [{"cname": "delete.cname."}]) + rs_update_classless = get_recordset_json(classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "will.change."}]) + txt_delete_dummy = get_recordset_json(dummy_zone, "delete-txt", "TXT", [{"text": "test"}]) + mx_delete_dummy = get_recordset_json(dummy_zone, "delete-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) + mx_update_dummy = get_recordset_json(dummy_zone, "update-mx", "MX", [{"preference": 1, "exchange": "foo.bar."}]) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("delete.dummy.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update.dummy.", ttl=300, address="1.2.3.4"), + get_change_A_AAAA_json("Update.dummy.", change_type="DeleteRecordSet"), + get_change_CNAME_json("delete.ok.", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.193", ttl=300, ptrdname="has.changed."), + get_change_PTR_json("192.0.2.193", change_type="DeleteRecordSet"), + get_change_TXT_json("delete-txt.dummy.", change_type="DeleteRecordSet"), + get_change_MX_json("delete-mx.dummy.", change_type="DeleteRecordSet"), + get_change_MX_json("update-mx.dummy.", change_type="DeleteRecordSet"), + get_change_MX_json("update-mx.dummy.", preference=1000) + ] + } + + to_create = [rs_delete_dummy, rs_update_dummy, rs_delete_ok, rs_update_classless, txt_delete_dummy, mx_delete_dummy, mx_update_dummy] + to_delete = [] + + try: + for rs in to_create: + if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + create_client.wait_until_recordset_change_status(create_rs, 'Complete') + + # Configure ACL rules + add_ok_acl_rules(shared_zone_test_context, [ok_zone_acl]) + add_classless_acl_rules(shared_zone_test_context, [classless_zone_delegation_zone_acl]) + + result = dummy_client.create_batch_change(batch_change_input, status=202) + completed_batch = dummy_client.wait_until_batch_change_completed(result) + + record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + + to_delete = set(record_set_list) # set here because multiple items in the batch combine to one RS + + ## validate initial response + assert_that(result['comments'], is_("this is optional")) + assert_that(result['userName'], is_("dummy")) + assert_that(result['userId'], is_("dummy")) + assert_that(result['id'], is_not(none())) + assert_that(completed_batch['status'], is_("Complete")) + + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=0, record_name="delete", + input_name="delete.dummy.", record_data=None, record_type="AAAA", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=1, record_name="update", ttl=300, + input_name="update.dummy.", record_data="1.2.3.4") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=2, record_name="Update", + input_name="Update.dummy.", record_data=None, change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=ok_zone, index=3, record_name="delete", + input_name="delete.ok.", record_data=None, record_type="CNAME", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=classless_zone_delegation_zone, index=4, record_name="193", ttl=300, + input_name="192.0.2.193", 
record_data="has.changed.", record_type="PTR") + assert_change_success_response_values(result['changes'], zone=classless_zone_delegation_zone, index=5, record_name="193", + input_name="192.0.2.193", record_data=None, record_type="PTR", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=6, record_name="delete-txt", + input_name="delete-txt.dummy.", record_data=None, record_type="TXT", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=7, record_name="delete-mx", + input_name="delete-mx.dummy.", record_data=None, record_type="MX", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=8, record_name="update-mx", + input_name="update-mx.dummy.", record_data=None, record_type="MX", change_type="DeleteRecordSet") + assert_change_success_response_values(result['changes'], zone=dummy_zone, index=9, record_name="update-mx", + input_name="update-mx.dummy.", record_data={'preference': 1000, 'exchange': 'foo.bar.'}, record_type="MX") + + rs1 = dummy_client.get_recordset(record_set_list[0][0], record_set_list[0][1], status=404) + assert_that(rs1, is_("RecordSet with id " + record_set_list[0][1] + " does not exist in zone dummy.")) + + rs2 = dummy_client.get_recordset(record_set_list[1][0], record_set_list[1][1])['recordSet'] + expected2 = {'name': 'update', + 'zoneId': dummy_zone['id'], + 'type': 'A', + 'ttl': 300, + 'records': [{'address': '1.2.3.4'}]} + verify_recordset(rs2, expected2) + + # since this is an update, record_set_list[1] and record_set_list[2] are the same record + rs3 = dummy_client.get_recordset(record_set_list[2][0], record_set_list[2][1])['recordSet'] + verify_recordset(rs3, expected2) + + rs4 = dummy_client.get_recordset(record_set_list[3][0], record_set_list[3][1], status=404) + assert_that(rs4, is_("RecordSet with id " + record_set_list[3][1] + " does not exist in zone ok.")) + + rs5 = dummy_client.get_recordset(record_set_list[4][0], record_set_list[4][1])['recordSet'] + expected5 = {'name': '193', + 'zoneId': classless_zone_delegation_zone['id'], + 'type': 'PTR', + 'ttl': 300, + 'records': [{'ptrdname': 'has.changed.'}]} + verify_recordset(rs5, expected5) + + # since this is an update, record_set_list[5] and record_set_list[4] are the same record + rs6 = dummy_client.get_recordset(record_set_list[5][0], record_set_list[5][1])['recordSet'] + verify_recordset(rs6, expected5) + + rs7 = dummy_client.get_recordset(record_set_list[6][0], record_set_list[6][1], status=404) + assert_that(rs7, is_("RecordSet with id " + record_set_list[6][1] + " does not exist in zone dummy.")) + + rs8 = dummy_client.get_recordset(record_set_list[7][0], record_set_list[7][1], status=404) + assert_that(rs8, is_("RecordSet with id " + record_set_list[7][1] + " does not exist in zone dummy.")) + + rs9 = dummy_client.get_recordset(record_set_list[8][0], record_set_list[8][1])['recordSet'] + expected9 = {'name': 'update-mx', + 'zoneId': dummy_zone['id'], + 'type': 'MX', + 'ttl': 200, + 'records': [{'preference': 1000, 'exchange': 'foo.bar.'}]} + verify_recordset(rs9, expected9) + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs[0] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs[0] != dummy_zone['id']] + clear_zoneid_rsid_tuple_list(dummy_deletes, dummy_client) + clear_zoneid_rsid_tuple_list(ok_deletes, ok_client) + + # Clean up ACL rules + 
clear_ok_acl_rules(shared_zone_test_context)
+        clear_classless_acl_rules(shared_zone_test_context)
+
+
+def test_create_batch_change_without_comments_succeeds(shared_zone_test_context):
+    """
+    Test successfully creating a batch change without comments
+    Test successfully creating a batch using an inputName without a trailing dot, and that the
+    returned inputName is dotted
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+    parent_zone = shared_zone_test_context.parent_zone
+    batch_change_input = {
+        "changes": [
+            get_change_A_AAAA_json("parent.com", address="4.5.6.7"),
+        ]
+    }
+    to_delete = []
+
+    try:
+        result = client.create_batch_change(batch_change_input, status=202)
+        completed_batch = client.wait_until_batch_change_completed(result)
+        to_delete = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']]
+
+        assert_change_success_response_values(result['changes'], zone=parent_zone, index=0, record_name="parent.com.",
+                                              input_name="parent.com.", record_data="4.5.6.7")
+    finally:
+        clear_zoneid_rsid_tuple_list(to_delete, client)
+
+
+def test_create_batch_change_partial_failure(shared_zone_test_context):
+    """
+    Test batch change status with partial failures
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    batch_change_input = {
+        "comments": "this is optional",
+        "changes": [
+            get_change_A_AAAA_json("will-succeed.ok.", address="4.5.6.7"),
+            get_change_A_AAAA_json("direct-to-backend.ok.", address="4.5.6.7")  # this record will fail in processing
+        ]
+    }
+
+    to_delete = []
+
+    try:
+        dns_add(shared_zone_test_context.ok_zone, "direct-to-backend", 200, "A", "1.2.3.4")
+        result = client.create_batch_change(batch_change_input, status=202)
+        completed_batch = client.wait_until_batch_change_completed(result)
+        record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes'] if change['status'] == "Complete"]
+        to_delete = set(record_set_list)  # set here because multiple items in the batch combine to one RS
+
+        assert_that(completed_batch['status'], is_("PartialFailure"))
+
+    finally:
+        clear_zoneid_rsid_tuple_list(to_delete, client)
+        dns_delete(shared_zone_test_context.ok_zone, "direct-to-backend", "A")
+
+
+def test_create_batch_change_failed(shared_zone_test_context):
+    """
+    Test batch change status with all failures
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    batch_change_input = {
+        "comments": "this is optional",
+        "changes": [
+            get_change_A_AAAA_json("backend-foo.ok.", address="4.5.6.7"),
+            get_change_A_AAAA_json("backend-already-exists.ok.", address="4.5.6.7")
+        ]
+    }
+
+    try:
+        # both of these records already exist in the backend, but are not synced in zone
+        dns_add(shared_zone_test_context.ok_zone, "backend-foo", 200, "A", "1.2.3.4")
+        dns_add(shared_zone_test_context.ok_zone, "backend-already-exists", 200, "A", "1.2.3.4")
+        result = client.create_batch_change(batch_change_input, status=202)
+        completed_batch = client.wait_until_batch_change_completed(result)
+
+        assert_that(completed_batch['status'], is_("Failed"))
+
+    finally:
+        dns_delete(shared_zone_test_context.ok_zone, "backend-foo", "A")
+        dns_delete(shared_zone_test_context.ok_zone, "backend-already-exists", "A")
+
+
+def test_empty_batch_fails(shared_zone_test_context):
+    """
+    Test that creating a batch change without any changes fails
+    """
+
+    batch_change_input = {
+        "comments": "this should fail processing",
+        "changes": []
+    }
+
+    error = 
shared_zone_test_context.ok_vinyldns_client.create_batch_change(batch_change_input, status=422) + assert_that(error, is_("Batch change contained no changes. Batch change must have at least one change, up to a maximum of 20 changes.")) + + +def test_create_batch_exceeding_change_limit_fails(shared_zone_test_context): + """ + Test that creating a batch exceeding the change limit fails with ChangeLimitExceeded + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "changes": [] + } + for x in range(100): + batch_change_input['changes'].append(get_change_A_AAAA_json("ok.", address=("1.2.3." + str(x)))) + + errors = client.create_batch_change(batch_change_input, status=413) + + assert_that(errors, is_("Cannot request more than 20 changes in a single batch change request")) + + +def test_create_batch_change_without_changes_fails(shared_zone_test_context): + """ + Test creating a batch change with missing changes fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional" + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes"]) + + +def test_create_batch_change_with_missing_change_type_fails(shared_zone_test_context): + """ + Test creating a batch change with missing change type fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "inputName": "thing.thing.com.", + "type": "A", + "ttl": 200, + "record": { + "address": "4.5.6.7" + } + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.changeType"]) + + +def test_create_batch_change_with_invalid_change_type_fails(shared_zone_test_context): + """ + Test creating a batch change with invalid change type fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "InvalidChangeType", + "data": { + "inputName": "thing.thing.com.", + "type": "A", + "ttl": 200, + "record": { + "address": "4.5.6.7" + } + } + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Invalid ChangeInputType"]) + + +def test_create_batch_change_with_missing_input_name_fails(shared_zone_test_context): + """ + Test creating a batch change without an inputName fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "type": "A", + "ttl": 200, + "record": { + "address": "4.5.6.7" + } + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.inputName"]) + + +def test_create_batch_change_with_unsupported_record_type_fails(shared_zone_test_context): + """ + Test creating a batch change with unsupported record type fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "thing.thing.com.", + "type": "UNKNOWN", + "ttl": 200, + "record": { + "address": "4.5.6.7" + } + } + ] + } + + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Unsupported type 
UNKNOWN, valid types include: A, AAAA, CNAME, PTR, TXT, and MX"]) + + +def test_create_batch_change_with_invalid_record_type_fails(shared_zone_test_context): + """ + Test creating a batch change with invalid record type fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("thing.thing.com.", "B", address="4.5.6.7") + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Invalid RecordType"]) + + +def test_create_batch_change_with_missing_ttl_fails(shared_zone_test_context): + """ + Test creating a batch change without a ttl fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "thing.thing.com.", + "type": "A", + "record": { + "address": "4.5.6.7" + } + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.ttl"]) + + +def test_create_batch_change_with_missing_record_fails(shared_zone_test_context): + """ + Test creating a batch change without a record fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "thing.thing.com.", + "type": "A", + "ttl": 200 + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.record.address"]) + + +def test_create_batch_change_with_empty_record_fails(shared_zone_test_context): + """ + Test creating a batch change with empty record fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "thing.thing.com.", + "type": "A", + "ttl": 200, + "record": {} + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing A.address"]) + + +def test_create_batch_change_with_bad_A_record_data_fails(shared_zone_test_context): + """ + Test creating a batch change with malformed A record address fails + """ + client = shared_zone_test_context.ok_vinyldns_client + bad_A_data_request = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("thing.thing.com.", address="bad address") + ] + } + errors = client.create_batch_change(bad_A_data_request, status=400) + + assert_error(errors, error_messages=["A must be a valid IPv4 Address"]) + + +def test_create_batch_change_with_bad_AAAA_record_data_fails(shared_zone_test_context): + """ + Test creating a batch change with malformed AAAA record address fails + """ + client = shared_zone_test_context.ok_vinyldns_client + bad_AAAA_data_request = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("thing.thing.com.", record_type="AAAA", address="bad address") + ] + } + errors = client.create_batch_change(bad_AAAA_data_request, status=400) + + assert_error(errors, error_messages=["AAAA must be a valid IPv6 Address"]) + + +def test_create_batch_change_with_incorrect_CNAME_record_attribute_fails(shared_zone_test_context): + """ + Test creating a batch change with incorrect CNAME record attribute fails + """ + client = shared_zone_test_context.ok_vinyldns_client + 
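+    # "address" is not a valid data field for a CNAME change, so the request below should be rejected with "Missing CNAME.cname"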
bad_CNAME_data_request = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "bizz.bazz.", + "type": "CNAME", + "ttl": 200, + "record": { + "address": "buzz." + } + } + ] + } + errors = client.create_batch_change(bad_CNAME_data_request, status=400)['errors'] + + assert_that(errors, contains("Missing CNAME.cname")) + + +def test_create_batch_change_with_incorrect_PTR_record_attribute_fails(shared_zone_test_context): + """ + Test creating a batch change with incorrect PTR record attribute fails + """ + client = shared_zone_test_context.ok_vinyldns_client + bad_PTR_data_request = { + "comments": "this is optional", + "changes": [ + { + "changeType": "Add", + "inputName": "4.5.6.7", + "type": "PTR", + "ttl": 200, + "record": { + "address": "buzz." + } + } + ] + } + errors = client.create_batch_change(bad_PTR_data_request, status=400)['errors'] + + assert_that(errors, contains("Missing PTR.ptrdname")) + + +def test_create_batch_change_with_bad_CNAME_record_attribute_fails(shared_zone_test_context): + """ + Test creating a batch change with malformed CNAME record fails + """ + client = shared_zone_test_context.ok_vinyldns_client + bad_CNAME_data_request = { + "comments": "this is optional", + "changes": [ + get_change_CNAME_json(input_name="bizz.baz.", cname="s." + "s" * 256) + ] + } + errors = client.create_batch_change(bad_CNAME_data_request, status=400) + + assert_error(errors, error_messages=["CNAME domain name must not exceed 255 characters"]) + + +def test_create_batch_change_with_bad_PTR_record_attribute_fails(shared_zone_test_context): + """ + Test creating a batch change with malformed PTR record fails + """ + client = shared_zone_test_context.ok_vinyldns_client + bad_PTR_data_request = { + "comments": "this is optional", + "changes": [ + get_change_PTR_json("4.5.6.7", ptrdname="s" * 256) + ] + } + errors = client.create_batch_change(bad_PTR_data_request, status=400) + + assert_error(errors, error_messages=["PTR must be less than 255 characters"]) + + +def test_create_batch_change_with_missing_input_name_for_delete_fails(shared_zone_test_context): + """ + Test creating a batch change without an inputName for DeleteRecordSet fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "DeleteRecordSet", + "type": "A" + } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.inputName"]) + + +def test_create_batch_change_with_missing_record_type_for_delete_fails(shared_zone_test_context): + """ + Test creating a batch change without record type for DeleteRecordSet fails + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + { + "changeType": "DeleteRecordSet", + "inputName": "thing.thing.com." 
+ } + ] + } + errors = client.create_batch_change(batch_change_input, status=400) + + assert_error(errors, error_messages=["Missing BatchChangeInput.changes.type"]) + + +def test_mx_recordtype_cannot_have_invalid_preference(shared_zone_test_context): + """ + Test batch fails with bad mx preference + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + + batch_change_input_low = { + "comments": "this is optional", + "changes": [ + get_change_MX_json("too-small.ok.", preference=-1) + ] + } + + batch_change_input_high = { + "comments": "this is optional", + "changes": [ + get_change_MX_json("too-big.ok.", preference=65536) + ] + } + + error_low = ok_client.create_batch_change(batch_change_input_low, status=400) + error_high = ok_client.create_batch_change(batch_change_input_high, status=400) + + assert_error(error_low, error_messages=["MX.preference must be a 16 bit integer"]) + assert_error(error_high, error_messages=["MX.preference must be a 16 bit integer"]) + + +def test_create_batch_change_with_invalid_duplicate_record_names_fails(shared_zone_test_context): + """ + Test creating a batch change that contains a CNAME record and another record with the same name fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + rs_A_delete = get_recordset_json(shared_zone_test_context.ok_zone, "delete", "A", [{"address": "10.1.1.1"}]) + rs_CNAME_delete = get_recordset_json(shared_zone_test_context.ok_zone, "delete-this", "CNAME", [{"cname": "cname."}]) + + to_create = [rs_A_delete, rs_CNAME_delete] + to_delete = [] + + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("thing.ok.", address="4.5.6.7"), + get_change_CNAME_json("thing.ok"), + get_change_A_AAAA_json("delete.ok", change_type="DeleteRecordSet"), + get_change_CNAME_json("delete.ok"), + get_change_A_AAAA_json("delete-this.ok", address="4.5.6.7"), + get_change_CNAME_json("delete-this.ok", change_type="DeleteRecordSet") + ] + } + + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + assert_successful_change_in_error_response(response[0], input_name="thing.ok.", record_data="4.5.6.7") + assert_failed_change_in_error_response(response[1], input_name="thing.ok.", record_type="CNAME", record_data="test.com.", + error_messages=['Record Name "thing.ok." 
Not Unique In Batch Change:' + ' cannot have multiple "CNAME" records with the same name.'])
+        assert_successful_change_in_error_response(response[2], input_name="delete.ok.", change_type="DeleteRecordSet")
+        assert_successful_change_in_error_response(response[3], input_name="delete.ok.", record_type="CNAME", record_data="test.com.")
+        assert_successful_change_in_error_response(response[4], input_name="delete-this.ok.", record_data="4.5.6.7")
+        assert_successful_change_in_error_response(response[5], input_name="delete-this.ok.", change_type="DeleteRecordSet", record_type="CNAME")
+
+    finally:
+        clear_recordset_list(to_delete, client)
+
+
+def test_create_batch_change_with_readonly_user_fails(shared_zone_test_context):
+    """
+    Test creating a batch change with a read-only user fails (acl rules on zone)
+    """
+    dummy_client = shared_zone_test_context.dummy_vinyldns_client
+    ok_client = shared_zone_test_context.ok_vinyldns_client
+
+    acl_rule = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id'], recordMask='.*', recordTypes=['A', 'AAAA'])
+
+    delete_rs = get_recordset_json(shared_zone_test_context.ok_zone, "delete", "A", [{"address": "127.0.0.1"}], 300)
+    update_rs = get_recordset_json(shared_zone_test_context.ok_zone, "update", "A", [{"address": "127.0.0.1"}], 300)
+
+    batch_change_input = {
+        "comments": "this is optional",
+        "changes": [
+            get_change_A_AAAA_json("relative.ok.", address="4.5.6.7"),
+            get_change_A_AAAA_json("delete.ok.", change_type="DeleteRecordSet"),
+            get_change_A_AAAA_json("update.ok.", address="1.2.3.4"),
+            get_change_A_AAAA_json("update.ok.", change_type="DeleteRecordSet")
+        ]
+    }
+
+    to_delete = []
+    try:
+        add_ok_acl_rules(shared_zone_test_context, acl_rule)
+
+        for rs in [delete_rs, update_rs]:
+            create_result = ok_client.create_recordset(rs, status=202)
+            to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete'))
+
+        errors = dummy_client.create_batch_change(batch_change_input, status=400)
+
+        assert_failed_change_in_error_response(errors[0], input_name="relative.ok.", record_data="4.5.6.7", error_messages=['User \"dummy\" is not authorized.'])
+        assert_failed_change_in_error_response(errors[1], input_name="delete.ok.", change_type="DeleteRecordSet", record_data="4.5.6.7",
+                                               error_messages=['User "dummy" is not authorized.'])
+        assert_failed_change_in_error_response(errors[2], input_name="update.ok.", record_data="1.2.3.4", error_messages=['User \"dummy\" is not authorized.'])
+        assert_failed_change_in_error_response(errors[3], input_name="update.ok.", change_type="DeleteRecordSet", record_data=None,
+                                               error_messages=['User \"dummy\" is not authorized.'])
+    finally:
+        clear_ok_acl_rules(shared_zone_test_context)
+        clear_recordset_list(to_delete, ok_client)
+
+
+def test_a_recordtype_add_checks(shared_zone_test_context):
+    """
+    Test all add validations performed on A records submitted in batch changes
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    existing_a = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-A", "A", [{"address": "10.1.1.1"}], 100)
+    existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Cname", "CNAME", [{"cname": "cname.data."}], 100)
+
+    batch_change_input = {
+        "changes": [
+            # valid changes
+            get_change_A_AAAA_json("good-record.parent.com.", address="1.2.3.4"),
+            get_change_A_AAAA_json("summed-record.parent.com.", address="1.2.3.4"),
+            get_change_A_AAAA_json("summed-record.parent.com.", address="5.6.7.8"),
+
+            # input 
validation failures + get_change_A_AAAA_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, address="1.2.3.4"), + get_change_A_AAAA_json("reverse-zone.30.172.in-addr.arpa.", address="1.2.3.4"), + + # zone discovery failures + get_change_A_AAAA_json("no.subzone.parent.com.", address="1.2.3.4"), + get_change_A_AAAA_json("no.zone.at.all.", address="1.2.3.4"), + + # context validation failures + get_change_CNAME_json("cname-duplicate.parent.com."), + get_change_A_AAAA_json("cname-duplicate.parent.com.", address="1.2.3.4"), + get_change_A_AAAA_json("existing-a.parent.com.", address="1.2.3.4"), + get_change_A_AAAA_json("existing-cname.parent.com.", address="1.2.3.4"), + get_change_A_AAAA_json("user-add-unauthorized.dummy.", address="1.2.3.4") + ] + } + + to_create = [existing_a, existing_cname] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="good-record.parent.com.", record_data="1.2.3.4") + assert_successful_change_in_error_response(response[1], input_name="summed-record.parent.com.", record_data="1.2.3.4") + assert_successful_change_in_error_response(response[2], input_name="summed-record.parent.com.", record_data="5.6.7.8") + + # ttl, domain name, reverse zone input validations + assert_failed_change_in_error_response(response[3], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, record_data="1.2.3.4", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' + 'valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="reverse-zone.30.172.in-addr.arpa.", record_data="1.2.3.4", + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse-zone.30.172.in-addr.arpa.\" and type \"A\" is not allowed in a reverse zone."]) + + # zone discovery failures + assert_failed_change_in_error_response(response[5], input_name="no.subzone.parent.com.", record_data="1.2.3.4", + error_messages=['Zone Discovery Failed: zone for "no.subzone.parent.com." does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + assert_failed_change_in_error_response(response[6], input_name="no.zone.at.all.", record_data="1.2.3.4", + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS.']) + + # context validations: duplicate name failure is always on the cname + assert_failed_change_in_error_response(response[7], input_name="cname-duplicate.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_successful_change_in_error_response(response[8], input_name="cname-duplicate.parent.com.", record_data="1.2.3.4") + + # context validations: conflicting recordsets, unauthorized error + assert_failed_change_in_error_response(response[9], input_name="existing-a.parent.com.", record_data="1.2.3.4", + error_messages=["Record \"existing-a.parent.com.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + assert_failed_change_in_error_response(response[10], input_name="existing-cname.parent.com.", record_data="1.2.3.4", + error_messages=["CNAME Conflict: CNAME record names must be unique. Existing record with name \"existing-cname.parent.com.\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[11], input_name="user-add-unauthorized.dummy.", record_data="1.2.3.4", + error_messages=["User \"ok\" is not authorized."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_a_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on A records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + dummy_zone = shared_zone_test_context.dummy_zone + + rs_delete_ok = get_recordset_json(ok_zone, "delete", "A", [{'address': '1.1.1.1'}]) + rs_update_ok = get_recordset_json(ok_zone, "update", "A", [{'address': '1.1.1.1'}]) + rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized", "A", [{'address': '1.1.1.1'}]) + rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized", "A", [{'address': '1.1.1.1'}]) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes + get_change_A_AAAA_json("delete.ok.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update.ok.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update.ok.", ttl=300), + + # input validations failures + get_change_A_AAAA_json("$invalid.host.name.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("$another.invalid.host.name.", ttl=300), + get_change_A_AAAA_json("$another.invalid.host.name.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("another.reverse.zone.in-addr.arpa.", ttl=10), + get_change_A_AAAA_json("another.reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet"), + + # zone discovery failures + get_change_A_AAAA_json("zone.discovery.error.", change_type="DeleteRecordSet"), + + # context validation failures: record does not exist, not authorized + get_change_A_AAAA_json("non-existent.ok.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("delete-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update-unauthorized.dummy.", ttl=300) + ] + } + + to_create = [rs_delete_ok, rs_update_ok, 
rs_delete_dummy, rs_update_dummy] + to_delete = [] + + try: + for rs in to_create: + if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + # Confirm that record set doesn't already exist + ok_client.get_recordset(ok_zone['id'], 'non-existent', status=404) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # valid changes + assert_successful_change_in_error_response(response[0], input_name="delete.ok.", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name="update.ok.", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], input_name="update.ok.", ttl=300) + + # input validations failures + assert_failed_change_in_error_response(response[3], input_name="$invalid.host.name.", change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "$invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet", + error_messages=['Invalid Record Type In Reverse Zone: record with name "reverse.zone.in-addr.arpa." and type "A" is not allowed in a reverse zone.']) + assert_failed_change_in_error_response(response[5], input_name="$another.invalid.host.name.", ttl=300, + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[6], input_name="$another.invalid.host.name.", change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[7], input_name="another.reverse.zone.in-addr.arpa.", ttl=10, + error_messages=['Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." and type "A" is not allowed in a reverse zone.', + 'Invalid TTL: "10", must be a number between 30 and 2147483647.']) + assert_failed_change_in_error_response(response[8], input_name="another.reverse.zone.in-addr.arpa.", change_type="DeleteRecordSet", + error_messages=['Invalid Record Type In Reverse Zone: record with name "another.reverse.zone.in-addr.arpa." and type "A" is not allowed in a reverse zone.']) + + # zone discovery failures + assert_failed_change_in_error_response(response[9], input_name="zone.discovery.error.", change_type="DeleteRecordSet", + error_messages=['Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + + # context validation failures: record does not exist, not authorized + assert_failed_change_in_error_response(response[10], input_name="non-existent.ok.", change_type="DeleteRecordSet", + error_messages=['Record "non-existent.ok." 
Does Not Exist: cannot delete a record that does not exist.']) + assert_failed_change_in_error_response(response[11], input_name="delete-unauthorized.dummy.", change_type="DeleteRecordSet", + error_messages=['User \"ok\" is not authorized.']) + assert_failed_change_in_error_response(response[12], input_name="update-unauthorized.dummy.", change_type="DeleteRecordSet", + error_messages=['User \"ok\" is not authorized.']) + assert_failed_change_in_error_response(response[13], input_name="update-unauthorized.dummy.", ttl=300, error_messages=['User \"ok\" is not authorized.']) + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + clear_recordset_list(dummy_deletes, dummy_client) + clear_recordset_list(ok_deletes, ok_client) + + +def test_aaaa_recordtype_add_checks(shared_zone_test_context): + """ + Test all add validations performed on AAAA records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + + existing_aaaa = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-AAAA", "AAAA", [{"address": "1::1"}], 100) + existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Cname", "CNAME", [{"cname": "cname.data."}], 100) + + batch_change_input = { + "changes": [ + # valid changes + get_change_A_AAAA_json("good-record.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("summed-record.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("summed-record.parent.com.", record_type="AAAA", address="1::2"), + + # input validation failures + get_change_A_AAAA_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("reverse-zone.1.2.3.ip6.arpa.", record_type="AAAA", address="1::1"), + + # zone discovery failures + get_change_A_AAAA_json("no.subzone.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("no.zone.at.all.", record_type="AAAA", address="1::1"), + + # context validation failures + get_change_CNAME_json("cname-duplicate.parent.com."), + get_change_A_AAAA_json("cname-duplicate.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("existing-aaaa.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("existing-cname.parent.com.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("user-add-unauthorized.dummy.", record_type="AAAA", address="1::1") + ] + } + + to_create = [existing_aaaa, existing_cname] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="good-record.parent.com.", record_type="AAAA", record_data="1::1") + assert_successful_change_in_error_response(response[1], input_name="summed-record.parent.com.", record_type="AAAA", record_data="1::1") + assert_successful_change_in_error_response(response[2], input_name="summed-record.parent.com.", record_type="AAAA", record_data="1::2") + + # ttl, domain name, reverse zone input validations + assert_failed_change_in_error_response(response[3], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, record_type="AAAA", 
record_data="1::1", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' + 'valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="reverse-zone.1.2.3.ip6.arpa.", record_type="AAAA", record_data="1::1", + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse-zone.1.2.3.ip6.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) + + # zone discovery failures + assert_failed_change_in_error_response(response[5], input_name="no.subzone.parent.com.", record_type="AAAA", record_data="1::1", + error_messages=["Zone Discovery Failed: zone for \"no.subzone.parent.com.\" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS."]) + assert_failed_change_in_error_response(response[6], input_name="no.zone.at.all.", record_type="AAAA", record_data="1::1", + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS."]) + + # context validations: duplicate name failure (always on the cname), conflicting recordsets, unauthorized error + assert_failed_change_in_error_response(response[7], input_name="cname-duplicate.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_successful_change_in_error_response(response[8], input_name="cname-duplicate.parent.com.", record_type="AAAA", record_data="1::1") + assert_failed_change_in_error_response(response[9], input_name="existing-aaaa.parent.com.", record_type="AAAA", record_data="1::1", + error_messages=["Record \"existing-aaaa.parent.com.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + assert_failed_change_in_error_response(response[10], input_name="existing-cname.parent.com.", record_type="AAAA", record_data="1::1", + error_messages=["CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"existing-cname.parent.com.\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[11], input_name="user-add-unauthorized.dummy.", record_type="AAAA", record_data="1::1", + error_messages=["User \"ok\" is not authorized."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_aaaa_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on AAAA records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + dummy_zone = shared_zone_test_context.dummy_zone + + rs_delete_ok = get_recordset_json(ok_zone, "delete", "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], 200) + rs_update_ok = get_recordset_json(ok_zone, "update", "AAAA", [{"address": "1:1:1:1:1:1:1:1"}], 200) + rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized", "AAAA", [{"address": "1::1"}], 200) + rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized", "AAAA", [{"address": "1:2:3:4:5:6:7:8"}], 200) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes + get_change_A_AAAA_json("delete.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update.ok.", record_type="AAAA", ttl=300, address="1:2:3:4:5:6:7:8"), + get_change_A_AAAA_json("update.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + + # input validations failures + get_change_A_AAAA_json("invalid-name$.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("reverse.zone.in-addr.arpa.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("bad-ttl-and-invalid-name$-update.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("bad-ttl-and-invalid-name$-update.ok.", ttl=29, record_type="AAAA", address="1:2:3:4:5:6:7:8"), + + # zone discovery failures + get_change_A_AAAA_json("no.zone.at.all.", record_type="AAAA", change_type="DeleteRecordSet"), + + # context validation failures + get_change_A_AAAA_json("delete-nonexistent.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update-nonexistent.ok.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update-nonexistent.ok.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("delete-unauthorized.dummy.", record_type="AAAA", change_type="DeleteRecordSet"), + get_change_A_AAAA_json("update-unauthorized.dummy.", record_type="AAAA", address="1::1"), + get_change_A_AAAA_json("update-unauthorized.dummy.", record_type="AAAA", change_type="DeleteRecordSet") + ] + } + + to_create = [rs_delete_ok, rs_update_ok, rs_delete_dummy, rs_update_dummy] + to_delete = [] + + try: + for rs in to_create: + if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + # Confirm that record set doesn't already exist + ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="delete.ok.", record_type="AAAA", record_data=None, 
change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], ttl=300, input_name="update.ok.", record_type="AAAA", record_data="1:2:3:4:5:6:7:8") + assert_successful_change_in_error_response(response[2], input_name="update.ok.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet") + + # input validations failures: invalid input name, reverse zone error, invalid ttl + assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="reverse.zone.in-addr.arpa.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["Invalid Record Type In Reverse Zone: record with name \"reverse.zone.in-addr.arpa.\" and type \"AAAA\" is not allowed in a reverse zone."]) + assert_failed_change_in_error_response(response[5], input_name="bad-ttl-and-invalid-name$-update.ok.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "bad-ttl-and-invalid-name$-update.ok.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[6], input_name="bad-ttl-and-invalid-name$-update.ok.", ttl=29, record_type="AAAA", record_data="1:2:3:4:5:6:7:8", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$-update.ok.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + + # zone discovery failure + assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS."]) + + # context validation failures: record does not exist, not authorized + assert_failed_change_in_error_response(response[8], input_name="delete-nonexistent.ok.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_failed_change_in_error_response(response[9], input_name="update-nonexistent.ok.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[10], input_name="update-nonexistent.ok.", record_type="AAAA", record_data="1::1",) + assert_failed_change_in_error_response(response[11], input_name="delete-unauthorized.dummy.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[12], input_name="update-unauthorized.dummy.", record_type="AAAA", record_data="1::1", + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[13], input_name="update-unauthorized.dummy.", record_type="AAAA", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + clear_recordset_list(dummy_deletes, dummy_client) + clear_recordset_list(ok_deletes, ok_client) + + +def test_cname_recordtype_add_checks(shared_zone_test_context): + """ + Test all add validations performed on CNAME records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + + existing_forward = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Forward", "A", [{"address": "1.2.3.4"}], 100) + existing_reverse = get_recordset_json(shared_zone_test_context.classless_base_zone, "0", "PTR", [{"ptrdname": "test.com."}], 100) + existing_cname = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Cname", "CNAME", [{"cname": "cname.data."}], 100) + rs_a_to_cname_ok = get_recordset_json(ok_zone, "a-to-cname", "A", [{'address': '1.1.1.1'}]) + rs_cname_to_A_ok = get_recordset_json(ok_zone, "cname-to-a", "CNAME", [{'cname': 'test.com.'}]) + + batch_change_input = { + "changes": [ + # valid change + get_change_CNAME_json("forward-zone.parent.com."), + get_change_CNAME_json("reverse-zone.30.172.in-addr.arpa."), + + # valid changes - delete and add of same record name but different type + get_change_A_AAAA_json("a-to-cname.ok", change_type="DeleteRecordSet"), + get_change_CNAME_json("a-to-cname.ok"), + get_change_A_AAAA_json("cname-to-a.ok"), + get_change_CNAME_json("cname-to-a.ok", change_type="DeleteRecordSet"), + + # input validations failures + get_change_CNAME_json("bad-ttl-and-invalid-name$.parent.com.", ttl=29, cname="also$bad.name"), + + # zone discovery failure + get_change_CNAME_json("no.subzone.parent.com."), + + # cant be apex + get_change_CNAME_json("parent.com."), + + # context validation failures + get_change_PTR_json("192.0.2.15"), + get_change_CNAME_json("15.2.0.192.in-addr.arpa.", cname="duplicate.other.type.within.batch."), + 
get_change_CNAME_json("cname-duplicate.parent.com."), + get_change_CNAME_json("cname-duplicate.parent.com.", cname="duplicate.cname.type.within.batch."), + get_change_CNAME_json("existing-forward.parent.com."), + get_change_CNAME_json("existing-cname.parent.com."), + get_change_CNAME_json("0.2.0.192.in-addr.arpa.", cname="duplicate.in.db."), + get_change_CNAME_json("user-add-unauthorized.dummy.") + ] + } + + to_create = [existing_forward, existing_reverse, existing_cname, rs_a_to_cname_ok, rs_cname_to_A_ok] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="forward-zone.parent.com.", record_type="CNAME", record_data="test.com.") + assert_successful_change_in_error_response(response[1], input_name="reverse-zone.30.172.in-addr.arpa.", record_type="CNAME", record_data="test.com.") + + # successful changes - delete and add of same record name but different type + assert_successful_change_in_error_response(response[2], input_name="a-to-cname.ok.", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[3], input_name="a-to-cname.ok.", record_type="CNAME", record_data="test.com.") + assert_successful_change_in_error_response(response[4], input_name="cname-to-a.ok.") + assert_successful_change_in_error_response(response[5], input_name="cname-to-a.ok.", record_type="CNAME", change_type="DeleteRecordSet") + + # ttl, domain name, data + assert_failed_change_in_error_response(response[6], input_name="bad-ttl-and-invalid-name$.parent.com.", ttl=29, record_type="CNAME", record_data="also$bad.name.", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$.parent.com.", ' + 'valid domain names must be letters, numbers, and hyphens, ' + 'joined by dots, and terminated with a dot.', + 'Invalid domain name: "also$bad.name.", ' + 'valid domain names must be letters, numbers, and hyphens, ' + 'joined by dots, and terminated with a dot.']) + # zone discovery failure + assert_failed_change_in_error_response(response[7], input_name="no.subzone.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Zone Discovery Failed: zone for \"no.subzone.parent.com.\" does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS."]) + + # CNAME cant be apex + assert_failed_change_in_error_response(response[8], input_name="parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record \"parent.com.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + + # context validations: duplicates in batch + assert_successful_change_in_error_response(response[9], input_name="192.0.2.15", record_type="PTR", record_data="test.com.") + assert_failed_change_in_error_response(response[10], input_name="15.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="duplicate.other.type.within.batch.", + error_messages=["Record Name \"15.2.0.192.in-addr.arpa.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + + assert_failed_change_in_error_response(response[11], input_name="cname-duplicate.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_failed_change_in_error_response(response[12], input_name="cname-duplicate.parent.com.", record_type="CNAME", record_data="duplicate.cname.type.within.batch.", + error_messages=["Record Name \"cname-duplicate.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + + # context validations: existing recordsets pre-request, unauthorized, failure on duplicate add + assert_failed_change_in_error_response(response[13], input_name="existing-forward.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["CNAME Conflict: CNAME record names must be unique. Existing record with name \"existing-forward.parent.com.\" and type \"A\" conflicts with this record."]) + assert_failed_change_in_error_response(response[14], input_name="existing-cname.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record \"existing-cname.parent.com.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.", + "CNAME Conflict: CNAME record names must be unique. Existing record with name \"existing-cname.parent.com.\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[15], input_name="0.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="duplicate.in.db.", + error_messages=["CNAME Conflict: CNAME record names must be unique. 
Existing record with name \"0.2.0.192.in-addr.arpa.\" and type \"PTR\" conflicts with this record."]) + assert_failed_change_in_error_response(response[16], input_name="user-add-unauthorized.dummy.", record_type="CNAME", record_data="test.com.", + error_messages=["User \"ok\" is not authorized."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_cname_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on CNAME records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + dummy_zone = shared_zone_test_context.dummy_zone + classless_base_zone = shared_zone_test_context.classless_base_zone + + rs_delete_ok = get_recordset_json(ok_zone, "delete", "CNAME", [{'cname': 'test.com.'}]) + rs_update_ok = get_recordset_json(ok_zone, "update", "CNAME", [{'cname': 'test.com.'}]) + rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized", "CNAME", [{'cname': 'test.com.'}]) + rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized", "CNAME", [{'cname': 'test.com.'}]) + rs_delete_base = get_recordset_json(classless_base_zone, "200", "CNAME", [{'cname': '200.192/30.2.0.192.in-addr.arpa.'}]) + rs_update_base = get_recordset_json(classless_base_zone, "201", "CNAME", [{'cname': '201.192/30.2.0.192.in-addr.arpa.'}]) + rs_update_duplicate_add = get_recordset_json(shared_zone_test_context.parent_zone, "Existing-Cname2", "CNAME", [{"cname": "cname.data."}], 100) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes - forward zone + get_change_CNAME_json("delete.ok.", change_type="DeleteRecordSet"), + get_change_CNAME_json("update.ok.", change_type="DeleteRecordSet"), + get_change_CNAME_json("update.ok.", ttl=300), + + # valid changes - reverse zone + get_change_CNAME_json("200.2.0.192.in-addr.arpa.", change_type="DeleteRecordSet"), + get_change_CNAME_json("201.2.0.192.in-addr.arpa.", change_type="DeleteRecordSet"), + get_change_CNAME_json("201.2.0.192.in-addr.arpa.", ttl=300), + + # input validation failures + get_change_CNAME_json("$invalid.host.name.", change_type="DeleteRecordSet"), + get_change_CNAME_json("$another.invalid.host.name", change_type="DeleteRecordSet"), + get_change_CNAME_json("$another.invalid.host.name", ttl=20, cname="$another.invalid.cname."), + + # zone discovery failures + get_change_CNAME_json("zone.discovery.error.", change_type="DeleteRecordSet"), + + # context validation failures: record does not exist, not authorized, failure on update with multiple adds + get_change_CNAME_json("non-existent-delete.ok.", change_type="DeleteRecordSet"), + get_change_CNAME_json("non-existent-update.ok.", change_type="DeleteRecordSet"), + get_change_CNAME_json("non-existent-update.ok."), + get_change_CNAME_json("delete-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_CNAME_json("update-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_CNAME_json("update-unauthorized.dummy.", ttl=300), + get_change_CNAME_json("existing-cname2.parent.com.", change_type="DeleteRecordSet"), + get_change_CNAME_json("existing-cname2.parent.com."), + get_change_CNAME_json("existing-cname2.parent.com.", ttl=350) + ] + } + + to_create = [rs_delete_ok, rs_update_ok, rs_delete_dummy, rs_update_dummy, rs_delete_base, rs_update_base, rs_update_duplicate_add] + to_delete = [] + + try: + for rs in to_create: 
+ if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + # Confirm that record set doesn't already exist + ok_client.get_recordset(ok_zone['id'], 'non-existent', status=404) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # valid changes - forward zone + assert_successful_change_in_error_response(response[0], input_name="delete.ok.", record_type="CNAME", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name="update.ok.", record_type="CNAME", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], input_name="update.ok.", record_type="CNAME", ttl=300, record_data="test.com.") + + # valid changes - reverse zone + assert_successful_change_in_error_response(response[3], input_name="200.2.0.192.in-addr.arpa.", record_type="CNAME", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[4], input_name="201.2.0.192.in-addr.arpa.", record_type="CNAME", change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[5], input_name="201.2.0.192.in-addr.arpa.", record_type="CNAME", ttl=300, record_data="test.com.") + + # ttl, domain name, data + assert_failed_change_in_error_response(response[6], input_name="$invalid.host.name.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "$invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[7], input_name="$another.invalid.host.name.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[8], input_name="$another.invalid.host.name.", ttl=20, record_type="CNAME", record_data="$another.invalid.cname.", + error_messages=['Invalid TTL: "20", must be a number between 30 and 2147483647.', + 'Invalid domain name: "$another.invalid.host.name.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.', + 'Invalid domain name: "$another.invalid.cname.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + + # zone discovery failures + assert_failed_change_in_error_response(response[9], input_name="zone.discovery.error.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['Zone Discovery Failed: zone for "zone.discovery.error." does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + + # context validation failures: record does not exist, not authorized + assert_failed_change_in_error_response(response[10], input_name="non-existent-delete.ok.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['Record "non-existent-delete.ok." Does Not Exist: cannot delete a record that does not exist.']) + assert_failed_change_in_error_response(response[11], input_name="non-existent-update.ok.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['Record "non-existent-update.ok." 
Does Not Exist: cannot delete a record that does not exist.']) + assert_successful_change_in_error_response(response[12], input_name="non-existent-update.ok.", record_type="CNAME", record_data="test.com.") + assert_failed_change_in_error_response(response[13], input_name="delete-unauthorized.dummy.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['User "ok" is not authorized.']) + assert_failed_change_in_error_response(response[14], input_name="update-unauthorized.dummy.", record_type="CNAME", change_type="DeleteRecordSet", + error_messages=['User "ok" is not authorized.']) + assert_failed_change_in_error_response(response[15], input_name="update-unauthorized.dummy.", record_type="CNAME", ttl=300, record_data="test.com.", error_messages=['User "ok" is not authorized.']) + assert_successful_change_in_error_response(response[16], input_name="existing-cname2.parent.com.", record_type="CNAME", change_type="DeleteRecordSet") + assert_failed_change_in_error_response(response[17], input_name="existing-cname2.parent.com.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"existing-cname2.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + assert_failed_change_in_error_response(response[18], input_name="existing-cname2.parent.com.", record_type="CNAME", record_data="test.com.", ttl=350, + error_messages=["Record Name \"existing-cname2.parent.com.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + clear_recordset_list(dummy_deletes, dummy_client) + clear_recordset_list(ok_deletes, ok_client) + + +def test_ptr_recordtype_auth_checks(shared_zone_test_context): + """ + Test all authorization validations performed on PTR records submitted in batch changes + """ + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_client = shared_zone_test_context.ok_vinyldns_client + + no_auth_ipv4 = get_recordset_json(shared_zone_test_context.classless_base_zone, "25", "PTR", [{"ptrdname": "ptrdname.data."}], 200) + no_auth_ipv6 = get_recordset_json(shared_zone_test_context.ip6_reverse_zone, "4.3.2.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "ptrdname.data."}], 200) + + batch_change_input = { + "changes": [ + get_change_PTR_json("192.0.2.5", ptrdname="not.authorized.ipv4.ptr.base."), + get_change_PTR_json("192.0.2.196", ptrdname="not.authorized.ipv4.ptr.classless.delegation."), + get_change_PTR_json("fd69:27cc:fe91::1234", ptrdname="not.authorized.ipv6.ptr."), + get_change_PTR_json("192.0.2.25", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91::1234", change_type="DeleteRecordSet") + ] + } + + to_create = [no_auth_ipv4, no_auth_ipv6] + to_delete = [] + + try: + for create_json in to_create: + create_result = ok_client.create_recordset(create_json, status=202) + to_delete.append(ok_client.wait_until_recordset_change_status(create_result, 'Complete')) + + errors = dummy_client.create_batch_change(batch_change_input, status=400) + + assert_failed_change_in_error_response(errors[0], input_name="192.0.2.5", record_type="PTR", record_data="not.authorized.ipv4.ptr.base.", + error_messages=["User \"dummy\" is not authorized."]) + assert_failed_change_in_error_response(errors[1], input_name="192.0.2.196", record_type="PTR", 
record_data="not.authorized.ipv4.ptr.classless.delegation.", + error_messages=["User \"dummy\" is not authorized."]) + assert_failed_change_in_error_response(errors[2], input_name="fd69:27cc:fe91::1234", record_type="PTR", record_data="not.authorized.ipv6.ptr.", + error_messages=["User \"dummy\" is not authorized."]) + assert_failed_change_in_error_response(errors[3], input_name="192.0.2.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"dummy\" is not authorized."]) + assert_failed_change_in_error_response(errors[4], input_name="fd69:27cc:fe91::1234", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"dummy\" is not authorized."]) + finally: + clear_recordset_list(to_delete, ok_client) + + +def test_ipv4_ptr_recordtype_add_checks(shared_zone_test_context): + """ + Perform all add, non-authorization validations performed on IPv4 PTR records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + + existing_ipv4 = get_recordset_json(shared_zone_test_context.classless_zone_delegation_zone, "193", "PTR", [{"ptrdname": "ptrdname.data."}]) + existing_cname = get_recordset_json(shared_zone_test_context.classless_base_zone, "199", "CNAME", [{"cname": "cname.data."}], 300) + + batch_change_input = { + "changes": [ + # valid change + get_change_PTR_json("192.0.2.44", ptrdname="base.vinyldns"), + get_change_PTR_json("192.0.2.198", ptrdname="delegated.vinyldns"), + + # input validation failures + get_change_PTR_json("invalidip.111."), + get_change_PTR_json("4.5.6.7", ttl=29, ptrdname="-1.2.3.4"), + + # duplicate PTR name failures + get_change_PTR_json("192.0.2.197"), + get_change_PTR_json("192.0.2.197", ptrdname="ptrdata."), + + # delegated and non-delegated PTR duplicate name checks + get_change_PTR_json("192.0.2.196"), # delegated zone + get_change_CNAME_json("196.2.0.192.in-addr.arpa"), # non-delegated zone + get_change_CNAME_json("196.192/30.2.0.192.in-addr.arpa"), # delegated zone + + get_change_PTR_json("192.0.2.55"), # non-delegated zone + get_change_CNAME_json("55.2.0.192.in-addr.arpa"), # non-delegated zone + get_change_CNAME_json("55.192/30.2.0.192.in-addr.arpa"), # delegated zone + + # zone discovery failure + get_change_PTR_json("192.0.1.192"), + + # context validation failures + get_change_PTR_json("192.0.2.193", ptrdname="existing-ptr."), + get_change_PTR_json("192.0.2.199", ptrdname="existing-cname.") + ] + } + + to_create = [existing_ipv4, existing_cname] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="192.0.2.44", record_type="PTR", record_data="base.vinyldns.") + assert_successful_change_in_error_response(response[1], input_name="192.0.2.198", record_type="PTR", record_data="delegated.vinyldns.") + + # input validation failures: invalid ip, ttl, data + assert_failed_change_in_error_response(response[2], input_name="invalidip.111.", record_type="PTR", record_data="test.com.", + error_messages=['Invalid IP address: "invalidip.111.".']) + assert_failed_change_in_error_response(response[3], input_name="4.5.6.7", ttl=29, record_type="PTR", record_data="-1.2.3.4.", + error_messages=['Invalid TTL: "29", must be a number between 30 and 
2147483647.', + 'Invalid domain name: "-1.2.3.4.", ' + 'valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + + # duplicate names always fail for ptr + assert_failed_change_in_error_response(response[4], input_name="192.0.2.197", record_type="PTR", record_data="test.com.", + error_messages=['Record Name "192.0.2.197" Not Unique In Batch Change:' + ' cannot have multiple "PTR" records with the same name.']) + assert_failed_change_in_error_response(response[5], input_name="192.0.2.197", record_type="PTR", record_data="ptrdata.", + error_messages=['Record Name "192.0.2.197" Not Unique In Batch Change:' + ' cannot have multiple "PTR" records with the same name.']) + + # delegated and non-delegated PTR duplicate name checks + assert_successful_change_in_error_response(response[6], input_name="192.0.2.196", record_type="PTR", record_data="test.com.") + assert_successful_change_in_error_response(response[7], input_name="196.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.") + assert_failed_change_in_error_response(response[8], input_name="196.192/30.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.", + error_messages=['Record Name "196.192/30.2.0.192.in-addr.arpa." Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) + assert_successful_change_in_error_response(response[9], input_name="192.0.2.55", record_type="PTR", record_data="test.com.") + assert_failed_change_in_error_response(response[10], input_name="55.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.", + error_messages=['Record Name "55.2.0.192.in-addr.arpa." Not Unique In Batch Change: cannot have multiple "CNAME" records with the same name.']) + assert_successful_change_in_error_response(response[11], input_name="55.192/30.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="test.com.") + + # zone discovery failure + assert_failed_change_in_error_response(response[12], input_name="192.0.1.192", record_type="PTR", record_data="test.com.", + error_messages=['Zone Discovery Failed: zone for "192.0.1.192" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + + # context validations: existing cname recordset + assert_failed_change_in_error_response(response[13], input_name="192.0.2.193", record_type="PTR", record_data="existing-ptr.", + error_messages=['Record "192.0.2.193" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add.']) + assert_failed_change_in_error_response(response[14], input_name="192.0.2.199", record_type="PTR", record_data="existing-cname.", + error_messages=['CNAME Conflict: CNAME record names must be unique. 
Existing record with name "192.0.2.199" and type "CNAME" conflicts with this record.']) + + finally: + clear_recordset_list(to_delete, client) + + +def test_ipv4_ptr_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on ipv4 PTR records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + base_zone = shared_zone_test_context.classless_base_zone + delegated_zone = shared_zone_test_context.classless_zone_delegation_zone + + rs_delete_ipv4 = get_recordset_json(base_zone, "25", "PTR", [{"ptrdname": "delete.ptr."}], 200) + rs_update_ipv4 = get_recordset_json(delegated_zone, "193", "PTR", [{"ptrdname": "update.ptr."}], 200) + rs_replace_cname = get_recordset_json(base_zone, "21", "CNAME", [{"cname": "replace.cname."}], 200) + rs_replace_ptr = get_recordset_json(base_zone, "17", "PTR", [{"ptrdname": "replace.ptr."}], 200) + rs_update_ipv4_fail = get_recordset_json(base_zone, "9", "PTR", [{"ptrdname": "failed-update.ptr."}], 200) + rs_update_ipv4_double_update = get_recordset_json(shared_zone_test_context.classless_base_zone, "50", "PTR", [{"ptrdname": "ptrdname.data."}]) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes ipv4 + get_change_PTR_json("192.0.2.25", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.193", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json("192.0.2.193", change_type="DeleteRecordSet"), + + # valid changes: delete and add of same record name but different type + get_change_CNAME_json("21.2.0.192.in-addr.arpa", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.21", ptrdname="replace-cname.ptr."), + get_change_CNAME_json("17.2.0.192.in-addr.arpa", cname="replace-ptr.cname."), + get_change_PTR_json("192.0.2.17", change_type="DeleteRecordSet"), + + # input validations failures + get_change_PTR_json("1.1.1", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.", ttl=29, ptrdname="failed-update$.ptr"), + + # zone discovery failures + get_change_PTR_json("192.0.1.25", change_type="DeleteRecordSet"), + + # context validation failures + get_change_PTR_json("192.0.2.199", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.200", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json("192.0.2.200", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.50", change_type="DeleteRecordSet"), + get_change_PTR_json("192.0.2.50"), + get_change_PTR_json("192.0.2.50", ttl=350) + ] + } + + to_create = [rs_delete_ipv4, rs_update_ipv4, rs_replace_cname, rs_replace_ptr, rs_update_ipv4_fail, rs_update_ipv4_double_update] + to_delete = [] + + try: + for rs in to_create: + create_rs = ok_client.create_recordset(rs, status=202) + to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="192.0.2.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], ttl=300, input_name="192.0.2.193", record_type="PTR", record_data="has-updated.ptr.") + assert_successful_change_in_error_response(response[2], input_name="192.0.2.193", record_type="PTR", record_data=None, change_type="DeleteRecordSet") + + #successful changes: add and delete of same record name but 
different type + assert_successful_change_in_error_response(response[3], input_name="21.2.0.192.in-addr.arpa.", record_type="CNAME", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[4], input_name="192.0.2.21", record_type="PTR", record_data="replace-cname.ptr.") + assert_successful_change_in_error_response(response[5], input_name="17.2.0.192.in-addr.arpa.", record_type="CNAME", record_data="replace-ptr.cname.") + assert_successful_change_in_error_response(response[6], input_name="192.0.2.17", record_type="PTR", record_data=None, change_type="DeleteRecordSet") + + # input validations failures: invalid IP, ttl, and record data + assert_failed_change_in_error_response(response[7], input_name="1.1.1", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid IP address: "1.1.1".']) + assert_failed_change_in_error_response(response[8], input_name="192.0.2.", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid IP address: "192.0.2.".']) + assert_failed_change_in_error_response(response[9], ttl=29, input_name="192.0.2.", record_type="PTR", record_data="failed-update$.ptr.", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid IP address: "192.0.2.".', + 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + + # zone discovery failure + assert_failed_change_in_error_response(response[10], input_name="192.0.1.25", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Zone Discovery Failed: zone for \"192.0.1.25\" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS."]) + + # context validation failures: record does not exist, failure on update with double add + assert_failed_change_in_error_response(response[11], input_name="192.0.2.199", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"192.0.2.199\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[12], ttl=300, input_name="192.0.2.200", record_type="PTR", record_data="has-updated.ptr.") + assert_failed_change_in_error_response(response[13], input_name="192.0.2.200", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"192.0.2.200\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[14], input_name="192.0.2.50", record_type="PTR", change_type="DeleteRecordSet"), + assert_failed_change_in_error_response(response[15], input_name="192.0.2.50", record_type="PTR", record_data="test.com.", + error_messages=['Record Name "192.0.2.50" Not Unique In Batch Change: cannot have multiple "PTR" records with the same name.']) + assert_failed_change_in_error_response(response[16], input_name="192.0.2.50", record_type="PTR", record_data="test.com.", ttl=350, + error_messages=['Record Name "192.0.2.50" Not Unique In Batch Change: cannot have multiple "PTR" records with the same name.']) + + finally: + clear_recordset_list(to_delete, ok_client) + + +def test_ipv6_ptr_recordtype_add_checks(shared_zone_test_context): + """ + Test all add, non-authorization validations performed on IPv6 PTR records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + + existing_ptr = 
get_recordset_json(shared_zone_test_context.ip6_reverse_zone, "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "test.com."}], 100) + + batch_change_input = { + "changes": [ + # valid change + get_change_PTR_json("fd69:27cc:fe91::1234"), + + # input validation failures + get_change_PTR_json("fd69:27cc:fe91::abe", ttl=29), + get_change_PTR_json("fd69:27cc:fe91::bae", ptrdname="$malformed.hostname."), + get_change_PTR_json("fd69:27cc:fe91de::ab", ptrdname="malformed.ip.address."), + + # zone discovery failure + get_change_PTR_json("fedc:ba98:7654::abc", ptrdname="zone.discovery.error."), + + # context validation failures + get_change_PTR_json("fd69:27cc:fe91::abc", ptrdname="duplicate.record1."), + get_change_PTR_json("fd69:27cc:fe91::abc", ptrdname="duplicate.record2."), + get_change_PTR_json("fd69:27cc:fe91::ffff", ptrdname="existing.ptr.") + ] + } + + to_create = [existing_ptr] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="fd69:27cc:fe91::1234", record_type="PTR", record_data="test.com.") + + # independent validations: bad TTL, malformed host name/IP address, duplicate record + assert_failed_change_in_error_response(response[1], input_name="fd69:27cc:fe91::abe", ttl=29, record_type="PTR", record_data="test.com.", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) + assert_failed_change_in_error_response(response[2], input_name="fd69:27cc:fe91::bae", record_type="PTR", record_data="$malformed.hostname.", + error_messages=['Invalid domain name: "$malformed.hostname.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[3], input_name="fd69:27cc:fe91de::ab", record_type="PTR", record_data="malformed.ip.address.", + error_messages=['Invalid IP address: "fd69:27cc:fe91de::ab".']) + + # zone discovery failure + assert_failed_change_in_error_response(response[4], input_name="fedc:ba98:7654::abc", record_type="PTR", record_data="zone.discovery.error.", + error_messages=["Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS."]) + + # context validations: duplicates in batch, existing record sets pre-request + assert_failed_change_in_error_response(response[5], input_name="fd69:27cc:fe91::abc", record_type="PTR", record_data="duplicate.record1.", + error_messages=["Record Name \"fd69:27cc:fe91::abc\" Not Unique In Batch Change: cannot have multiple \"PTR\" records with the same name."]) + assert_failed_change_in_error_response(response[6], input_name="fd69:27cc:fe91::abc", record_type="PTR", record_data="duplicate.record2.", + error_messages=["Record Name \"fd69:27cc:fe91::abc\" Not Unique In Batch Change: cannot have multiple \"PTR\" records with the same name."]) + + assert_failed_change_in_error_response(response[7], input_name="fd69:27cc:fe91::ffff", record_type="PTR", record_data="existing.ptr.", + error_messages=["Record \"fd69:27cc:fe91::ffff\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_ipv6_ptr_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on ipv6 PTR records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + ip6_reverse_zone = shared_zone_test_context.ip6_reverse_zone + + rs_delete_ipv6 = get_recordset_json(ip6_reverse_zone, "f.f.f.f.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "delete.ptr."}], 200) + rs_update_ipv6 = get_recordset_json(ip6_reverse_zone, "2.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "update.ptr."}], 200) + rs_update_ipv6_fail = get_recordset_json(ip6_reverse_zone, "8.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "failed-update.ptr."}], 200) + rs_doubly_updated = get_recordset_json(shared_zone_test_context.ip6_reverse_zone, "2.2.1.1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", "PTR", [{"ptrdname": "test.com."}], 100) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes ipv6 + get_change_PTR_json("fd69:27cc:fe91::ffff", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91::62", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json("fd69:27cc:fe91::62", change_type="DeleteRecordSet"), + + # input validations failures + get_change_PTR_json("fd69:27cc:fe91de::ab", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91de::ba", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91de::ba", ttl=29, ptrdname="failed-update$.ptr"), + + # zone discovery failures + get_change_PTR_json("fedc:ba98:7654::abc", change_type="DeleteRecordSet"), + + # context validation failures + get_change_PTR_json("fd69:27cc:fe91::60", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91::65", ttl=300, ptrdname="has-updated.ptr."), + get_change_PTR_json("fd69:27cc:fe91::65", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91::1122", change_type="DeleteRecordSet"), + get_change_PTR_json("fd69:27cc:fe91::1122"), + get_change_PTR_json("fd69:27cc:fe91::1122", ttl=350) + + ] + } + + to_create = [rs_delete_ipv6, rs_update_ipv6, rs_update_ipv6_fail, rs_doubly_updated] + to_delete = [] + + try: + for rs in to_create: + create_rs = ok_client.create_recordset(rs, status=202) + to_delete.append(ok_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # successful changes + 
assert_successful_change_in_error_response(response[0], input_name="fd69:27cc:fe91::ffff", record_type="PTR", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], ttl=300, input_name="fd69:27cc:fe91::62", record_type="PTR", record_data="has-updated.ptr.") + assert_successful_change_in_error_response(response[2], input_name="fd69:27cc:fe91::62", record_type="PTR", record_data=None, change_type="DeleteRecordSet") + + # input validations failures: invalid IP, ttl, and record data + assert_failed_change_in_error_response(response[3], input_name="fd69:27cc:fe91de::ab", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid IP address: "fd69:27cc:fe91de::ab".']) + assert_failed_change_in_error_response(response[4], input_name="fd69:27cc:fe91de::ba", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=['Invalid IP address: "fd69:27cc:fe91de::ba".']) + assert_failed_change_in_error_response(response[5], ttl=29, input_name="fd69:27cc:fe91de::ba", record_type="PTR", record_data="failed-update$.ptr.", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid IP address: "fd69:27cc:fe91de::ba".', + 'Invalid domain name: "failed-update$.ptr.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + + # zone discovery failure + assert_failed_change_in_error_response(response[6], input_name="fedc:ba98:7654::abc", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Zone Discovery Failed: zone for \"fedc:ba98:7654::abc\" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS."]) + + # context validation failures: record does not exist, failure on update with double add + assert_failed_change_in_error_response(response[7], input_name="fd69:27cc:fe91::60", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"fd69:27cc:fe91::60\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[8], ttl=300, input_name="fd69:27cc:fe91::65", record_type="PTR", record_data="has-updated.ptr.") + assert_failed_change_in_error_response(response[9], input_name="fd69:27cc:fe91::65", record_type="PTR", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"fd69:27cc:fe91::65\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[10], input_name="fd69:27cc:fe91::1122", record_type="PTR", change_type="DeleteRecordSet") + assert_failed_change_in_error_response(response[11], input_name="fd69:27cc:fe91::1122", record_type="PTR", record_data="test.com.", + error_messages=["Record Name \"fd69:27cc:fe91::1122\" Not Unique In Batch Change: cannot have multiple \"PTR\" records with the same name."]) + assert_failed_change_in_error_response(response[12], input_name="fd69:27cc:fe91::1122", record_type="PTR", record_data="test.com.", ttl=350, + error_messages=["Record Name \"fd69:27cc:fe91::1122\" Not Unique In Batch Change: cannot have multiple \"PTR\" records with the same name."]) + + + finally: + clear_recordset_list(to_delete, ok_client) + + +def test_txt_recordtype_add_checks(shared_zone_test_context): + """ + Test all add validations performed on TXT records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + + existing_txt = 
get_recordset_json(shared_zone_test_context.ok_zone, "existing-txt", "TXT", [{"text": "test"}], 100) + existing_cname = get_recordset_json(shared_zone_test_context.ok_zone, "existing-cname", "CNAME", [{"cname": "test."}], 100) + + batch_change_input = { + "changes": [ + # valid change + get_change_TXT_json("good-record.ok."), + + # input validation failures + get_change_TXT_json("bad-ttl-and-invalid-name$.ok.", ttl=29), + get_change_TXT_json("summed-fail.ok."), + get_change_TXT_json("summed-fail.ok.", text="test2"), + + # zone discovery failures + get_change_TXT_json("no.subzone.ok."), + get_change_TXT_json("no.zone.at.all."), + + # context validation failures + get_change_CNAME_json("cname-duplicate.ok."), + get_change_TXT_json("cname-duplicate.ok."), + get_change_TXT_json("existing-txt.ok."), + get_change_TXT_json("existing-cname.ok."), + get_change_TXT_json("user-add-unauthorized.dummy.") + ] + } + + to_create = [existing_txt, existing_cname] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="good-record.ok.", record_type="TXT", record_data="test") + + # ttl, domain name, record data + assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.ok.", ttl=29, record_type="TXT", record_data="test", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$.ok.", ' + 'valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[2], input_name="summed-fail.ok.", record_type="TXT", record_data="test", + error_messages=['Record Name "summed-fail.ok." Not Unique In Batch Change: cannot have multiple "TXT" records with the same name.']) + assert_failed_change_in_error_response(response[3], input_name="summed-fail.ok.", record_type="TXT", record_data="test2", + error_messages=['Record Name "summed-fail.ok." Not Unique In Batch Change: cannot have multiple "TXT" records with the same name.']) + + # zone discovery failures + assert_failed_change_in_error_response(response[4], input_name="no.subzone.ok.", record_type="TXT", record_data="test", + error_messages=['Zone Discovery Failed: zone for "no.subzone.ok." does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="TXT", record_data="test", + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS.']) + + # context validations: cname duplicate + assert_failed_change_in_error_response(response[6], input_name="cname-duplicate.ok.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + + # context validations: conflicting recordsets, unauthorized error + assert_failed_change_in_error_response(response[8], input_name="existing-txt.ok.", record_type="TXT", record_data="test", + error_messages=["Record \"existing-txt.ok.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + assert_failed_change_in_error_response(response[9], input_name="existing-cname.ok.", record_type="TXT", record_data="test", + error_messages=["CNAME Conflict: CNAME record names must be unique. Existing record with name \"existing-cname.ok.\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[10], input_name="user-add-unauthorized.dummy.", record_type="TXT", record_data="test", + error_messages=["User \"ok\" is not authorized."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_txt_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on TXT records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + dummy_zone = shared_zone_test_context.dummy_zone + + rs_delete_ok = get_recordset_json(ok_zone, "delete", "TXT", [{"text": "test"}], 200) + rs_update_ok = get_recordset_json(ok_zone, "update", "TXT", [{"text": "test"}], 200) + rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized", "TXT", [{"text": "test"}], 200) + rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized", "TXT", [{"text": "test"}], 200) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes + get_change_TXT_json("delete.ok.", change_type="DeleteRecordSet"), + get_change_TXT_json("update.ok.", change_type="DeleteRecordSet"), + get_change_TXT_json("update.ok.", ttl=300), + + # input validations failures + get_change_TXT_json("invalid-name$.ok.", change_type="DeleteRecordSet"), + get_change_TXT_json("delete.ok.", ttl=29, text="bad-ttl"), + + # zone discovery failures + get_change_TXT_json("no.zone.at.all.", change_type="DeleteRecordSet"), + + # context validation failures + get_change_TXT_json("delete-nonexistent.ok.", change_type="DeleteRecordSet"), + get_change_TXT_json("update-nonexistent.ok.", change_type="DeleteRecordSet"), + get_change_TXT_json("update-nonexistent.ok.", text="test"), + get_change_TXT_json("delete-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_TXT_json("update-unauthorized.dummy.", text="test"), + get_change_TXT_json("update-unauthorized.dummy.", change_type="DeleteRecordSet") + ] + } + + to_create = [rs_delete_ok, rs_update_ok, rs_delete_dummy, rs_update_dummy] + to_delete = [] + + try: + for rs in to_create: + if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + # Confirm that record set doesn't already exist + 
ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="delete.ok.", record_type="TXT", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name="update.ok.", record_type="TXT", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], ttl=300, input_name="update.ok.", record_type="TXT", record_data="test") + + # input validations failures: invalid input name, reverse zone error, invalid ttl + assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="TXT", record_data="test", change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="delete.ok.", ttl=29, record_type="TXT", record_data="bad-ttl", + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) + + # zone discovery failure + assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS."]) + + # context validation failures: record does not exist, not authorized + assert_failed_change_in_error_response(response[6], input_name="delete-nonexistent.ok.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_failed_change_in_error_response(response[7], input_name="update-nonexistent.ok.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[8], input_name="update-nonexistent.ok.", record_type="TXT", record_data="test",) + assert_failed_change_in_error_response(response[9], input_name="delete-unauthorized.dummy.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[10], input_name="update-unauthorized.dummy.", record_type="TXT", record_data="test", + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[11], input_name="update-unauthorized.dummy.", record_type="TXT", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + clear_recordset_list(dummy_deletes, dummy_client) + clear_recordset_list(ok_deletes, ok_client) + + +def test_mx_recordtype_add_checks(shared_zone_test_context): + """ + Test all add validations performed on MX records submitted in batch changes + """ + client = shared_zone_test_context.ok_vinyldns_client + + existing_mx = get_recordset_json(shared_zone_test_context.ok_zone, "existing-mx", "MX", 
[{"preference": 1, "exchange": "foo.bar."}], 100) + existing_cname = get_recordset_json(shared_zone_test_context.ok_zone, "existing-cname", "CNAME", [{"cname": "test."}], 100) + + batch_change_input = { + "changes": [ + # valid change + get_change_MX_json("good-record.ok."), + + # input validation failures + get_change_MX_json("bad-ttl-and-invalid-name$.ok.", ttl=29), + get_change_MX_json("bad-exchange.ok.", exchange="foo$.bar."), + get_change_MX_json("mx.2.0.192.in-addr.arpa."), + + # zone discovery failures + get_change_MX_json("no.subzone.ok."), + get_change_MX_json("no.zone.at.all."), + + # context validation failures + get_change_CNAME_json("cname-duplicate.ok."), + get_change_MX_json("cname-duplicate.ok."), + get_change_MX_json("existing-mx.ok."), + get_change_MX_json("existing-cname.ok."), + get_change_MX_json("user-add-unauthorized.dummy.") + ] + } + + to_create = [existing_mx, existing_cname] + to_delete = [] + try: + for create_json in to_create: + create_result = client.create_recordset(create_json, status=202) + to_delete.append(client.wait_until_recordset_change_status(create_result, 'Complete')) + + response = client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="good-record.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}) + + # ttl, domain name, record data + assert_failed_change_in_error_response(response[1], input_name="bad-ttl-and-invalid-name$.ok.", ttl=29, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.', + 'Invalid domain name: "bad-ttl-and-invalid-name$.ok.", ' + 'valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[2], input_name="bad-exchange.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, + error_messages=['Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[3], input_name="mx.2.0.192.in-addr.arpa.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Invalid Record Type In Reverse Zone: record with name "mx.2.0.192.in-addr.arpa." and type "MX" is not allowed in a reverse zone.']) + + # zone discovery failures + assert_failed_change_in_error_response(response[4], input_name="no.subzone.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Zone Discovery Failed: zone for "no.subzone.ok." does not exist in VinylDNS. If zone exists, then it must be created in VinylDNS.']) + assert_failed_change_in_error_response(response[5], input_name="no.zone.at.all.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Zone Discovery Failed: zone for "no.zone.at.all." does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS.']) + + # context validations: cname duplicate + assert_failed_change_in_error_response(response[6], input_name="cname-duplicate.ok.", record_type="CNAME", record_data="test.com.", + error_messages=["Record Name \"cname-duplicate.ok.\" Not Unique In Batch Change: cannot have multiple \"CNAME\" records with the same name."]) + + # context validations: conflicting recordsets, unauthorized error + assert_failed_change_in_error_response(response[8], input_name="existing-mx.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=["Record \"existing-mx.ok.\" Already Exists: cannot add an existing record; to update it, issue a DeleteRecordSet then an Add."]) + assert_failed_change_in_error_response(response[9], input_name="existing-cname.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=["CNAME Conflict: CNAME record names must be unique. Existing record with name \"existing-cname.ok.\" and type \"CNAME\" conflicts with this record."]) + assert_failed_change_in_error_response(response[10], input_name="user-add-unauthorized.dummy.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=["User \"ok\" is not authorized."]) + + finally: + clear_recordset_list(to_delete, client) + + +def test_mx_recordtype_update_delete_checks(shared_zone_test_context): + """ + Test all update and delete validations performed on MX records submitted in batch changes + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + dummy_zone = shared_zone_test_context.dummy_zone + + rs_delete_ok = get_recordset_json(ok_zone, "delete", "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_update_ok = get_recordset_json(ok_zone, "update", "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_delete_dummy = get_recordset_json(dummy_zone, "delete-unauthorized", "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + rs_update_dummy = get_recordset_json(dummy_zone, "update-unauthorized", "MX", [{"preference": 1, "exchange": "foo.bar."}], 200) + + batch_change_input = { + "comments": "this is optional", + "changes": [ + # valid changes + get_change_MX_json("delete.ok.", change_type="DeleteRecordSet"), + get_change_MX_json("update.ok.", change_type="DeleteRecordSet"), + get_change_MX_json("update.ok.", ttl=300), + + # input validations failures + get_change_MX_json("invalid-name$.ok.", change_type="DeleteRecordSet"), + get_change_MX_json("delete.ok.", ttl=29), + get_change_MX_json("bad-exchange.ok.", exchange="foo$.bar."), + get_change_MX_json("mx.2.0.192.in-addr.arpa."), + + # zone discovery failures + get_change_MX_json("no.zone.at.all.", change_type="DeleteRecordSet"), + + # context validation failures + get_change_MX_json("delete-nonexistent.ok.", change_type="DeleteRecordSet"), + get_change_MX_json("update-nonexistent.ok.", change_type="DeleteRecordSet"), + get_change_MX_json("update-nonexistent.ok.", preference=1000, exchange="foo.bar."), + get_change_MX_json("delete-unauthorized.dummy.", change_type="DeleteRecordSet"), + get_change_MX_json("update-unauthorized.dummy.", preference= 1000, exchange= "foo.bar."), + get_change_MX_json("update-unauthorized.dummy.", change_type="DeleteRecordSet") + ] + } + + to_create = [rs_delete_ok, rs_update_ok, rs_delete_dummy, rs_update_dummy] + to_delete = [] + + try: + for rs in to_create: + 
if rs['zoneId'] == dummy_zone['id']: + create_client = dummy_client + else: + create_client = ok_client + + create_rs = create_client.create_recordset(rs, status=202) + to_delete.append(create_client.wait_until_recordset_change_status(create_rs, 'Complete')) + + # Confirm that record set doesn't already exist + ok_client.get_recordset(ok_zone['id'], 'delete-nonexistent', status=404) + + response = ok_client.create_batch_change(batch_change_input, status=400) + + # successful changes + assert_successful_change_in_error_response(response[0], input_name="delete.ok.", record_type="MX", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[1], input_name="update.ok.", record_type="MX", record_data=None, change_type="DeleteRecordSet") + assert_successful_change_in_error_response(response[2], ttl=300, input_name="update.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}) + + # input validations failures: invalid input name, reverse zone error, invalid ttl + assert_failed_change_in_error_response(response[3], input_name="invalid-name$.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, change_type="DeleteRecordSet", + error_messages=['Invalid domain name: "invalid-name$.ok.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[4], input_name="delete.ok.", ttl=29, record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Invalid TTL: "29", must be a number between 30 and 2147483647.']) + assert_failed_change_in_error_response(response[5], input_name="bad-exchange.ok.", record_type="MX", record_data={"preference": 1, "exchange": "foo$.bar."}, + error_messages=['Invalid domain name: "foo$.bar.", valid domain names must be letters, numbers, and hyphens, joined by dots, and terminated with a dot.']) + assert_failed_change_in_error_response(response[6], input_name="mx.2.0.192.in-addr.arpa.", record_type="MX", record_data={"preference": 1, "exchange": "foo.bar."}, + error_messages=['Invalid Record Type In Reverse Zone: record with name "mx.2.0.192.in-addr.arpa." and type "MX" is not allowed in a reverse zone.']) + + # zone discovery failure + assert_failed_change_in_error_response(response[7], input_name="no.zone.at.all.", record_type="MX", record_data=None, change_type="DeleteRecordSet", + error_messages=["Zone Discovery Failed: zone for \"no.zone.at.all.\" does not exist in VinylDNS. 
If zone exists, then it must be created in VinylDNS."]) + + # context validation failures: record does not exist, not authorized + assert_failed_change_in_error_response(response[8], input_name="delete-nonexistent.ok.", record_type="MX", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"delete-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_failed_change_in_error_response(response[9], input_name="update-nonexistent.ok.", record_type="MX", record_data=None, change_type="DeleteRecordSet", + error_messages=["Record \"update-nonexistent.ok.\" Does Not Exist: cannot delete a record that does not exist."]) + assert_successful_change_in_error_response(response[10], input_name="update-nonexistent.ok.", record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."},) + assert_failed_change_in_error_response(response[11], input_name="delete-unauthorized.dummy.", record_type="MX", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[12], input_name="update-unauthorized.dummy.", record_type="MX", record_data={"preference": 1000, "exchange": "foo.bar."}, + error_messages=["User \"ok\" is not authorized."]) + assert_failed_change_in_error_response(response[13], input_name="update-unauthorized.dummy.", record_type="MX", record_data=None, change_type="DeleteRecordSet", + error_messages=["User \"ok\" is not authorized."]) + + finally: + # Clean up updates + dummy_deletes = [rs for rs in to_delete if rs['zone']['id'] == dummy_zone['id']] + ok_deletes = [rs for rs in to_delete if rs['zone']['id'] != dummy_zone['id']] + clear_recordset_list(dummy_deletes, dummy_client) + clear_recordset_list(ok_deletes, ok_client) diff --git a/modules/api/functional_test/live_tests/batch/get_batch_change_test.py b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py new file mode 100644 index 000000000..68cb7398d --- /dev/null +++ b/modules/api/functional_test/live_tests/batch/get_batch_change_test.py @@ -0,0 +1,75 @@ +from hamcrest import * +from utils import * + +def test_get_batch_change_success(shared_zone_test_context): + """ + Test successfully getting a batch change + """ + client = shared_zone_test_context.ok_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("parent.com.", address="4.5.6.7"), + get_change_A_AAAA_json("ok.", record_type="AAAA", address="fd69:27cc:fe91::60") + ] + } + to_delete = [] + try: + batch_change = client.create_batch_change(batch_change_input, status=202) + completed_batch = client.wait_until_batch_change_completed(batch_change) + + record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = set(record_set_list) + + result = client.get_batch_change(batch_change['id'], status=200) + assert_that(result, is_(completed_batch)) + finally: + for result_rs in to_delete: + try: + delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass + + +def test_get_batch_change_failure(shared_zone_test_context): + """ + Test that getting a batch change with invalid id returns a Not Found error + """ + client = shared_zone_test_context.ok_vinyldns_client + + error = client.get_batch_change("invalidId", status=404) + + assert_that(error, is_("Batch change with id invalidId cannot be found")) + 
+ +def test_get_batch_change_with_unauthorized_user_fails(shared_zone_test_context): + """ + Test that getting a batch change with a user that didn't create the batch change fails + """ + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + batch_change_input = { + "comments": "this is optional", + "changes": [ + get_change_A_AAAA_json("parent.com.", address="4.5.6.7"), + get_change_A_AAAA_json("ok.", record_type="AAAA", address="fd69:27cc:fe91::60") + ] + } + to_delete = [] + try: + batch_change = client.create_batch_change(batch_change_input, status=202) + completed_batch = client.wait_until_batch_change_completed(batch_change) + + record_set_list = [(change['zoneId'], change['recordSetId']) for change in completed_batch['changes']] + to_delete = set(record_set_list) + + error = dummy_client.get_batch_change(batch_change['id'], status=403) + assert_that(error, is_("User does not have access to item " + batch_change['id'])) + finally: + for result_rs in to_delete: + try: + delete_result = client.delete_recordset(result_rs[0], result_rs[1], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass diff --git a/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py new file mode 100644 index 000000000..974068dcf --- /dev/null +++ b/modules/api/functional_test/live_tests/batch/list_batch_change_summaries_test.py @@ -0,0 +1,155 @@ +from hamcrest import * +from utils import * +import time +import pytest +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext + +class ListBatchChangeSummariesFixture(): + def __init__(self, shared_zone_test_context): + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listBatchSummariesAccessKey', 'listBatchSummariesSecretKey') + acl_rule = generate_acl_rule('Write', userId='list-batch-summaries-id') + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + initial_db_check = self.client.list_batch_change_summaries(status=200) + + batch_change_input_one = { + "comments": "first", + "changes": [ + get_change_CNAME_json("test-first.ok.", cname="one.") + ] + } + + batch_change_input_two = { + "comments": "second", + "changes": [ + get_change_CNAME_json("test-second.ok.", cname="two.") + ] + } + + batch_change_input_three = { + "comments": "last", + "changes": [ + get_change_CNAME_json("test-last.ok.", cname="three.") + ] + } + + batch_change_inputs = [batch_change_input_one, batch_change_input_two, batch_change_input_three] + + record_set_list = [] + self.completed_changes = [] + + if len(initial_db_check['batchChanges']) == 0: + # make some batch changes + for input in batch_change_inputs: + change = self.client.create_batch_change(input, status=202) + completed = self.client.wait_until_batch_change_completed(change) + assert_that(completed["comments"], equal_to(input["comments"])) + record_set_list += [(change['zoneId'], change['recordSetId']) for change in completed['changes']] + # sleep for consistent ordering of timestamps, must be at least one second apart + time.sleep(1) + + self.completed_changes = self.client.list_batch_change_summaries(status=200)['batchChanges'] + + assert_that(len(self.completed_changes), equal_to(3)) + else: + self.completed_changes = initial_db_check['batchChanges'] + + self.to_delete = set(record_set_list) + + def tear_down(self, shared_zone_test_context): + for 
result_rs in self.to_delete: + delete_result = shared_zone_test_context.ok_vinyldns_client.delete_recordset(result_rs[0], result_rs[1], status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(delete_result, 'Complete') + clear_ok_acl_rules(shared_zone_test_context) + + def check_batch_change_summaries_page_accuracy(self, summaries_page, size, next_id=False, start_from=False, max_items=100): + # validate fields + if next_id: + assert_that(summaries_page, has_key('nextId')) + else: + assert_that(summaries_page, is_not(has_key('nextId'))) + if start_from: + assert_that(summaries_page['startFrom'], is_(start_from)) + else: + assert_that(summaries_page, is_not(has_key('startFrom'))) + assert_that(summaries_page['maxItems'], is_(max_items)) + + + # validate actual page + list_batch_change_summaries = summaries_page['batchChanges'] + assert_that(list_batch_change_summaries, has_length(size)) + + for i, summary in enumerate(list_batch_change_summaries): + assert_that(summary["userId"], equal_to("list-batch-summaries-id")) + assert_that(summary["userName"], equal_to("list-batch-summaries-user")) + assert_that(summary["comments"], equal_to(self.completed_changes[i + start_from]["comments"])) + assert_that(summary["createdTimestamp"], equal_to(self.completed_changes[i + start_from]["createdTimestamp"])) + assert_that(summary["totalChanges"], equal_to(self.completed_changes[i + start_from]["totalChanges"])) + assert_that(summary["status"], equal_to(self.completed_changes[i + start_from]["status"])) + assert_that(summary["id"], equal_to(self.completed_changes[i + start_from]["id"])) + + +@pytest.fixture(scope = "module") +def list_fixture(request, shared_zone_test_context): + fix = ListBatchChangeSummariesFixture(shared_zone_test_context) + def fin(): + fix.tear_down(shared_zone_test_context) + + request.addfinalizer(fin) + + return fix + +def test_list_batch_change_summaries_success(list_fixture): + """ + Test successfully listing all of a user's batch change summaries with no parameters + """ + client = list_fixture.client + batch_change_summaries_result = client.list_batch_change_summaries(status=200) + + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=3) + + +def test_list_batch_change_summaries_with_max_items(list_fixture): + """ + Test listing a limited number of user's batch change summaries with maxItems parameter + """ + client = list_fixture.client + batch_change_summaries_result = client.list_batch_change_summaries(status=200, max_items=1) + + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=1, max_items=1, next_id=1) + + +def test_list_batch_change_summaries_with_start_from(list_fixture): + """ + Test listing a limited number of user's batch change summaries with startFrom parameter + """ + client = list_fixture.client + batch_change_summaries_result = client.list_batch_change_summaries(status=200, start_from=1) + + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=2, start_from=1) + + +def test_list_batch_change_summaries_with_next_id(list_fixture): + """ + Test getting user's batch change summaries with index of next batch change summary. + Apply retrieved nextId to get second page of batch change summaries. 
+ """ + client = list_fixture.client + batch_change_summaries_result = client.list_batch_change_summaries(status=200, start_from=1, max_items=1) + + list_fixture.check_batch_change_summaries_page_accuracy(batch_change_summaries_result, size=1, start_from=1, max_items=1, next_id=2) + + next_page_result = client.list_batch_change_summaries(status=200, start_from=batch_change_summaries_result['nextId']) + + list_fixture.check_batch_change_summaries_page_accuracy(next_page_result, size=1, start_from=batch_change_summaries_result['nextId']) + + +def test_list_batch_change_summaries_with_list_batch_change_summaries_with_no_changes_passes(): + """ + Test successfully getting an empty list of summaries when user has no batch changes + """ + client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listZeroSummariesAccessKey', 'listZeroSummariesSecretKey') + + batch_change_summaries_result = client.list_batch_change_summaries(status=200)["batchChanges"] + assert_that(batch_change_summaries_result, has_length(0)) diff --git a/modules/api/functional_test/live_tests/conftest.py b/modules/api/functional_test/live_tests/conftest.py new file mode 100644 index 000000000..fb56b6a8f --- /dev/null +++ b/modules/api/functional_test/live_tests/conftest.py @@ -0,0 +1,28 @@ +import pytest + +@pytest.fixture(scope="session") +def shared_zone_test_context(request): + from shared_zone_test_context import SharedZoneTestContext + + ctx = SharedZoneTestContext() + + def fin(): + ctx.tear_down() + + request.addfinalizer(fin) + + return ctx + + +@pytest.fixture(scope="session") +def zone_history_context(request): + from zone_history_context import ZoneHistoryContext + + context = ZoneHistoryContext() + + def fin(): + context.tear_down() + + request.addfinalizer(fin) + + return context diff --git a/modules/api/functional_test/live_tests/internal/color_test.py b/modules/api/functional_test/live_tests/internal/color_test.py new file mode 100644 index 000000000..02301d8c3 --- /dev/null +++ b/modules/api/functional_test/live_tests/internal/color_test.py @@ -0,0 +1,14 @@ +import pytest + +from hamcrest import * +from vinyldns_python import VinylDNSClient + + +def test_color(shared_zone_test_context): + """ + Tests that the color endpoint works appropriately + """ + client = shared_zone_test_context.ok_vinyldns_client + result = client.color() + + assert_that(["green", "blue"], has_item(result)) diff --git a/modules/api/functional_test/live_tests/internal/health_test.py b/modules/api/functional_test/live_tests/internal/health_test.py new file mode 100644 index 000000000..12d42981a --- /dev/null +++ b/modules/api/functional_test/live_tests/internal/health_test.py @@ -0,0 +1,13 @@ +import pytest + +from hamcrest import * +from vinyldns_python import VinylDNSClient + + +def test_health(shared_zone_test_context): + """ + Tests that the health check endpoint works + """ + client = shared_zone_test_context.ok_vinyldns_client + client.health() + diff --git a/modules/api/functional_test/live_tests/internal/ping_test.py b/modules/api/functional_test/live_tests/internal/ping_test.py new file mode 100644 index 000000000..287bd32fa --- /dev/null +++ b/modules/api/functional_test/live_tests/internal/ping_test.py @@ -0,0 +1,14 @@ +import pytest + +from hamcrest import * +from vinyldns_python import VinylDNSClient + + +def test_ping(shared_zone_test_context): + """ + Tests that the ping endpoint works appropriately + """ + client = shared_zone_test_context.ok_vinyldns_client + result = client.ping() + + assert_that(result, is_("PONG")) diff 
--git a/modules/api/functional_test/live_tests/internal/status_test.py b/modules/api/functional_test/live_tests/internal/status_test.py new file mode 100644 index 000000000..d84282446 --- /dev/null +++ b/modules/api/functional_test/live_tests/internal/status_test.py @@ -0,0 +1,74 @@ +import pytest +import time + +from hamcrest import * + +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + + +def test_get_status_success(shared_zone_test_context): + """ + Tests that the status endpoint returns the current processing status, color, key name and version + """ + client = shared_zone_test_context.ok_vinyldns_client + result = client.get_status() + + assert_that([True, False], has_item(result['processingDisabled'])) + assert_that(["green","blue"], has_item(result['color'])) + assert_that(result['keyName'], not_none()) + assert_that(result['version'], not_none()) + +@pytest.mark.skip_production +def test_toggle_processing(shared_zone_test_context): + """ + Test that updating a zone when processing is disabled does not happen + """ + + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + + # disable processing + client.post_status(True) + + status = client.get_status() + assert_that(status['processingDisabled'], is_(True)) + + client.post_status(False) + status = client.get_status() + assert_that(status['processingDisabled'], is_(False)) + + # Create changes to make sure we can process after the toggle + # attempt to perform an update + ok_zone['email'] = 'foo@bar.com' + zone_change_result = client.update_zone(ok_zone, status=202) + + # attempt to a create a record + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'test-status-disable-processing', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + + record_change = client.create_recordset(new_rs, status=202) + assert_that(record_change['status'], is_('Pending')) + + # Make sure that the changes are processed + client.wait_until_zone_change_status(zone_change_result, 'Synced') + client.wait_until_recordset_change_status(record_change, 'Complete') + + recordset_length = len(client.list_recordsets(ok_zone['id'])['recordSets']) + + client.delete_recordset(ok_zone['id'], record_change['recordSet']['id'], status=202) + client.wait_until_recordset_deleted(ok_zone['id'], record_change['recordSet']['id']) + assert_that(client.list_recordsets(ok_zone['id'])['recordSets'], has_length(recordset_length - 1)) diff --git a/modules/api/functional_test/live_tests/membership/create_group_test.py b/modules/api/functional_test/live_tests/membership/create_group_test.py new file mode 100644 index 000000000..4670fd72f --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/create_group_test.py @@ -0,0 +1,221 @@ +import pytest +import uuid +import json + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext + +def test_create_group_success(shared_zone_test_context): + """ + Tests that creating a group works + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + + try: + new_group = { + 'name': 'test-create-group-success', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + print json.dumps(result, indent=3) + + assert_that(result['name'], 
is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + assert_that(result['description'], is_(new_group['description'])) + assert_that(result['status'], is_('Active')) + assert_that(result['created'], not_none()) + assert_that(result['id'], not_none()) + assert_that(result['members'], has_length(1)) + assert_that(result['members'][0]['id'], is_('ok')) + assert_that(result['admins'], has_length(1)) + assert_that(result['admins'][0]['id'], is_('ok')) + + finally: + if result: + client.delete_group(result['id'], status=(200, 404)) + + +def test_creator_is_an_admin(shared_zone_test_context): + """ + Tests that the creator is an admin + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + + try: + new_group = { + 'name': 'test-create-group-success', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [] + } + result = client.create_group(new_group, status=200) + print json.dumps(result, indent=3) + + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + assert_that(result['description'], is_(new_group['description'])) + assert_that(result['status'], is_('Active')) + assert_that(result['created'], not_none()) + assert_that(result['id'], not_none()) + assert_that(result['members'], has_length(1)) + assert_that(result['members'][0]['id'], is_('ok')) + assert_that(result['admins'], has_length(1)) + assert_that(result['admins'][0]['id'], is_('ok')) + + finally: + if result: + client.delete_group(result['id'], status=(200, 404)) + + +def test_create_group_without_name(shared_zone_test_context): + """ + Tests that creating a group without a name fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_group = { + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + errors = client.create_group(new_group, status=400)['errors'] + assert_that(errors[0], is_("Missing Group.name")) + + +def test_create_group_without_email(shared_zone_test_context): + """ + Tests that creating a group without an email fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_group = { + 'name': 'without-email', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + errors = client.create_group(new_group, status=400)['errors'] + assert_that(errors[0], is_("Missing Group.email")) + + +def test_create_group_without_name_or_email(shared_zone_test_context): + """ + Tests that creating a group without name or an email fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_group = { + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + errors = client.create_group(new_group, status=400)['errors'] + assert_that(errors, has_length(2)) + assert_that(errors, contains_inanyorder( + "Missing Group.name", + "Missing Group.email" + )) + + +def test_create_group_without_members_or_admins(shared_zone_test_context): + """ + Tests that creating a group without members or admins fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_group = { + 'name': 'some-group-name', + 'email': 'test@test.com', + 'description': 'this is a description' + } + errors = client.create_group(new_group, status=400)['errors'] + assert_that(errors, has_length(2)) + assert_that(errors, contains_inanyorder( + "Missing Group.members", + 
"Missing Group.admins" + )) + + +def test_create_group_adds_admins_as_members(shared_zone_test_context): + """ + Tests that creating a group adds admins as members + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + try: + + new_group = { + 'name': 'test-create-group-add-admins-as-members', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + assert_that(result['description'], is_(new_group['description'])) + assert_that(result['status'], is_('Active')) + assert_that(result['created'], not_none()) + assert_that(result['id'], not_none()) + assert_that(result['members'][0]['id'], is_('ok')) + assert_that(result['admins'][0]['id'], is_('ok')) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def test_create_group_duplicate(shared_zone_test_context): + """ + Tests that creating a group that has already been created fails + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + try: + new_group = { + 'name': 'test-create-group-duplicate', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + + result = client.create_group(new_group, status=200) + client.create_group(new_group, status=409) + + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def test_create_group_no_members(shared_zone_test_context): + """ + Tests that creating a group that has no members adds current user as a member and an admin + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + + try: + new_group = { + 'name': 'test-create-group-no-members', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [], + 'admins': [] + } + + result = client.create_group(new_group, status=200) + assert_that(result['members'][0]['id'], is_('ok')) + assert_that(result['admins'][0]['id'], is_('ok')) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/membership/delete_group_test.py b/modules/api/functional_test/live_tests/membership/delete_group_test.py new file mode 100644 index 000000000..20d03ebdb --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/delete_group_test.py @@ -0,0 +1,143 @@ +import pytest +import uuid +import json + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext + + +def test_delete_group_success(shared_zone_test_context): + """ + Tests that we can delete a group that has been created + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-delete-group-success', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + result = client.delete_group(saved_group['id'], status=200) + assert_that(result['status'], is_('Deleted')) + finally: + if result: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_delete_group_not_found(shared_zone_test_context): + """ + Tests that deleting a group that does not exist returns a 404 + """ + client = 
shared_zone_test_context.ok_vinyldns_client + client.delete_group('doesntexist', status=404) + + +def test_delete_group_that_is_already_deleted(shared_zone_test_context): + """ + Tests that deleting a group that is already deleted + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-delete-group-already', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + client.delete_group(saved_group['id'], status=200) + client.delete_group(saved_group['id'], status=404) + + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_delete_admin_group(shared_zone_test_context): + """ + Tests that we cannot delete a group that is the admin of a zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_group = None + result_zone = None + + try: + #Create group + new_group = { + 'name': 'test-delete-group-already', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + + result_group = client.create_group(new_group, status=200) + print result_group + + #Create zone with that group ID as admin + zone = { + 'name': 'one-time.', + 'email': 'test@test.com', + 'adminGroupId': result_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + client.delete_group(result_group['id'], status=400) + + #Delete zone + client.delete_zone(result_zone['id'], status=202) + client.wait_until_zone_deleted(result_zone['id']) + + #Should now be able to delete group + client.delete_group(result_group['id'], status=200) + finally: + if result_zone: + client.delete_zone(result_zone['id'], status=(202,404)) + if result_group: + client.delete_group(result_group['id'], status=(200,404)) + +def test_delete_group_not_authorized(shared_zone_test_context): + """ + Tests that only the admins can delete a zone + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + not_admin_client = shared_zone_test_context.dummy_vinyldns_client + try: + new_group = { + 'name': 'test-delete-group-not-authorized', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = ok_client.create_group(new_group, status=200) + not_admin_client.delete_group(saved_group['id'], status=403) + finally: + if saved_group: + ok_client.delete_group(saved_group['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/membership/get_group_changes_test.py b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py new file mode 100644 index 000000000..a9733e9fb --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/get_group_changes_test.py @@ -0,0 +1,209 @@ +import pytest +import datetime + +from hamcrest import * + +from vinyldns_python import VinylDNSClient + +@pytest.fixture(scope="module") +def group_activity_context(request, 
shared_zone_test_context): + client = shared_zone_test_context.ok_vinyldns_client + created_group = None + + group_name = 'test-list-group-activity-max-item-success' + + # cleanup existing group if it's already in there + groups = client.list_all_my_groups() + existing = [grp for grp in groups if grp['name'] == group_name] + for grp in existing: + client.delete_group(grp['id'], status=200) + + + members = [ { 'id': 'ok'} ] + new_group = { + 'name': group_name, + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + created_group = client.create_group(new_group, status=200) + + update_groups = [] + updated_groups = [] + # each update changes the member + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members = [{ 'id': id }] + update_groups.append({ + 'id': created_group['id'], + 'name': group_name, + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + }) + updated_groups.append(client.update_group(update_groups[runner]['id'], update_groups[runner], status=200)) + + def fin(): + if created_group: + client.delete_group(created_group['id'], status=(200,404)) + + request.addfinalizer(fin) + + return { + 'created_group': created_group, + 'updated_groups': updated_groups + } + + +def test_list_group_activity_start_from_success(group_activity_context, shared_zone_test_context): + """ + Test that we can list the changes starting from a given timestamp + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + updated_groups = group_activity_context['updated_groups'] + + page_one = client.get_group_changes(created_group['id'], status=200) + + start_from_index = 50 + start_from = page_one['changes'][start_from_index]['created'] # start from a known good timestamp + + result = client.get_group_changes(created_group['id'], start_from=start_from, status=200) + + assert_that(result['changes'], has_length(100)) + assert_that(result['maxItems'], is_(100)) + assert_that(result['startFrom'], is_(start_from)) + assert_that(result['nextId'], is_not(none())) + + for i in range(0,100): + assert_that(result['changes'][i]['newGroup'], is_(updated_groups[199-start_from_index-i-1])) + assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[199-start_from_index-i-2])) + +def test_list_group_activity_start_from_fake_time(group_activity_context, shared_zone_test_context): + """ + Test that we can start from a fake time stamp + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + updated_groups = group_activity_context['updated_groups'] + start_from = '9999999999999' # start from a random timestamp far in the future + + result = client.get_group_changes(created_group['id'], start_from=start_from, status=200) + + # there are 200 updates, and 1 create + assert_that(result['changes'], has_length(100)) + assert_that(result['maxItems'], is_(100)) + assert_that(result['startFrom'], is_(start_from)) + assert_that(result['nextId'], is_not(none())) + + for i in range(0,100): + assert_that(result['changes'][i]['newGroup'], is_(updated_groups[199-i])) + assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[199-i-1])) + + +def test_list_group_activity_max_item_success(group_activity_context, shared_zone_test_context): + """ + Test that we can set the max_items returned + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + updated_groups 
= group_activity_context['updated_groups'] + + result = client.get_group_changes(created_group['id'], max_items=50, status=200) + + # there are 200 updates, and 1 create + assert_that(result['changes'], has_length(50)) + assert_that(result['maxItems'], is_(50)) + assert_that(result, is_not(has_key('startFrom'))) + assert_that(result['nextId'], is_not(none())) + + for i in range(0,50): + assert_that(result['changes'][i]['newGroup'], is_(updated_groups[199-i])) + assert_that(result['changes'][i]['oldGroup'], is_(updated_groups[199-i-1])) + + +def test_list_group_activity_max_item_zero(group_activity_context, shared_zone_test_context): + """ + Test that max_item set to zero fails + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + client.get_group_changes(created_group['id'], max_items=0, status=400) + + +def test_list_group_activity_max_item_over_1000(group_activity_context, shared_zone_test_context): + """ + Test that when max_item is over 1000 fails + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + client.get_group_changes(created_group['id'], max_items=1001, status=400) + + +def test_get_group_changes_paging(group_activity_context, shared_zone_test_context): + """ + Test that we can page through multiple pages of group changes + """ + + client = shared_zone_test_context.ok_vinyldns_client + created_group = group_activity_context['created_group'] + updated_groups = group_activity_context['updated_groups'] + + page_one = client.get_group_changes(created_group['id'], max_items=100, status=200) + page_two = client.get_group_changes(created_group['id'], start_from=page_one['nextId'], max_items=100, status=200) + page_three = client.get_group_changes(created_group['id'], start_from=page_two['nextId'], max_items=100, status=200) + + assert_that(page_one['changes'], has_length(100)) + assert_that(page_one['maxItems'], is_(100)) + assert_that(page_one, is_not(has_key('startFrom'))) + assert_that(page_one['nextId'], is_not(none())) + + for i in range(0, 100): + assert_that(page_one['changes'][i]['newGroup'], is_(updated_groups[199-i])) + assert_that(page_one['changes'][i]['oldGroup'], is_(updated_groups[199-i-1])) + + assert_that(page_two['changes'], has_length(100)) + assert_that(page_two['maxItems'], is_(100)) + assert_that(page_two['startFrom'], is_(page_one['nextId'])) + assert_that(page_two['nextId'], is_not(none())) + + for i in range(100, 199): + assert_that(page_two['changes'][i-100]['newGroup'], is_(updated_groups[199-i])) + assert_that(page_two['changes'][i-100]['oldGroup'], is_(updated_groups[199-i-1])) + assert_that(page_two['changes'][99]['oldGroup'], is_(created_group)) + + assert_that(page_three['changes'], has_length(1)) + assert_that(page_three['maxItems'], is_(100)) + assert_that(page_three['startFrom'], is_(page_two['nextId'])) + assert_that(page_three, is_not(has_key('nextId'))) + + assert_that(page_three['changes'][0]['newGroup'], is_(created_group)) + +def test_get_group_changes_unauthed(shared_zone_test_context): + """ + Tests that we cant get group changes without access + """ + + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-list-group-admins-unauthed', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) 
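+        # a user outside the group (dummy) is denied with a 403, while the group admin (ok) can read the change history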
+ + dummy_client.get_group_changes(saved_group['id'], status=403) + client.get_group_changes(saved_group['id'], status=200) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + diff --git a/modules/api/functional_test/live_tests/membership/get_group_test.py b/modules/api/functional_test/live_tests/membership/get_group_test.py new file mode 100644 index 000000000..8d51b9073 --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/get_group_test.py @@ -0,0 +1,93 @@ +import pytest +import json + +from hamcrest import * +from vinyldns_python import VinylDNSClient + + +def test_get_group_success(shared_zone_test_context): + """ + Tests that we can get a group that has been created + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-get-group-success', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + group = client.get_group(saved_group['id'], status=200) + + assert_that(group['name'], is_(saved_group['name'])) + assert_that(group['email'], is_(saved_group['email'])) + assert_that(group['description'], is_(saved_group['description'])) + assert_that(group['status'], is_(saved_group['status'])) + assert_that(group['created'], is_(saved_group['created'])) + assert_that(group['id'], is_(saved_group['id'])) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_get_group_not_found(shared_zone_test_context): + """ + Tests that getting a group that does not exist returns a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + client.get_group('doesntexist', status=404) + + +def test_get_deleted_group(shared_zone_test_context): + """ + Tests getting a group that was already deleted + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-get-deleted-group', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + client.delete_group(saved_group['id'], status=200) + client.get_group(saved_group['id'], status=404) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_get_group_unauthed(shared_zone_test_context): + """ + Tests that we cant get a group were not in + """ + + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-get-group-unauthed', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + dummy_client.get_group(saved_group['id'], status=403) + client.get_group(saved_group['id'], status=200) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/membership/list_group_admins_test.py b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py new file mode 100644 index 000000000..389743f09 --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/list_group_admins_test.py @@ -0,0 +1,81 @@ + +import pytest +import json + +from hamcrest import * + 
+from vinyldns_python import VinylDNSClient + + +def test_list_group_admins_success(shared_zone_test_context): + """ + Test that we can list all the admins of a given group + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-list-group-admins-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'}, { 'id': 'dummy'} ] + } + saved_group = client.create_group(new_group, status=200) + + admin_user_1_id = 'ok' + admin_user_2_id = 'dummy' + + result = client.get_group(saved_group['id'], status=200) + + assert_that(result['admins'], has_length(2)) + assert_that([admin_user_1_id, admin_user_2_id], has_item(result['admins'][0]['id'])) + assert_that([admin_user_1_id, admin_user_2_id], has_item(result['admins'][1]['id'])) + + result = client.list_group_admins(saved_group['id'], status=200) + print json.dumps(result, indent=3) + + result = sorted(result['admins'], key=lambda user: user['userName']) + assert_that(result, has_length(2)) + assert_that(result[0]['userName'], is_('dummy')) + assert_that(result[0]['id'], is_('dummy')) + assert_that(result[0]['created'], not_none()) + assert_that(result[1]['userName'], is_('ok')) + assert_that(result[1]['id'], is_('ok')) + assert_that(result[1]['created'], not_none()) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_admins_group_not_found(shared_zone_test_context): + """ + Test that listing the admins of a non-existent group fails + """ + + client = shared_zone_test_context.ok_vinyldns_client + client.list_group_admins('doesntexist', status=404) + + +def test_list_group_admins_unauthed(shared_zone_test_context): + """ + Tests that we cant list admins without access + """ + + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-list-group-admins-unauthed', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + dummy_client.list_group_admins(saved_group['id'], status=403) + client.list_group_admins(saved_group['id'], status=200) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/membership/list_group_members_test.py b/modules/api/functional_test/live_tests/membership/list_group_members_test.py new file mode 100644 index 000000000..bb5cce43e --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/list_group_members_test.py @@ -0,0 +1,567 @@ +import pytest +import json + +from hamcrest import * + +from vinyldns_python import VinylDNSClient + + +def test_list_group_members_success(shared_zone_test_context): + """ + Test that we can list all the members of a group + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-list-group-members-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ], + 'admins': [ { 'id': 'ok'} ] + } + + members = sorted(['dummy', 'ok']) + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + assert_that(result['members'], has_length(len(members))) + + result_member_ids = map(lambda member: member['id'], result['members']) + for id in members: + assert_that(result_member_ids, 
has_item(id)) + + result = client.list_members_group(saved_group['id'], status=200) + result = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result, has_length(len(members))) + dummy = result[0] + assert_that(dummy['id'], is_('dummy')) + assert_that(dummy['userName'], is_('dummy')) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + + ok = result[1] + assert_that(ok['id'], is_('ok')) + assert_that(ok['userName'], is_('ok')) + assert_that(ok['isAdmin'], is_(True)) + assert_that(ok['firstName'], is_('ok')) + assert_that(ok['lastName'], is_('ok')) + assert_that(ok['email'], is_('test@test.com')) + assert_that(ok['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_not_found(shared_zone_test_context): + """ + Tests that we can not list the members of a non-existent group + """ + + client = shared_zone_test_context.ok_vinyldns_client + + client.list_members_group('not_found', status=404) + + +def test_list_group_members_start_from(shared_zone_test_context): + """ + Test that we can list the members starting from a given user + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-start-from', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + assert_that(result['members'], has_item({ 'id': 'ok'})) + result_member_ids = map(lambda member: member['id'], result['members']) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], start_from='dummy050', status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result['startFrom'], is_('dummy050')) + assert_that(result['nextId'], is_('dummy150')) + + assert_that(group_members, has_length(100)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i+51) #starts from dummy051 + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_start_from_non_user(shared_zone_test_context): + """ + Test that we can list the members starting from a non existent username + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-start-from-nonexistent', + 'email': 
'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], start_from='abc', status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result['startFrom'], is_('abc')) + assert_that(result['nextId'], is_('dummy099')) + + assert_that(group_members, has_length(100)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_max_item(shared_zone_test_context): + """ + Test that we can chose the number of items to list + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-max-items', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], max_items=10, status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result['nextId'], is_('dummy009')) + assert_that(result['maxItems'], is_(10)) + + assert_that(group_members, has_length(10)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_max_item_default(shared_zone_test_context): + """ + Test that the default for max_item is 100 items + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 
'name': 'test-list-group-members-max-items-default', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result['nextId'], is_('dummy099')) + + assert_that(group_members, has_length(100)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_max_item_zero(shared_zone_test_context): + """ + Test that the call fails when max_item is 0 + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-max-items-zero', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + client.list_members_group(saved_group['id'], max_items=0, status=400) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_max_item_over_1000(shared_zone_test_context): + """ + Test that the call fails when max_item is over 1000 + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-max-items-over-limit', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + client.list_members_group(saved_group['id'], max_items=1001, status=400) + 
finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_next_id_correct(shared_zone_test_context): + """ + Test that the correct next_id is returned + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 200): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-next-id', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result['nextId'], is_('dummy099')) + + assert_that(group_members, has_length(100)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy['isAdmin'], is_(False)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_next_id_exhausted(shared_zone_test_context): + """ + Test that the next_id is null when the list is exhausted + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 5): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-next-id-exhausted', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + result = client.list_members_group(saved_group['id'], status=200) + + group_members = sorted(result['members'], key=lambda user: user['id']) + + assert_that(result, is_not(has_key('nextId'))) + + assert_that(group_members, has_length(6)) # add one more for the admin + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], 
status=(200,404)) + + +def test_list_group_members_next_id_exhausted_two_pages(shared_zone_test_context): + """ + Test that the next_id is null when the list is exhausted over 2 pages + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + members = [] + for runner in range(0, 19): + id = "dummy{0:0>3}".format(runner) + members.append({ 'id': id }) + members = sorted(members) + + new_group = { + 'name': 'test-list-group-members-next-id-exhausted-two-pages', + 'email': 'test@test.com', + 'members': members, + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + + # members has one more because admins are added as members + assert_that(result['members'], has_length(len(members) + 1)) + result_member_ids = map(lambda member: member['id'], result['members']) + assert_that(result_member_ids, has_item('ok')) + for user in members: + assert_that(result_member_ids, has_item(user['id'])) + + first_page = client.list_members_group(saved_group['id'], max_items=10, status=200) + + group_members = sorted(first_page['members'], key=lambda user: user['id']) + + assert_that(first_page['nextId'], is_('dummy009')) + assert_that(first_page['maxItems'], is_(10)) + + assert_that(group_members, has_length(10)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + + second_page = client.list_members_group(saved_group['id'], + start_from=first_page['nextId'], + max_items=10, + status=200) + + group_members = sorted(second_page['members'], key=lambda user: user['id']) + + assert_that(second_page, is_not(has_key('nextId'))) + assert_that(second_page['maxItems'], is_(10)) + + assert_that(group_members, has_length(10)) + for i in range(0, len(group_members)-1): + dummy = group_members[i] + id = "dummy{0:0>3}".format(i+10) + user_name = "name-"+id + assert_that(dummy['id'], is_(id)) + assert_that(dummy['userName'], is_(user_name)) + assert_that(dummy, is_not(has_key('firstName'))) + assert_that(dummy, is_not(has_key('lastName'))) + assert_that(dummy, is_not(has_key('email'))) + assert_that(dummy['created'], is_not(none())) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_list_group_members_unauthed(shared_zone_test_context): + """ + Tests that we cant list members without access + """ + + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-list-group-members-unauthed', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + dummy_client.list_members_group(saved_group['id'], status=403) + client.list_members_group(saved_group['id'], status=200) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/membership/list_my_groups_test.py b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py new file mode 100644 index 000000000..4ef0b018a --- /dev/null +++ 
b/modules/api/functional_test/live_tests/membership/list_my_groups_test.py @@ -0,0 +1,165 @@ +import pytest +import json + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from utils import * +from vinyldns_context import VinylDNSTestContext + +class ListGroupsSearchContext(object): + def __init__(self): + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, access_key='listGroupAccessKey', secret_key='listGroupSecretKey') + self.tear_down() # ensures that the environment is clean before starting + + try: + for runner in range(0, 50): + new_group = { + 'name': "test-list-my-groups-{0:0>3}".format(runner), + 'email': 'test@test.com', + 'members': [ { 'id': 'list-group-user'} ], + 'admins': [ { 'id': 'list-group-user'} ] + } + self.client.create_group(new_group, status=200) + + except: + # teardown if there was any issue in setup + try: + self.tear_down() + except: + pass + raise + + def tear_down(self): + clear_zones(self.client) + clear_groups(self.client) + +@pytest.fixture(scope="module") +def list_my_groups_context(request): + ctx = ListGroupsSearchContext() + + def fin(): + ctx.tear_down() + + request.addfinalizer(fin) + + return ctx + +def test_list_my_groups_no_parameters(list_my_groups_context): + """ + Test that we can get all the groups where a user is a member + """ + + results = list_my_groups_context.client.list_my_groups(status=200) + + assert_that(results, has_length(2)) # 2 fields + + assert_that(results['groups'], has_length(50)) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results, is_not(has_key('startFrom'))) + assert_that(results, is_not(has_key('nextId'))) + assert_that(results['maxItems'], is_(100)) + + results['groups'] = sorted(results['groups'], key=lambda x: x['name']) + + for i in range(0, 50): + assert_that(results['groups'][i]['name'], is_("test-list-my-groups-{0:0>3}".format(i))) + + +def test_get_my_groups_using_old_account_auth(list_my_groups_context): + """ + Test passing in an account will return an empty set + """ + results = list_my_groups_context.client.list_my_groups(status=200) + assert_that(results, has_length(2)) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results, is_not(has_key('startFrom'))) + assert_that(results, is_not(has_key('nextId'))) + assert_that(results['maxItems'], is_(100)) + + +def test_list_my_groups_max_items(list_my_groups_context): + """ + Tests that when maxItem is set, only return #maxItems items + """ + results = list_my_groups_context.client.list_my_groups(max_items=5, status=200) + + assert_that(results, has_length(3)) # 3 fields + + assert_that(results, has_key('groups')) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results, is_not(has_key('startFrom'))) + assert_that(results, has_key('nextId')) + assert_that(results['maxItems'], is_(5)) + + +def test_list_my_groups_paging(list_my_groups_context): + """ + Tests that we can return all items by paging + """ + results=list_my_groups_context.client.list_my_groups(max_items=20, status=200) + + assert_that(results, has_length(3)) # 3 fields + assert_that(results, has_key('groups')) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results, is_not(has_key('startFrom'))) + assert_that(results, has_key('nextId')) + assert_that(results['maxItems'], is_(20)) + + while 'nextId' in results: + prev = results + results = list_my_groups_context.client.list_my_groups(max_items=20, start_from=results['nextId'], status=200) + + if 'nextId' in results: + 
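+            # intermediate pages carry startFrom in addition to groups, nextId and maxItems; the final page omits nextId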
assert_that(results, has_length(4)) # 4 fields + assert_that(results, has_key('groups')) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results['startFrom'], is_(prev['nextId'])) + assert_that(results, has_key('nextId')) + assert_that(results['maxItems'], is_(20)) + + else: + assert_that(results, has_length(3)) # 3 fields + assert_that(results, has_key('groups')) + assert_that(results, is_not(has_key('groupNameFilter'))) + assert_that(results['startFrom'], is_(prev['nextId'])) + assert_that(results, is_not(has_key('nextId'))) + assert_that(results['maxItems'], is_(20)) + + +def test_list_my_groups_filter_matches(list_my_groups_context): + """ + Tests that only matched groups are returned + """ + results = list_my_groups_context.client.list_my_groups(group_name_filter="test-list-my-groups-01", status=200) + + assert_that(results, has_length(3)) # 3 fields + + assert_that(results['groups'], has_length(10)) + assert_that(results['groupNameFilter'], is_('test-list-my-groups-01')) + assert_that(results, is_not(has_key('startFrom'))) + assert_that(results, is_not(has_key('nextId'))) + assert_that(results['maxItems'], is_(100)) + + results['groups'] = sorted(results['groups'], key=lambda x: x['name']) + + for i in range(0, 10): + assert_that(results['groups'][i]['name'], is_("test-list-my-groups-{0:0>3}".format(i+10))) + + +def test_list_my_groups_no_deleted(list_my_groups_context): + """ + Tests that no deleted groups are returned + """ + results = list_my_groups_context.client.list_my_groups(max_items=100, status=200) + + assert_that(results, has_key('groups')) + for g in results['groups']: + assert_that(g['status'], is_not('Deleted')) + + while 'nextId' in results: + results = list_my_groups_context.client.list_my_groups(max_items=20, group_name_filter="test-list-my-groups-", start_from=results['nextId'], status=200) + + assert_that(results, has_key('groups')) + for g in results['groups']: + assert_that(g['status'], is_not('Deleted')) + diff --git a/modules/api/functional_test/live_tests/membership/update_group_test.py b/modules/api/functional_test/live_tests/membership/update_group_test.py new file mode 100644 index 000000000..189d49d98 --- /dev/null +++ b/modules/api/functional_test/live_tests/membership/update_group_test.py @@ -0,0 +1,616 @@ +import pytest +import json +import time + +from hamcrest import * +from vinyldns_python import VinylDNSClient + + +def test_update_group_success(shared_zone_test_context): + """ + Tests that we can update a group that has been created + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-update-group-success', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + group = client.get_group(saved_group['id'], status=200) + + assert_that(group['name'], is_(saved_group['name'])) + assert_that(group['email'], is_(saved_group['email'])) + assert_that(group['description'], is_(saved_group['description'])) + assert_that(group['status'], is_(saved_group['status'])) + assert_that(group['created'], is_(saved_group['created'])) + assert_that(group['id'], is_(saved_group['id'])) + + time.sleep(1) # sleep to ensure that update doesn't change created time + + update_group = { + 'id': group['id'], + 'name': 'updated-name', + 'email': 'update@test.com', + 'description': 'this is a new description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'}
] + } + group = client.update_group(update_group['id'], update_group, status=200) + + assert_that(group['name'], is_(update_group['name'])) + assert_that(group['email'], is_(update_group['email'])) + assert_that(group['description'], is_(update_group['description'])) + assert_that(group['status'], is_(saved_group['status'])) + assert_that(group['created'], is_(saved_group['created'])) + assert_that(group['id'], is_(saved_group['id'])) + assert_that(group['members'][0]['id'], is_('ok')) + assert_that(group['admins'][0]['id'], is_('ok')) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_update_group_without_name(shared_zone_test_context): + """ + Tests that updating a group without a name fails + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + try: + new_group = { + 'name': 'test-update-without-name', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + + update_group = { + 'id': result['id'], + 'email': 'update@test.com', + 'description': 'this is a new description' + } + + errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + assert_that(errors[0], is_("Missing Group.name")) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def test_update_group_without_email(shared_zone_test_context): + """ + Tests that updating a group without an email fails + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + try: + new_group = { + 'name': 'test-update-without-email', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + + update_group = { + 'id': result['id'], + 'name': 'without-email', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + assert_that(errors[0], is_("Missing Group.email")) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def test_updating_group_without_name_or_email(shared_zone_test_context): + """ + Tests that updating a group without name or an email fails + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + try: + new_group = { + 'name': 'test-update-without-name-and-email', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + + update_group = { + 'id': result['id'], + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + assert_that(errors, has_length(2)) + assert_that(errors, contains_inanyorder( + "Missing Group.name", + "Missing Group.email" + )) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def 
test_updating_group_without_members_or_admins(shared_zone_test_context): + """ + Tests that updating a group without members or admins fails + """ + client = shared_zone_test_context.ok_vinyldns_client + result = None + + try: + new_group = { + 'name': 'test-update-without-members', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = client.create_group(new_group, status=200) + assert_that(result['name'], is_(new_group['name'])) + assert_that(result['email'], is_(new_group['email'])) + + update_group = { + 'id': result['id'], + 'name': 'test-update-without-members', + 'email': 'test@test.com', + 'description': 'this is a description', + } + errors = client.update_group(update_group['id'], update_group, status=400)['errors'] + assert_that(errors, has_length(2)) + assert_that(errors, contains_inanyorder( + "Missing Group.members", + "Missing Group.admins" + )) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + + +def test_update_group_adds_admins_as_members(shared_zone_test_context): + """ + Tests that when we add an admin to a group the admin is also a member + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-update-group-admins-as-members', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + group = client.get_group(saved_group['id'], status=200) + + assert_that(group['name'], is_(saved_group['name'])) + assert_that(group['email'], is_(saved_group['email'])) + assert_that(group['description'], is_(saved_group['description'])) + assert_that(group['status'], is_(saved_group['status'])) + assert_that(group['created'], is_(saved_group['created'])) + assert_that(group['id'], is_(saved_group['id'])) + + update_group = { + 'id': group['id'], + 'name': 'test-update-group-admins-as-members', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'}, { 'id': 'dummy' } ] + } + group = client.update_group(update_group['id'], update_group, status=200) + + import json + print json.dumps(group, indent=4) + + assert_that(group['members'], has_length(2)) + assert_that(['ok', 'dummy'], has_item(group['members'][0]['id'])) + assert_that(['ok', 'dummy'], has_item(group['members'][1]['id'])) + assert_that(group['admins'], has_length(2)) + assert_that(['ok', 'dummy'], has_item(group['admins'][0]['id'])) + assert_that(['ok', 'dummy'], has_item(group['admins'][1]['id'])) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_update_group_conflict(shared_zone_test_context): + """ + Tests that we can not update a groups name to a name already in use + """ + + client = shared_zone_test_context.ok_vinyldns_client + result = None + conflict_group=None + try: + new_group = { + 'name': 'test_update_group_conflict', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + conflict_group = client.create_group(new_group, status=200) + assert_that(conflict_group['name'], is_(new_group['name'])) + + other_group = { + 'name': 'change_me', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + result = 
client.create_group(other_group, status=200) + assert_that(result['name'], is_(other_group['name'])) + + # change the name of the other_group to the first group (conflict) + update_group = { + 'id': result['id'], + 'name': 'test_update_group_conflict', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + client.update_group(update_group['id'], update_group, status=409) + finally: + if result: + client.delete_group(result['id'], status=(200,404)) + if conflict_group: + client.delete_group(conflict_group['id'], status=(200,404)) + + +def test_update_group_not_found(shared_zone_test_context): + """ + Tests that we can not update a group that has not been created + """ + + client = shared_zone_test_context.ok_vinyldns_client + + update_group = { + 'id': 'test-update-group-not-found', + 'name': 'test-update-group-not-found', + 'email': 'update@test.com', + 'description': 'this is a new description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + client.update_group(update_group['id'], update_group, status=404) + + +def test_update_group_deleted(shared_zone_test_context): + """ + Tests that we can not update a group that has been deleted + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-update-group-deleted', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + client.delete_group(saved_group['id'], status=200) + + update_group = { + 'id': saved_group['id'], + 'name': 'test-update-group-deleted-updated', + 'email': 'update@test.com', + 'description': 'this is a new description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + client.update_group(update_group['id'], update_group, status=404) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_add_member_via_update_group_success(shared_zone_test_context): + """ + Tests that we can add a member to a group via update successfully + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-add-member-to-via-update-group-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-add-member-to-via-update-group-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ], + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + expected_members = ['ok', 'dummy'] + assert_that(saved_group['members'], has_length(2)) + assert_that(expected_members, has_item(saved_group['members'][0]['id'])) + assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_add_member_to_group_twice_via_update_group(shared_zone_test_context): + """ + Tests that we can add a member to a group twice successfully via update group + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-add-member-to-group-twice-success-via-update-group', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 
'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-add-member-to-group-twice-success-via-update-group', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, { 'id': 'dummy' } ], + 'admins': [ { 'id': 'ok'} ] + } + + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + expected_members = ['ok', 'dummy'] + assert_that(saved_group['members'], has_length(2)) + assert_that(expected_members, has_item(saved_group['members'][0]['id'])) + assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_add_not_found_member_to_group_via_update_group(shared_zone_test_context): + """ + Tests that we can not add a non-existent member to a group via update group + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-add-not-found-member-to-group-via-update-group', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + result = client.get_group(saved_group['id'], status=200) + assert_that(result['members'], has_length(1)) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-add-not-found-member-to-group-via-update-group', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, { 'id': 'not_found' } ], + 'admins': [ { 'id': 'ok'} ] + } + + client.update_group(updated_group['id'], updated_group, status=404) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_remove_member_via_update_group_success(shared_zone_test_context): + """ + Tests that we can remove a member via update group successfully + """ + + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-remove-member-via-update-group-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.create_group(new_group, status=200) + assert_that(saved_group['members'], has_length(2)) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-remove-member-via-update-group-success', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + + assert_that(saved_group['members'], has_length(1)) + assert_that(saved_group['members'][0]['id'], is_('ok')) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_remove_member_and_admin(shared_zone_test_context): + """ + Tests that if we remove a member who is an admin, the admin is also removed + """ + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-remove-member-and-admin', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ] + } + saved_group = client.create_group(new_group, status=200) + assert_that(saved_group['members'], has_length(2)) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-remove-member-and-admin', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 
'ok'} ] + } + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + + assert_that(saved_group['members'], has_length(1)) + assert_that(saved_group['members'][0]['id'], is_('ok')) + assert_that(saved_group['admins'], has_length(1)) + assert_that(saved_group['admins'][0]['id'], is_('ok')) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_remove_member_but_not_admin_keeps_member(shared_zone_test_context): + """ + Tests that if we remove a member but do not remove the admin, the admin remains a member + """ + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-remove-member-not-admin-keeps-member', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ] + } + saved_group = client.create_group(new_group, status=200) + assert_that(saved_group['members'], has_length(2)) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-remove-member-not-admin-keeps-member', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ] + } + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + + expected_members = ['ok', 'dummy'] + assert_that(saved_group['members'], has_length(2)) + assert_that(expected_members, has_item(saved_group['members'][0]['id'])) + assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + assert_that(expected_members, has_item(saved_group['admins'][0]['id'])) + assert_that(expected_members, has_item(saved_group['admins'][1]['id'])) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_remove_admin_keeps_member(shared_zone_test_context): + """ + Tests that if we remove a member from admins, the member still remains part of the group + """ + client = shared_zone_test_context.ok_vinyldns_client + saved_group = None + + try: + new_group = { + 'name': 'test-remove-admin-keeps-member', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'}, {'id': 'dummy'} ] + } + saved_group = client.create_group(new_group, status=200) + assert_that(saved_group['members'], has_length(2)) + + updated_group = { + 'id': saved_group['id'], + 'name': 'test-remove-admin-keeps-member', + 'email': 'test@test.com', + 'members': [ { 'id': 'ok'}, {'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group = client.update_group(updated_group['id'], updated_group, status=200) + + expected_members = ['ok', 'dummy'] + assert_that(saved_group['members'], has_length(2)) + assert_that(expected_members, has_item(saved_group['members'][0]['id'])) + assert_that(expected_members, has_item(saved_group['members'][1]['id'])) + + assert_that(saved_group['admins'], has_length(1)) + assert_that(saved_group['admins'][0]['id'], is_('ok')) + finally: + if saved_group: + client.delete_group(saved_group['id'], status=(200,404)) + + +def test_update_group_not_authorized(shared_zone_test_context): + """ + Tests that only the group admins can update a group + """ + ok_client = shared_zone_test_context.ok_vinyldns_client + not_admin_client = shared_zone_test_context.dummy_vinyldns_client + saved_group = None + try: + new_group = { + 'name': 'test-update-group-not-authorized', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + saved_group =
ok_client.create_group(new_group, status=200) + + update_group = { + 'id': saved_group['id'], + 'name': 'updated-name', + 'email': 'update@test.com', + 'description': 'this is a new description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + not_admin_client.update_group(update_group['id'], update_group, status=403) + finally: + if saved_group: + ok_client.delete_group(saved_group['id'], status=(200,404)) diff --git a/modules/api/functional_test/live_tests/production_verify_test.py b/modules/api/functional_test/live_tests/production_verify_test.py new file mode 100644 index 000000000..87cd57a64 --- /dev/null +++ b/modules/api/functional_test/live_tests/production_verify_test.py @@ -0,0 +1,70 @@ +import pytest +import sys +import dns.query +import dns.tsigkeyring +import dns.update + +from utils import * +from hamcrest import * +from vinyldns_python import VinylDNSClient +from test_data import TestData +from dns.resolver import * + + +def test_verify_production(shared_zone_test_context): + """ + Test that production works. This test sets up the shared context, which creates a lot of groups and zones + and then really just creates a single recordset and delete it. + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_create_recordset_with_dns_verify', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
+ + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.1.1.1', is_in(records)) + assert_that('10.2.2.2', is_in(records)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(answers, has_length(2)) + assert_that('10.1.1.1', is_in(rdata_strings)) + assert_that('10.2.2.2', is_in(rdata_strings)) + finally: + if result_rs: + try: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_deleted(delete_result['zoneId'], delete_result['id']) + except: + pass \ No newline at end of file diff --git a/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py new file mode 100644 index 000000000..e5dedd9ff --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/create_recordset_test.py @@ -0,0 +1,1646 @@ +import pytest +from utils import * +from hamcrest import * +from vinyldns_python import VinylDNSClient +from test_data import TestData +from dns.resolver import * + + +def test_create_recordset_with_dns_verify(shared_zone_test_context): + """ + Test creating a new record set in an existing zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_create_recordset_with_dns_verify', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.1.1.1', is_in(records)) + assert_that('10.2.2.2', is_in(records)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(answers, has_length(2)) + assert_that('10.1.1.1', is_in(rdata_strings)) + assert_that('10.2.2.2', is_in(rdata_strings)) + finally: + if result_rs: + try: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass + + +def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context): + """ + Test creating a new srv record set with service and protocol works + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': '_sip._tcp._test-create-srv-ok', + 'type': 'SRV', + 'ttl': 100, + 'records': [ + { + 'priority': 1, + 'weight': 2, + 'port': 8000, + 'target': 'srv.' 
+ } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_srv_recordset_with_service_and_protocol(shared_zone_test_context): + """ + Test creating a new srv record set with service and protocol works + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': '_sip._tcp._test-create-srv-ok', + 'type': 'SRV', + 'ttl': 100, + 'records': [ + { + 'priority': 1, + 'weight': 2, + 'port': 8000, + 'target': 'srv.' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_aaaa_recordset_with_shorthand_record(shared_zone_test_context): + """ + Test creating an AAAA record using shorthand for record data works + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'testAAAA', + 'type': 'AAAA', + 'ttl': 100, + 'records': [ + { + 'address': '1::2' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
+ + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_aaaa_recordset_with_normal_record(shared_zone_test_context): + """ + Test creating an AAAA record not using shorthand for record data works + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'testAAAA', + 'type': 'AAAA', + 'ttl': 100, + 'records': [ + { + 'address': '1:2:3:4:5:6:7:8' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_recordset_conflict(shared_zone_test_context): + """ + Test creating a record set with the same name and type of an existing one returns a 409 + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_create_recordset_conflict', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = None + result_rs = None + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + client.create_recordset(result_rs, status=409) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_recordset_conflict_with_case_insensitive_name(shared_zone_test_context): + """ + Test creating a record set with the same name, but different casing, and type of an existing one returns a 409 + """ + client = shared_zone_test_context.ok_vinyldns_client + first_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_create_recordset_conflict', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = None + result_rs = None + + try: + result = client.create_recordset(first_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + first_rs['name'] = 'test_create_recordset_CONFLICT' + client.create_recordset(first_rs, status=409) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def 
test_create_recordset_conflict_with_trailing_dot_insensitive_name(shared_zone_test_context): + """ + Test creating a record set with the same name (but without a trailing dot) and type of an existing one returns a 409 + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + first_rs = { + 'zoneId': zone['id'], + 'name': 'parent.com.', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result_rs = None + try: + result = client.create_recordset(first_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + first_rs['name'] = 'parent.com' + client.create_recordset(first_rs, status=409) + + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_recordset_conflict_with_dns(shared_zone_test_context): + """ + Test creating a duplicate record set with the same name and same type of an existing one in DNS fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'backend-conflict', + 'type': 'A', + 'ttl': 38400, + 'records': [ + { + 'address': '7.7.7.7' #records with different data should fail, these live in the dns hosts + } + ] + } + + try: + dns_add(shared_zone_test_context.ok_zone, "backend-conflict", 200, "A", "1.2.3.4") + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print json.dumps(result, indent=3) + client.wait_until_recordset_change_status(result, 'Failed') + + finally: + dns_delete(shared_zone_test_context.ok_zone, "backend-conflict", "A") + + +def test_create_recordset_conflict_with_dns_different_type(shared_zone_test_context): + """ + Test creating a new record set in a zone with the same name as an existing record + but with a different record type succeeds + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'already-exists', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'should succeed' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
+ + text = [x['text'] for x in result_rs['records']] + assert_that(text, has_length(1)) + assert_that('should succeed', is_in(text)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(1)) + assert_that('"should succeed"', is_in(rdata_strings)) + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_recordset_zone_not_found(shared_zone_test_context): + """ + Test creating a new record set in a zone that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': '1234', + 'name': 'test_create_recordset_zone_not_found', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + client.create_recordset(new_rs, status=404) + + +def test_create_missing_record_data(shared_zone_test_context): + """ + Test that creating a record without providing necessary data returns errors + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = dict({"no": "data"}, zoneId=shared_zone_test_context.system_test_zone['id']) + + errors = client.create_recordset(new_rs, status=400)['errors'] + assert_that(errors, contains_inanyorder( + "Missing RecordSet.name", + "Missing RecordSet.type", + "Missing RecordSet.ttl" + )) + + +def test_create_invalid_record_type(shared_zone_test_context): + """ + Test that creating a record with invalid data returns errors + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_invalid_record_type', + 'type': 'invalid type', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + + errors = client.create_recordset(new_rs, status=400)['errors'] + assert_that(errors, contains_inanyorder("Invalid RecordType")) + + +def test_create_invalid_record_data(shared_zone_test_context): + """ + Test that creating a record with invalid data returns errors + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_invalid_record.data', + 'type': 'A', + 'ttl': 5, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': 'not.ipv4' + }, + { # Currently, list validation is fail-fast, so the "Missing A.address" that should happen here never does + 'nonsense': 'gibberish' + } + ] + } + + errors = client.create_recordset(new_rs, status=400)['errors'] + + import json + + print json.dumps(errors, indent=4) + assert_that(errors, contains_inanyorder( + "A must be a valid IPv4 Address", + "RecordSet.ttl must be a positive signed 32 bit number greater than or equal to 30" + )) + +def test_create_dotted_a_record_not_apex_fails(shared_zone_test_context): + """ + Test that creating a dotted host name A record set fails. 
+ """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + dotted_host_a_record = { + 'zoneId': zone['id'], + 'name': 'hello.world', + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + + error = client.create_recordset(dotted_host_a_record, status=422) + assert_that(error, is_("Record with name " + dotted_host_a_record['name'] + " is a dotted host which " + "is illegal in this zone " + zone['name'])) + +def test_create_dotted_a_record_apex_succeeds(shared_zone_test_context): + """ + Test that creating an apex A record set containing dots succeeds. + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + apex_a_record = { + 'zoneId': zone['id'], + 'name': zone['name'].rstrip('.'), + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + apex_a_rs = None + try: + apex_a_response = client.create_recordset(apex_a_record, status=202) + apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet'] + assert_that(apex_a_rs['name'],is_(apex_a_record['name'] + '.')) + + finally: + if apex_a_rs: + delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + +def test_create_dotted_a_record_apex_with_trailing_dot_succeeds(shared_zone_test_context): + """ + Test that creating an apex A record set containing dots succeeds (with trailing dot) + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + apex_a_record = { + 'zoneId': zone['id'], + 'name': zone['name'], + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + apex_a_rs = None + try: + apex_a_response = client.create_recordset(apex_a_record, status=202) + apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet'] + assert_that(apex_a_rs['name'],is_(apex_a_record['name'])) + + finally: + if apex_a_rs: + delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + +def test_create_dotted_cname_record_apex_fails(shared_zone_test_context): + """ + Test that creating a CNAME record set with record name matching dotted apex returns an error. + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + apex_cname_rs = { + 'zoneId': zone['id'], + 'name': zone['name'].rstrip('.'), + 'type': 'CNAME', + 'ttl': 500, + 'records': [{'cname': 'foo'}] + } + + errors = client.create_recordset(apex_cname_rs, status=400)['errors'] + assert_that(errors[0], is_("Record name cannot contain '.' 
with given type")) + +def test_create_cname_with_multiple_records(shared_zone_test_context): + """ + Test that creating a CNAME record set with multiple records returns an error + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_cname_with_multiple_records', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.com' + }, + { + 'cname': 'cname2.com' + } + ] + } + + errors = client.create_recordset(new_rs, status=400)['errors'] + assert_that(errors[0], is_("CNAME record sets cannot contain multiple records")) + + +def test_create_cname_pointing_to_origin_symbol_fails(shared_zone_test_context): + """ + Test that creating a CNAME record set with name '@' fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': '@', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname.' + } + ] + } + + error = client.create_recordset(new_rs, status=422) + assert_that(error, is_("CNAME RecordSet cannot have name '@' because it points to zone origin")) + + +def test_create_cname_with_existing_record_with_name_fails(shared_zone_test_context): + """ + Test that creating a CNAME fails if a record with the same name exists + """ + client = shared_zone_test_context.ok_vinyldns_client + + a_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'A', + 'ttl': 500, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + + cname_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.com' + } + ] + } + + try: + a_create = client.create_recordset(a_rs, status=202) + a_record = client.wait_until_recordset_change_status(a_create, 'Complete')['recordSet'] + + error = client.create_recordset(cname_rs, status=409) + assert_that(error, is_('RecordSet with name duplicate-test-name already exists in zone system-test., CNAME record cannot use duplicate name')) + + finally: + delete_result = client.delete_recordset(a_record['zoneId'], a_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_record_with_existing_cname_fails(shared_zone_test_context): + """ + Test that creating a record fails if a cname with the same name exists + """ + client = shared_zone_test_context.ok_vinyldns_client + + cname_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.com' + } + ] + } + + a_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'A', + 'ttl': 500, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + + try: + cname_create = client.create_recordset(cname_rs, status=202) + cname_record = client.wait_until_recordset_change_status(cname_create, 'Complete')['recordSet'] + + error = client.create_recordset(a_rs, status=409) + assert_that(error, is_('RecordSet with name duplicate-test-name and type CNAME already exists in zone system-test.')) + + finally: + delete_result = client.delete_recordset(cname_record['zoneId'], cname_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_cname_forces_record_to_be_absolute(shared_zone_test_context): 
+ """ + Test that CNAME record data is made absolute after being created + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_cname_with_multiple_records', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.com' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'cname' : 'cname1.com.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_cname_relative_fails(shared_zone_test_context): + """ + Test that relative (no dots) CNAME record data fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_cname_relative', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'relative' + } + ] + } + + client.create_recordset(new_rs, status=400) + + +def test_create_cname_does_not_change_absolute_record(shared_zone_test_context): + """ + Test that CNAME record data that's already absolute is not changed after being created + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_create_cname_with_multiple_records', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'cname' : 'cname1.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_mx_forces_record_to_be_absolute(shared_zone_test_context): + """ + Test that MX exchange is made absolute after being created + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'mx_not_absolute', + 'type': 'MX', + 'ttl': 500, + 'records': [ + { + 'preference': 1, + 'exchange': 'foo' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'preference' : 1, 'exchange' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_mx_does_not_change_if_absolute(shared_zone_test_context): + """ + Test that MX exchange is unchanged if already absolute + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'mx_absolute', + 'type': 'MX', + 'ttl': 500, + 'records': [ + { + 'preference': 1, + 'exchange': 'foo.' 
+ } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'preference' : 1, 'exchange' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_ptr_forces_record_to_be_absolute(shared_zone_test_context): + """ + Test that ptr record data is made absolute after being created + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.ip4_reverse_zone + + new_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '30.30', + 'type': 'PTR', + 'ttl': 500, + 'records': [ + { + 'ptrdname': 'foo' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'ptrdname' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_ptr_does_not_change_if_absolute(shared_zone_test_context): + """ + Test that ptr record data is unchanged if already absolute + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.ip4_reverse_zone + + new_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '30.30', + 'type': 'PTR', + 'ttl': 500, + 'records': [ + { + 'ptrdname': 'foo.' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'ptrdname' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_srv_forces_record_to_be_absolute(shared_zone_test_context): + """ + Test that srv target is made absolute after being created + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'srv_not_absolute', + 'type': 'SRV', + 'ttl': 500, + 'records': [ + { + 'priority': 1, + 'weight': 1, + 'port': 1, + 'target': 'foo' + } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'priority' : 1, 'weight' : 1, 'port' : 1, 'target' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_srv_does_not_change_if_absolute(shared_zone_test_context): + """ + Test that srv target is unchanged if already absolute + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'srv_absolute', + 'type': 'SRV', + 'ttl': 500, + 'records': [ + { + 'priority': 1, + 'weight': 1, + 'port': 1, + 'target': 'foo.' 
+ } + ] + } + + try: + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + assert_that(result_rs['records'], is_([{'priority' : 1, 'weight' : 1, 'port' : 1, 'target' : 'foo.'}])) + finally: + if result_rs: + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS) +def test_create_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test creating a new record set in an existing zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS) +def test_reverse_create_recordset_reverse_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test creating a new record set in an existing reverse zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_create_invalid_recordset_name(shared_zone_test_context): + """ + Test creating a record set where the name is too long + """ + client = shared_zone_test_context.ok_vinyldns_client + + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'a'*256, + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + client.create_recordset(new_rs, status=400) + + +def test_user_cannot_create_record_in_unowned_zone(shared_zone_test_context): + """ + Test user can create a record that it a shared zone that it is a member of + """ + client = shared_zone_test_context.ok_vinyldns_client + new_record_set = { + 'zoneId': shared_zone_test_context.dummy_zone['id'], + 'name': 'test_user_cannot_create_record_in_unowned_zone', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.10.10.10' + } + ] + } + client.create_recordset(new_record_set, status=403) + + +def test_create_recordset_no_authorization(shared_zone_test_context): + """ + 
Test creating a new record set without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_create_recordset_no_authorization', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + client.create_recordset(new_rs, sign_request=False, status=401) + + +def test_create_ipv4_ptr_recordset_with_verify(shared_zone_test_context): + """ + Test creating a new IPv4 PTR recordset in an existing IPv4 reverse lookup zone + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.ip4_reverse_zone + result_rs = None + try: + new_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '30.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + print "\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + records = result_rs['records'] + assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.')) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(answers, has_length(1)) + assert_that(rdata_strings[0], is_('ftp.vinyldns.')) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + + +def test_create_ipv4_ptr_recordset_in_forward_zone_fails(shared_zone_test_context): + """ + Test creating a new IPv4 PTR record set in an existing forward lookup zone fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': '35.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' 
+ } + ] + } + client.create_recordset(new_rs, status=422) + + +def test_create_address_recordset_in_ipv4_reverse_zone_fails(shared_zone_test_context): + """ + Test creating an A recordset in an existing IPv4 reverse lookup zone fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ip4_reverse_zone['id'], + 'name': 'test_create_address_recordset_in_ipv4_reverse_zone_fails', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + client.create_recordset(new_rs, status=422) + + +def test_create_ipv6_ptr_recordset(shared_zone_test_context): + """ + Test creating a new PTR record set in an existing IPv6 reverse lookup zone + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse6_zone = shared_zone_test_context.ip6_reverse_zone + result_rs = None + try: + new_rs = { + 'zoneId': reverse6_zone['id'], + 'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + records = result_rs['records'] + assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.')) + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(reverse6_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(answers, has_length(1)) + assert_that(rdata_strings[0], is_('ftp.vinyldns.')) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_ipv6_ptr_recordset_in_forward_zone_fails(shared_zone_test_context): + """ + Test creating a new PTR record set in an existing forward lookup zone fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': '3.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' 
+ } + ] + } + client.create_recordset(new_rs, status=422) + + +def test_create_address_recordset_in_ipv6_reverse_zone_fails(shared_zone_test_context): + """ + Test creating a new A record set in an existing IPv6 reverse lookup zone fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], + 'name': 'test_create_address_recordset_in_ipv6_reverse_zone_fails', + 'type': 'AAAA', + 'ttl': 100, + 'records': [ + { + 'address': 'fd69:27cc:fe91::60' + }, + { + 'address': 'fd69:27cc:fe91:1:2:3:4:61' + } + ] + } + client.create_recordset(new_rs, status=422) + + +def test_create_invalid_ipv6_ptr_recordset(shared_zone_test_context): + """ + Test creating an incorrect IPv6 PTR record in an existing IPv6 reverse lookup zone fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], + 'name': '0.6.0.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + client.create_recordset(new_rs, status=422) + + +def test_at_create_recordset(shared_zone_test_context): + """ + Test creating a new record set with name @ in an existing zone + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': '@', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'someText' + } + ] + } + print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + expected_rs = new_rs + expected_rs['name'] = ok_zone['name'] + verify_recordset(result_rs, expected_rs) + + print "\r\n\r\n!!!recordset verified..." + + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('someText')) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(ok_zone, ok_zone['name'], result_rs['type']) + + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(1)) + assert_that('"someText"', is_in(rdata_strings)) + finally: + if result_rs: + client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + +def test_create_record_with_escape_characters_in_record_data_succeeds(shared_zone_test_context): + """ + Test creating a new record set with escape characters (i.e. 
"" and \) in the record data + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'testing', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'escaped\char"act"ers' + } + ] + } + print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + expected_rs = new_rs + expected_rs['name'] = 'testing' + verify_recordset(result_rs, expected_rs) + + print "\r\n\r\n!!!recordset verified..." + + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('escaped\\char\"act\"ers')) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(ok_zone, 'testing', result_rs['type']) + + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(1)) + assert_that('\"escapedchar\\"act\\"ers\"', is_in(rdata_strings)) + finally: + if result_rs: + client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + + + +def test_create_record_with_existing_wildcard_succeeds(shared_zone_test_context): + """ + Test that creating a record when a wildcard record of the same type already exists succeeds + """ + client = shared_zone_test_context.ok_vinyldns_client + + wildcard_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': '*', + 'type': 'TXT', + 'ttl': 500, + 'records': [ + { + 'text': 'wildcard func test 1' + } + ] + } + + test_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'create-record-with-existing-wildcard-succeeds', + 'type': 'TXT', + 'ttl': 500, + 'records': [ + { + 'text': 'wildcard this should be ok' + } + ] + } + + try: + wildcard_create = client.create_recordset(wildcard_rs, status=202) + wildcard_rs = client.wait_until_recordset_change_status(wildcard_create, 'Complete')['recordSet'] + + test_create = client.create_recordset(test_rs, status=202) + test_rs = client.wait_until_recordset_change_status(test_create, 'Complete')['recordSet'] + except: + pass + finally: + try: + delete_result = client.delete_recordset(wildcard_rs['zoneId'], wildcard_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + finally: + try: + delete_result = client.delete_recordset(test_rs['zoneId'], test_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass + + +def test_dotted_host_create_fails(shared_zone_test_context): + """ + Tests that a dotted host recordset create fails + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'record-with.dot', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'should fail' + } + ] + } + error = client.create_recordset(new_rs, status=422) + assert_that(error, is_('Record with name record-with.dot is a dotted host which is illegal 
in this zone ok.')) + + +def test_ns_create_for_non_approved_group_fails(shared_zone_test_context): + """ + Tests that an ns change on a group whose admin group is not approved fails (only ok group is approved) + """ + client = shared_zone_test_context.dummy_vinyldns_client + zone = shared_zone_test_context.parent_zone + + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + error = client.create_recordset(new_rs, status=403) + assert_that(error, is_('Do not have permissions to manage NS recordsets, please contact vinyldns-support')) + + +def test_ns_create_for_approved_group_passes(shared_zone_test_context): + """ + Tests that an ns change on a group whose admin group is approved passes (only ok group is approved) + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + result_rs = None + + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + finally: + if result_rs: + client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + + +def test_ns_create_for_origin_fails(shared_zone_test_context): + """ + Tests that an ns create for origin fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + new_rs = { + 'zoneId': zone['id'], + 'name': '@', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + client.create_recordset(new_rs, status=409) + + +def test_create_ipv4_ptr_recordset_with_verify_in_classless(shared_zone_test_context): + """ + Test creating a new IPv4 PTR record set in an existing IPv4 classless delegation zone + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.classless_zone_delegation_zone + result_rs = None + + try: + new_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '196', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + print "\r\nCreating recordset in zone " + str(reverse4_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
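+        # check the PTR rdata returned by the API first, then resolve the record against
+        # the backend DNS server to confirm the change propagated to the classless
+        # delegation zone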
+ + records = result_rs['records'] + assert_that(records[0]['ptrdname'], is_('ftp.vinyldns.')) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(answers, has_length(1)) + assert_that(rdata_strings[0], is_('ftp.vinyldns.')) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_create_ipv4_ptr_recordset_in_classless_outside_cidr(shared_zone_test_context): + """ + Test new IPv4 PTR recordset fails outside the cidr range for a IPv4 classless delegation zone + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.classless_zone_delegation_zone + + new_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '190', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + + error = client.create_recordset(new_rs, status=422) + assert_that(error, is_('RecordSet 190 does not specify a valid IP address in zone 192/30.2.0.192.in-addr.arpa.')) diff --git a/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py new file mode 100644 index 000000000..f58625e63 --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/delete_recordset_test.py @@ -0,0 +1,642 @@ +import pytest +import sys +from utils import * + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from test_data import TestData +import time + + +@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS) +def test_delete_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test deleting a recordset for forward record types + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # now delete + delete_rs = result_rs + + result = client.delete_recordset(delete_rs['zoneId'], delete_rs['id'], status=202) + assert_that(result['status'], is_('Pending')) + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # retry until the recordset is not found + client.get_recordset(result_rs['zoneId'], result_rs['id'], retries=20, status=404) + result_rs = None + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS) +def test_delete_recordset_reverse_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test deleting a recordset for reverse record types + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = 
dict(test_rs, zoneId=shared_zone_test_context.ip4_reverse_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # now delete + delete_rs = result_rs + + result = client.delete_recordset(delete_rs['zoneId'], delete_rs['id'], status=202) + assert_that(result['status'], is_('Pending')) + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # retry until the recordset is not found + client.get_recordset(result_rs['zoneId'], result_rs['id'], retries=20, status=404) + result_rs = None + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_delete_recordset_with_verify(shared_zone_test_context): + """ + Test deleting a new record set removes it from the backend + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_delete_recordset_with_verify', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + print "\r\nCreating recordset in zone " + str(shared_zone_test_context.ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
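+        # both A record values should be present in the API response and resolvable in the
+        # backend DNS server before the delete is exercised below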
+ + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.1.1.1', is_in(records)) + assert_that('10.2.2.2', is_in(records)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(2)) + assert_that('10.1.1.1', is_in(rdata_strings)) + assert_that('10.2.2.2', is_in(rdata_strings)) + + # Delete the record set and verify that it is removed + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + answers = dns_resolve(shared_zone_test_context.ok_zone, result_rs['name'], result_rs['type']) + not_found = len(answers) == 0 + + assert_that(not_found, is_(True)) + + result_rs = None + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_can_delete_record_in_owned_zone(shared_zone_test_context): + """ + Test user can delete a record that in a zone that it is owns + """ + + client = shared_zone_test_context.ok_vinyldns_client + rs = None + try: + rs = client.create_recordset( + { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_user_can_delete_record_in_owned_zone', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.10.10.10' + } + ] + }, status=202)['recordSet'] + client.wait_until_recordset_exists(rs['zoneId'], rs['id']) + + client.delete_recordset(rs['zoneId'], rs['id'], status=202) + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + rs = None + finally: + if rs: + try: + client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + finally: + pass + + +def test_user_cannot_delete_record_in_unowned_zone(shared_zone_test_context): + """ + Test user cannot delete a record that in an unowned zone + """ + + client = shared_zone_test_context.dummy_vinyldns_client + unauthorized_client = shared_zone_test_context.ok_vinyldns_client + rs = None + try: + rs = client.create_recordset( + { + 'zoneId': shared_zone_test_context.dummy_zone['id'], + 'name': 'test-user-cannot-delete-record-in-unowned-zone', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.10.10.10' + } + ] + }, status=202)['recordSet'] + + client.wait_until_recordset_exists(rs['zoneId'], rs['id']) + unauthorized_client.delete_recordset(rs['zoneId'], rs['id'], status=403) + finally: + if rs: + try: + client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + finally: + pass + + +def test_delete_recordset_no_authorization(shared_zone_test_context): + """ + Test delete a recordset without authorization + """ + client = shared_zone_test_context.dummy_vinyldns_client + client.delete_recordset(shared_zone_test_context.ok_zone['id'], '1234', sign_request=False, status=401) + + +def test_delete_ipv4_ptr_recordset(shared_zone_test_context): + """ + Test deleting an IPv4 PTR recordset deletes the record + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.ip4_reverse_zone + result_rs = None + + try: + orig_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '30.0', + 
'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + result = client.create_recordset(orig_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Deleting..." + + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = None + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_delete_ipv4_ptr_recordset_does_not_exist_fails(shared_zone_test_context): + """ + Test deleting a nonexistant IPv4 PTR recordset returns not found + """ + client =shared_zone_test_context.ok_vinyldns_client + client.delete_recordset(shared_zone_test_context.ip4_reverse_zone['id'], '4444', status=404) + + +def test_delete_ipv6_ptr_recordset(shared_zone_test_context): + """ + Test deleting an IPv6 PTR recordset deletes the record + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + orig_rs = { + 'zoneId': shared_zone_test_context.ip6_reverse_zone['id'], + 'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + result = client.create_recordset(orig_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Deleting..." + + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = None + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + + +def test_delete_ipv6_ptr_recordset_does_not_exist_fails(shared_zone_test_context): + """ + Test deleting a nonexistant IPv6 PTR recordset returns not found + """ + client = shared_zone_test_context.ok_vinyldns_client + client.delete_recordset(shared_zone_test_context.ip6_reverse_zone['id'], '6666', status=404) + + +def test_delete_recordset_zone_not_found(shared_zone_test_context): + """ + Test deleting a recordset in a zone that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + client.delete_recordset('1234', '4567', status=404) + + +def test_delete_recordset_not_found(shared_zone_test_context): + """ + Test deleting a recordset that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + client.delete_recordset(shared_zone_test_context.ok_zone['id'], '1234', status=404) + + +def test_at_delete_recordset(shared_zone_test_context): + """ + Test deleting a recordset with name @ in an existing zone + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + new_rs = { + 'zoneId': ok_zone['id'], + 'name': '@', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'someText' + } + ] + } + print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + + print json.dumps(result, indent=3) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + 
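+    # the pending change should include its creation timestamp and the acting user's id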
assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + expected_rs = new_rs + expected_rs['name'] = ok_zone['name'] + verify_recordset(result_rs, expected_rs) + + print "\r\n\r\n!!!recordset verified..." + + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('someText')) + + print "\r\n\r\n!!!deleting recordset in dns backend" + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + # verify that the record does not exist in the backend dns server + answers = dns_resolve(ok_zone, ok_zone['name'], result_rs['type']) + not_found = len(answers) == 0 + assert_that(not_found) + + +def test_delete_recordset_with_different_dns_data(shared_zone_test_context): + """ + Test deleting a recordset with out-of-sync rdata in dns (ex. if the record was modified manually) + """ + + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'test_delete_recordset_with_different_dns_data', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + print "\r\nCreating recordset in zone " + str(ok_zone) + "\r\n" + result = client.create_recordset(new_rs, status=202) + print str(result) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
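+        # first update the record through the VinylDNS API, then push a different value
+        # directly to the DNS backend so the stored record set and the DNS data are out of
+        # sync before the delete is attempted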
+ + result_rs['records'][0]['address'] = "10.8.8.8" + result = client.update_recordset(result_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + assert_that(answers, has_length(1)) + + response = dns_update(ok_zone, result_rs['name'], 300, result_rs['type'], '10.9.9.9') + print "\nSuccessfully updated the record, record is now out of sync\n" + print str(response) + + # check you can delete + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + result_rs = None + + finally: + if result_rs: + try: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if delete_result: + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass + + +def test_user_can_delete_record_via_user_acl_rule(shared_zone_test_context): + """ + Test user DELETE ACL rule - delete + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Delete', userId='dummy') + + result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_user_acl_rule", ok_zone) + + #Dummy user cannot delete record in zone + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=403, retries=3) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can delete record + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + result_rs = None + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_cannot_delete_record_with_write_txt_read_all(shared_zone_test_context): + """ + Test user WRITE TXT READ all ACL rule + """ + client = shared_zone_test_context.ok_vinyldns_client + dummy_client = shared_zone_test_context.dummy_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + created_rs = None + try: + acl_rule1 = generate_acl_rule('Read', userId='dummy', recordMask='www-*') + acl_rule2 = generate_acl_rule('Write', userId='dummy', recordMask='www-user-cant-delete', recordTypes=['TXT']) + + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + # verify dummy can see ok_zone + dummy_view = dummy_client.list_zones()['zones'] + zone_ids = [zone['id'] for zone in dummy_view] + assert_that(zone_ids, has_item(ok_zone['id'])) + + # dummy should be able to add the RS + new_rs = get_recordset_json(ok_zone, "www-user-cant-delete", "TXT", [{'text':'should-work'}]) + rs_change = dummy_client.create_recordset(new_rs, status=202) + created_rs = client.wait_until_recordset_change_status(rs_change, 'Complete')['recordSet'] + verify_recordset(created_rs, new_rs) + + #dummy cannot delete the RS + dummy_client.delete_recordset(ok_zone['id'], created_rs['id'], status=403) + + finally: + clear_ok_acl_rules(shared_zone_test_context) + if created_rs: + delete_result = client.delete_recordset(created_rs['zoneId'], 
created_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_can_delete_record_via_group_acl_rule(shared_zone_test_context): + """ + Test group DELETE ACL rule - delete + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Delete', groupId=shared_zone_test_context.dummy_group['id']) + + result_rs = seed_text_recordset(client, "test_user_can_delete_record_via_group_acl_rule", ok_zone) + + #Dummy user cannot delete record in zone + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=403) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can delete record + shared_zone_test_context.dummy_vinyldns_client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + result_rs = None + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_ns_delete_for_non_approved_group_fails(shared_zone_test_context): + """ + Tests that someone not in the approved group could not delete a ns record + """ + client = shared_zone_test_context.ok_vinyldns_client + not_approved_client = shared_zone_test_context.dummy_vinyldns_client + zone = shared_zone_test_context.parent_zone + + ns_rs = None + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + ns_rs = result['recordSet'] + + assert_that(client.wait_until_recordset_exists(ns_rs['zoneId'], ns_rs['id'])) + + error = not_approved_client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=403) + assert_that(error, is_('Do not have permissions to manage NS recordsets, please contact vinyldns-support')) + + finally: + if ns_rs: + client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + +def test_ns_delete_for_approved_group_passes(shared_zone_test_context): + """ + Tests that an ns delete on a group whose admin group is approved passes (only ok group is approved) + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + ns_rs = None + + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' 
+ } + ] + } + result = client.create_recordset(new_rs, status=202) + ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + delete_result = client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + ns_rs = None + + finally: + if ns_rs: + client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + + +def test_ns_delete_existing_ns_origin_fails(shared_zone_test_context): + """ + Tests that an ns delete for existing ns origin fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + list_results_page = client.list_recordsets(zone['id'], status=200)['recordSets'] + + apex_ns = [item for item in list_results_page if item['type'] == 'NS' and item['name'] in zone['name']][0] + + client.delete_recordset(apex_ns['zoneId'], apex_ns['id'], status=422) + +def test_delete_dotted_a_record_apex_succeeds(shared_zone_test_context): + """ + Test that creating an apex A record set containing dots succeeds. + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + apex_a_record = { + 'zoneId': zone['id'], + 'name': zone['name'].rstrip('.'), + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + try: + apex_a_response = client.create_recordset(apex_a_record, status=202) + apex_a_rs = client.wait_until_recordset_change_status(apex_a_response, 'Complete')['recordSet'] + assert_that(apex_a_rs['name'],is_(apex_a_record['name'] + '.')) + + finally: + delete_result = client.delete_recordset(apex_a_rs['zoneId'], apex_a_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') diff --git a/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py new file mode 100644 index 000000000..3cbabb185 --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/get_recordset_test.py @@ -0,0 +1,130 @@ +import pytest +import uuid + +from utils import * +from hamcrest import * +from vinyldns_python import VinylDNSClient + +def test_get_recordset_no_authorization(shared_zone_test_context): + """ + Test getting a recordset without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + client.get_recordset(shared_zone_test_context.ok_zone['id'], '12345', sign_request=False, status=401) + + +def test_get_recordset(shared_zone_test_context): + """ + Test getting a recordset + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_get_recordset', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # Get the recordset we just made and verify + result = client.get_recordset(result_rs['zoneId'], result_rs['id']) + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.1.1.1', is_in(records)) + assert_that('10.2.2.2', is_in(records)) + finally: + if result_rs: + delete_result = 
client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_get_recordset_zone_doesnt_exist(shared_zone_test_context): + """ + Test getting a recordset in a zone that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_get_recordset_zone_doesnt_exist', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result_rs = None + try: + result = client.create_recordset(new_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + client.get_recordset('5678', result_rs['id'], status=404) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_get_recordset_doesnt_exist(shared_zone_test_context): + """ + Test getting a new recordset that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + client.get_recordset(shared_zone_test_context.ok_zone['id'], '123', status=404) + + +def test_at_get_recordset(shared_zone_test_context): + """ + Test getting a recordset with name @ + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': '@', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'someText' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # Get the recordset we just made and verify + result = client.get_recordset(result_rs['zoneId'], result_rs['id']) + result_rs = result['recordSet'] + + expected_rs = new_rs + expected_rs['name'] = ok_zone['name'] + verify_recordset(result_rs, expected_rs) + + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('someText')) + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py new file mode 100644 index 000000000..7795385a1 --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/list_recordset_changes_test.py @@ -0,0 +1,176 @@ +from hamcrest import * +from utils import * +from vinyldns_python import VinylDNSClient + + +def check_changes_response(response, recordChanges=False, nextId=False, startFrom=False, maxItems=100): + """ + :param response: return value of list_recordset_changes() + :param recordChanges: true if not empty or False if empty, cannot check exact values because don't have access to all attributes + :param nextId: true if exists, false if doesn't, wouldn't be able to check exact value + :param startFrom: the string for startFrom or false if doesnt exist + :param maxItems: maxItems is defined as an Int by default so will always return an Int + """ + + assert_that(response, has_key('zoneId')) #always defined as random string + if recordChanges: + assert_that(response['recordSetChanges'], is_not(has_length(0))) + 
else: + assert_that(response['recordSetChanges'], has_length(0)) + if nextId: + assert_that(response, has_key('nextId')) + else: + assert_that(response, is_not(has_key('nextId'))) + if startFrom: + assert_that(response['startFrom'], is_(startFrom)) + else: + assert_that(response, is_not(has_key('startFrom'))) + assert_that(response['maxItems'], is_(maxItems)) + + for change in response['recordSetChanges']: + assert_that(change['userName'], is_('history-user')) + + +def test_list_recordset_changes_no_authorization(shared_zone_test_context): + """ + Test that recordset changes without authorization fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + client.list_recordset_changes('12345', sign_request=False, status=401) + + +def test_list_recordset_changes_member_auth_success(shared_zone_test_context): + """ + Test recordset changes succeeds with membership auth for member of admin group + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.ok_zone + client.list_recordset_changes(zone['id'], status=200) + + +def test_list_recordset_changes_member_auth_no_access(shared_zone_test_context): + """ + Test recordset changes fails for user not in admin group with no acl rules + """ + client = shared_zone_test_context.dummy_vinyldns_client + zone = shared_zone_test_context.ok_zone + client.list_recordset_changes(zone['id'], status=403) + + +def test_list_recordset_changes_member_auth_with_acl(shared_zone_test_context): + """ + Test recordset changes succeeds for user with acl rules + """ + zone = shared_zone_test_context.ok_zone + acl_rule = generate_acl_rule('Write', userId='dummy') + try: + client = shared_zone_test_context.dummy_vinyldns_client + + client.list_recordset_changes(zone['id'], status=403) + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + client.list_recordset_changes(zone['id'], status=200) + finally: + clear_ok_acl_rules(shared_zone_test_context) + + +def test_list_recordset_changes_no_start(zone_history_context): + """ + Test getting all recordset changes on one page (max items will default to default value) + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=None) + check_changes_response(response, recordChanges=True, startFrom=False, nextId=False) + + deleteChanges = response['recordSetChanges'][0:3] + updateChanges = response['recordSetChanges'][3:6] + createChanges = response['recordSetChanges'][6:9] + + for change in deleteChanges: + assert_that(change['changeType'], is_('Delete')) + for change in updateChanges: + assert_that(change['changeType'], is_('Update')) + for change in createChanges: + assert_that(change['changeType'], is_('Create')) + + +def test_list_recordset_changes_paging(zone_history_context): + """ + Test paging for recordset changes can use previous nextId as start key of next page + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + response_1 = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=3) + response_2 = client.list_recordset_changes(original_zone['id'], start_from=response_1['nextId'], max_items=3) + # nextId differs local/in dev where we get exactly the last item + # Requesting one over the total in the local in memory dynamo will force consistent behavior. 
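+    # each page is requested with start_from set to the previous page's nextId; asking for
+    # more items than remain should exhaust the changes and return a final page without a
+    # nextId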
+ response_3 = client.list_recordset_changes(original_zone['id'], start_from=response_2['nextId'], max_items=11) + + check_changes_response(response_1, recordChanges=True, nextId=True, startFrom=False, maxItems=3) + check_changes_response(response_2, recordChanges=True, nextId=True, startFrom=response_1['nextId'], maxItems=3) + check_changes_response(response_3, recordChanges=True, nextId=False, startFrom=response_2['nextId'], maxItems=11) + + for change in response_1['recordSetChanges']: + assert_that(change['changeType'], is_('Delete')) + for change in response_2['recordSetChanges']: + assert_that(change['changeType'], is_('Update')) + for change in response_3['recordSetChanges']: + assert_that(change['changeType'], is_('Create')) + + +def test_list_recordset_changes_exhausted(zone_history_context): + """ + Test next id is none when zone changes are exhausted + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=17) + check_changes_response(response, recordChanges=True, startFrom=False, nextId=False, maxItems=17) + + deleteChanges = response['recordSetChanges'][0:3] + updateChanges = response['recordSetChanges'][3:6] + createChanges = response['recordSetChanges'][6:9] + + for change in deleteChanges: + assert_that(change['changeType'], is_('Delete')) + for change in updateChanges: + assert_that(change['changeType'], is_('Update')) + for change in createChanges: + assert_that(change['changeType'], is_('Create')) + + +def test_list_recordset_returning_no_changes(zone_history_context): + """ + Pass in startFrom of 0 should return empty list because start key is created time + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + response = client.list_recordset_changes(original_zone['id'], start_from='0', max_items=None) + check_changes_response(response, recordChanges=False, startFrom='0', nextId=False) + + +def test_list_recordset_changes_default_max_items(zone_history_context): + """ + Test default max items is 100 + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + response = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=None) + check_changes_response(response, recordChanges=True, startFrom=False, nextId=False, maxItems=100) + + +def test_list_recordset_changes_max_items_boundaries(zone_history_context): + """ + Test 0 < max_items <= 100 + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + too_large = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=101, status=400) + too_small = client.list_recordset_changes(original_zone['id'], start_from=None, max_items=0, status=400) + + assert_that(too_large, is_("maxItems was 101, maxItems must be between 0 exclusive and 100 inclusive")) + assert_that(too_small, is_("maxItems was 0, maxItems must be between 0 exclusive and 100 inclusive")) diff --git a/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py new file mode 100644 index 000000000..78ddd5a51 --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/list_recordsets_test.py @@ -0,0 +1,303 @@ +import pytest +import sys +from utils import * + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from test_data import TestData + + 
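+# The fixture below seeds ten record sets named '00-test-list-recordsets-<n>-<type>',
+# alternating A and CNAME data, alongside the seven record sets that already exist in the
+# ok zone, giving the listing tests seventeen records in a predictable order.
+# copy.deepcopy is used when cloning the record set template; `copy` may already be pulled
+# in through `from utils import *`, but importing it explicitly keeps the dependency clear.
+import copy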
+class ListRecordSetsFixture(): + def __init__(self, shared_zone_test_context): + self.test_context = shared_zone_test_context.ok_zone + self.client = shared_zone_test_context.ok_vinyldns_client + self.new_rs = {} + existing_records = self.client.list_recordsets(self.test_context['id'])['recordSets'] + assert_that(existing_records, has_length(7)) + rs_template = { + 'zoneId': self.test_context['id'], + 'name': '00-test-list-recordsets-', + 'type': '', + 'ttl': 100, + 'records': [0] + } + rs_types = [ + ['A', + [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + ], + ['CNAME', + [ + { + 'cname': 'cname1.' + } + ] + ] + ] + self.all_records = {} + result_list = {} + for i in range(10): + self.all_records[i] = copy.deepcopy(rs_template) + self.all_records[i]['type'] = rs_types[(i % 2)][0] + self.all_records[i]['records'] = rs_types[(i % 2)][1] + self.all_records[i]['name'] = "{0}{1}-{2}".format(self.all_records[i]['name'], i, self.all_records[i]['type']) + result_list[i] = self.client.create_recordset(self.all_records[i], status=202) + self.client.wait_until_recordset_change_status(result_list[i], 'Complete') + self.new_rs[i] = result_list[i]['recordSet'] + + for i in range(7): + self.all_records[i + 10] = existing_records[i] + + def tear_down(self): + for key in self.new_rs: + self.client.delete_recordset(self.new_rs[key]['zoneId'], self.new_rs[key]['id'], status=202) + for key in self.new_rs: + self.client.wait_until_recordset_deleted(self.new_rs[key]['zoneId'], self.new_rs[key]['id']) + + def check_recordsets_page_accuracy(self, list_results_page, size, offset, nextId=False, startFrom=False, maxItems=100): + # validate fields + if nextId: + assert_that(list_results_page, has_key('nextId')) + else: + assert_that(list_results_page, is_not(has_key('nextId'))) + if startFrom: + assert_that(list_results_page['startFrom'], is_(startFrom)) + else: + assert_that(list_results_page, is_not(has_key('startFrom'))) + assert_that(list_results_page['maxItems'], is_(maxItems)) + + # validate actual page + list_results_recordSets_page = list_results_page['recordSets'] + assert_that(list_results_recordSets_page, has_length(size)) + for i in range(len(list_results_recordSets_page)): + assert_that(list_results_recordSets_page[i]['name'], is_(self.all_records[i+offset]['name'])) + verify_recordset(list_results_recordSets_page[i], self.all_records[i+offset]) + assert_that(list_results_recordSets_page[i]['accessLevel'], is_('Delete')) + + +@pytest.fixture(scope = "module") +def rs_fixture(request, shared_zone_test_context): + + fix = ListRecordSetsFixture(shared_zone_test_context) + def fin(): + fix.tear_down() + + request.addfinalizer(fin) + + return fix + + +def test_list_recordsets_no_start(rs_fixture): + """ + Test listing all recordsets + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + list_results = client.list_recordsets(ok_zone['id'], status=200) + rs_fixture.check_recordsets_page_accuracy(list_results, size=17, offset=0) + + +def test_list_recordsets_multiple_pages(rs_fixture): + """ + Test listing record sets in pages, using nextId from previous page for new one + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + # first page of 2 items + list_results_page = client.list_recordsets(ok_zone['id'], max_items=2, status=200) + rs_fixture.check_recordsets_page_accuracy(list_results_page, size=2, offset=0, nextId=True, maxItems=2) + + # second page of 5 items + start = list_results_page['nextId'] + list_results_page = 
client.list_recordsets(ok_zone['id'], start_from=start, max_items=5, status=200) + rs_fixture.check_recordsets_page_accuracy(list_results_page, size=5, offset=2, nextId=True, startFrom=start, maxItems=5) + + # third page of 4 items + start = list_results_page['nextId'] + # nextId differs local/in dev where we get exactly the last item + # If you put 3 items in local in memory dynamo and request three items, you always get an exclusive start key, + # but in real dynamo you don't. Requesting something over 4 will force consistent behavior + list_results_page = client.list_recordsets(ok_zone['id'], start_from=start, max_items=11, status=200) + rs_fixture.check_recordsets_page_accuracy(list_results_page, size=10, offset=7, nextId=False, startFrom=start, maxItems=11) + + +def test_list_recordsets_excess_page_size(rs_fixture): + """ + Test listing record set with page size larger than record sets count returns all records and nextId of None + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + #page of 19 items + list_results_page = client.list_recordsets(ok_zone['id'], max_items=19, status=200) + rs_fixture.check_recordsets_page_accuracy(list_results_page, size=17, offset=0, maxItems=19, nextId=False) + + +def test_list_recordsets_fails_max_items_too_large(rs_fixture): + """ + Test listing record set with page size larger than max page size + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + client.list_recordsets(ok_zone['id'], max_items=200, status=400) + + +def test_list_recordsets_fails_max_items_too_small(rs_fixture): + """ + Test listing record set with page size of zero + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + client.list_recordsets(ok_zone['id'], max_items=0, status=400) + + +def test_list_recordsets_default_size_is_100(rs_fixture): + """ + Test default page size is 100 + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + list_results = client.list_recordsets(ok_zone['id'], status=200) + rs_fixture.check_recordsets_page_accuracy(list_results, size=17, offset=0, maxItems=100) + + +def test_list_recordsets_with_record_name_filter_all(rs_fixture): + """ + Test listing all recordsets whose name contains a substring, all recordsets have substring 'list' in name + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + list_results = client.list_recordsets(ok_zone['id'], record_name_filter="list", status=200) + rs_fixture.check_recordsets_page_accuracy(list_results, size=10, offset=0) + + +def test_list_recordsets_with_record_name_filter_and_page_size(rs_fixture): + """ + First Listing 4 out of 5 recordsets with substring 'CNAME' in name + Second Listing 5 out of 5 recordsets with substring 'CNAME' in name with an excess page size of 7 + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + #page of 4 items + list_results = client.list_recordsets(ok_zone['id'], max_items=4, record_name_filter="CNAME", status=200) + assert_that(list_results['recordSets'], has_length(4)) + + list_results_records = list_results['recordSets']; + for i in range(len(list_results_records)): + assert_that(list_results_records[i]['name'], contains_string('CNAME')) + + #page of 5 items but excess max items + list_results = client.list_recordsets(ok_zone['id'], max_items=7, record_name_filter="CNAME", status=200) + assert_that(list_results['recordSets'], has_length(5)) + + list_results_records = list_results['recordSets']; + for i in range(len(list_results_records)): + 
assert_that(list_results_records[i]['name'], contains_string('CNAME')) + + +def test_list_recordsets_with_record_name_filter_and_chaining_pages_with_nextId(rs_fixture): + """ + First Listing 2 out 5 recordsets with substring 'CNAME' in name, then using next Id of + previous page to be the start key of next page + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + #page of 2 items + list_results = client.list_recordsets(ok_zone['id'], max_items=2, record_name_filter="CNAME", status=200) + assert_that(list_results['recordSets'], has_length(2)) + start_key = list_results['nextId'] + + #page of 2 items + list_results = client.list_recordsets(ok_zone['id'], start_from=start_key, max_items=2, record_name_filter="CNAME", status=200) + assert_that(list_results['recordSets'], has_length(2)) + + list_results_records = list_results['recordSets']; + assert_that(list_results_records[0]['name'], contains_string('5')) + assert_that(list_results_records[1]['name'], contains_string('7')) + + +def test_list_recordsets_with_record_name_filter_one(rs_fixture): + """ + Test listing all recordsets whose name contains a substring, only one record set has substring '8' in name + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + list_results = client.list_recordsets(ok_zone['id'], record_name_filter="8", status=200) + rs_fixture.check_recordsets_page_accuracy(list_results, size=1, offset=8) + + +def test_list_recordsets_with_record_name_filter_none(rs_fixture): + """ + Test listing all recordsets whose name contains a substring, no record set has substring 'Dummy' in name + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + + list_results = client.list_recordsets(ok_zone['id'], record_name_filter="Dummy", status=200) + rs_fixture.check_recordsets_page_accuracy(list_results, size=0, offset=0) + + +def test_list_recordsets_no_authorization(rs_fixture): + """ + Test listing record sets without authorization + """ + client = rs_fixture.client + ok_zone = rs_fixture.test_context + client.list_recordsets(ok_zone['id'], sign_request=False, status=401) + + +def test_list_recordsets_with_acl(shared_zone_test_context): + """ + Test listing all recordsets + """ + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + new_rs = [] + + try: + acl_rule1 = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*') + acl_rule2 = generate_acl_rule('Write', userId='dummy', recordMask='test-list-recordsets-with-acl1') + + rec1 = seed_text_recordset(client, "test-list-recordsets-with-acl1", ok_zone) + rec2 = seed_text_recordset(client, "test-list-recordsets-with-acl2", ok_zone) + rec3 = seed_text_recordset(client, "BAD-test-list-recordsets-with-acl", ok_zone) + + new_rs = [rec1, rec2, rec3] + + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + result = shared_zone_test_context.dummy_vinyldns_client.list_recordsets(ok_zone['id'], status=200) + result = result['recordSets'] + + for rs in result: + if rs['name'] == rec1['name']: + verify_recordset(rs, rec1) + assert_that(rs['accessLevel'], is_('Write')) + elif rs['name'] == rec2['name']: + verify_recordset(rs, rec2) + assert_that(rs['accessLevel'], is_('Read')) + elif rs['name'] == rec3['name']: + verify_recordset(rs, rec3) + assert_that(rs['accessLevel'], is_('NoAccess')) + + finally: + clear_ok_acl_rules(shared_zone_test_context) + for rs in new_rs: + client.delete_recordset(rs['zoneId'], rs['id'], status=202) + for 
rs in new_rs: + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) diff --git a/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py new file mode 100644 index 000000000..b6e84574e --- /dev/null +++ b/modules/api/functional_test/live_tests/recordsets/update_recordset_test.py @@ -0,0 +1,1931 @@ +import pytest +import copy +from utils import * + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from test_data import TestData +from vinyldns_context import VinylDNSTestContext +import time + + +def test_update_a_with_same_name_as_cname(shared_zone_test_context): + """ + Test that updating a A record fails if the name change conflicts with an existing CNAME name + """ + client = shared_zone_test_context.ok_vinyldns_client + + try: + cname_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.' + } + ] + } + + a_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'unique-test-name', + 'type': 'A', + 'ttl': 500, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + + cname_create = client.create_recordset(cname_rs, status=202) + cname_record = client.wait_until_recordset_change_status(cname_create, 'Complete')['recordSet'] + + a_create = client.create_recordset(a_rs, status=202) + a_record = client.wait_until_recordset_change_status(a_create, 'Complete')['recordSet'] + + a_rs_update = copy.deepcopy(a_record) + a_rs_update['name'] = 'duplicate-test-name' + + error = client.update_recordset(a_rs_update, status=409) + assert_that(error, is_('RecordSet with name duplicate-test-name and type CNAME already exists in zone system-test.')) + finally: + delete_result_cname = client.delete_recordset(cname_record['zoneId'], cname_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result_cname, 'Complete') + delete_result_a = client.delete_recordset(a_record['zoneId'], a_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result_a, 'Complete') + + +def test_update_cname_with_same_name_as_another_record(shared_zone_test_context): + """ + Test that updating a CNAME record fails if the name change conflicts with an existing record name + """ + client = shared_zone_test_context.ok_vinyldns_client + + try: + cname_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'unique-test-name', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.' 
+ } + ] + } + + a_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'duplicate-test-name', + 'type': 'A', + 'ttl': 500, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + + cname_create = client.create_recordset(cname_rs, status=202) + cname_record = client.wait_until_recordset_change_status(cname_create, 'Complete')['recordSet'] + + a_create = client.create_recordset(a_rs, status=202) + a_record = client.wait_until_recordset_change_status(a_create, 'Complete')['recordSet'] + + cname_rs_update = copy.deepcopy(cname_record) + cname_rs_update['name'] = 'duplicate-test-name' + + error = client.update_recordset(cname_rs_update, status=409) + assert_that(error, is_('RecordSet with name duplicate-test-name already exists in zone system-test., CNAME record cannot use duplicate name')) + finally: + delete_result_cname = client.delete_recordset(cname_record['zoneId'], cname_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result_cname, 'Complete') + delete_result_a = client.delete_recordset(a_record['zoneId'], a_record['id'], status=202) + client.wait_until_recordset_change_status(delete_result_a, 'Complete') + + +def test_update_cname_with_multiple_records(shared_zone_test_context): + """ + Test that creating a CNAME record set and then updating with multiple records returns an error + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_update_cname_with_multiple_records', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # update the record set, adding another cname record so there are multiple + updated_rs = copy.deepcopy(result_rs) + updated_rs['records'] = [ + { + 'cname': 'cname1.' + }, + { + 'cname': 'cname2.' + } + ] + + errors = client.update_recordset(updated_rs, status=400)['errors'] + assert_that(errors[0], is_("CNAME record sets cannot contain multiple records")) + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_update_cname_with_multiple_records(shared_zone_test_context): + """ + Test that creating a CNAME record set and then updating with multiple records returns an error + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test_update_cname_with_multiple_records', + 'type': 'CNAME', + 'ttl': 500, + 'records': [ + { + 'cname': 'cname1.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # update the record set, adding another cname record so there are multiple + updated_rs = copy.deepcopy(result_rs) + updated_rs['records'] = [ + { + 'cname': 'cname1.' + }, + { + 'cname': 'cname2.' 
+ } + ] + + errors = client.update_recordset(updated_rs, status=400)['errors'] + assert_that(errors[0], is_("CNAME record sets cannot contain multiple records")) + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_update_change_name_success(shared_zone_test_context): + """ + Tests updating a record set and changing the name works + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + new_rs = { + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'test-update-change-name-success-1', + 'type': 'A', + 'ttl': 500, + 'records': [ + { + 'address': '1.1.1.1' + }, + { + 'address': '1.1.1.2' + } + ] + } + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # update the record set, changing the name + updated_rs = copy.deepcopy(result_rs) + updated_rs['name'] = 'test-update-change-name-success-2' + updated_rs['ttl'] = 600 + updated_rs['records'] = [ + { + 'address': '2.2.2.2' + } + ] + + result = client.update_recordset(updated_rs, status=202) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(600)) + assert_that(result_rs['name'], is_('test-update-change-name-success-2')) + assert_that(result_rs['records'][0]['address'], is_('2.2.2.2')) + assert_that(result_rs['records'], has_length(1)) + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +@pytest.mark.parametrize('record_name,test_rs', TestData.FORWARD_RECORDS) +def test_update_recordset_forward_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test updating a record set in a forward zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = dict(test_rs, zoneId=shared_zone_test_context.system_test_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # now update + update_rs = result_rs + update_rs['ttl'] = 1000 + + result = client.update_recordset(update_rs, status=202) + assert_that(result['status'], is_('Pending')) + result_rs = result['recordSet'] + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(1000)) + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +@pytest.mark.parametrize('record_name,test_rs', TestData.REVERSE_RECORDS) +def test_reverse_update_reverse_record_types(shared_zone_test_context, record_name, test_rs): + """ + Test updating a record set in a reverse zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = dict(test_rs, 
zoneId=shared_zone_test_context.ip4_reverse_zone['id']) + + result = client.create_recordset(new_rs, status=202) + assert_that(result['status'], is_('Pending')) + print str(result) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + + records = result_rs['records'] + + for record in new_rs['records']: + assert_that(records, has_item(has_entries(record))) + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # now update + update_rs = result_rs + update_rs['ttl'] = 1000 + + result = client.update_recordset(update_rs, status=202) + assert_that(result['status'], is_('Pending')) + result_rs = result['recordSet'] + + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(1000)) + + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_update_recordset_long_name(shared_zone_test_context): + """ + Test updating a record set where the name is too long + """ + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + + try: + new_rs = { + 'id': 'abc', + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'a', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + result = client.create_recordset(new_rs, status=202) + + result_rs = result['recordSet'] + verify_recordset(result_rs, new_rs) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + update_rs = { + 'id': 'abc', + 'zoneId': shared_zone_test_context.system_test_zone['id'], + 'name': 'a'*256, + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + } + client.update_recordset(update_rs, status=400) + finally: + if result_rs: + result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + if result: + client.wait_until_recordset_change_status(result, 'Complete') + + +def test_user_can_update_record_in_zone_it_owns(shared_zone_test_context): + """ + Test user can update a record that it owns + """ + client = shared_zone_test_context.ok_vinyldns_client + rs = None + try: + rs = client.create_recordset( + { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_user_can_update_record_in_zone_it_owns', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + } + ] + }, status=202 + )['recordSet'] + client.wait_until_recordset_exists(rs['zoneId'], rs['id']) + + rs['ttl'] = rs['ttl'] + 1000 + + result = client.update_recordset(rs, status=202, retries=3) + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(rs['ttl'])) + finally: + if rs: + try: + client.delete_recordset(rs['zoneId'], rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + finally: + pass + + +def test_update_recordset_no_authorization(shared_zone_test_context): + """ + Test updating a record set without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + rs = { + 'id': '12345', + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_update_recordset_no_authorization', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + client.update_recordset(rs, sign_request=False, status=401) + + +def 
test_update_recordset_replace_2_records_with_1_different_record(shared_zone_test_context): + """ + Test creating a new record set in an existing zone and then updating that record set to replace the existing + records with one new one + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'test_update_recordset_replace_2_records_with_1_different_record', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + verify_recordset(result_rs, new_rs) + + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.1.1.1', is_in(records)) + assert_that('10.2.2.2', is_in(records)) + + result_rs['ttl'] = 200 + + modified_records = [ + { + 'address': '1.1.1.1' + } + ] + result_rs['records'] = modified_records + + result = client.update_recordset(result_rs, status=202) + assert_that(result['status'], is_('Pending')) + result = client.wait_until_recordset_change_status(result, 'Complete') + + assert_that(result['changeType'], is_('Update')) + assert_that(result['status'], is_('Complete')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + # make sure the update was applied + result_rs = result['recordSet'] + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(1)) + assert_that(records[0], is_('1.1.1.1')) + + # verify that the record exists in the backend dns server + answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(1)) + assert_that('1.1.1.1', is_in(rdata_strings)) + + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_existing_record_set_add_record(shared_zone_test_context): + """ + Test creating a new record set in an existing zone and then updating that record set to add a record + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'test_update_existing_record_set_add_record', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.2.2.2' + } + ] + } + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + verify_recordset(result_rs, new_rs) + + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(1)) + assert_that(records[0], is_('10.2.2.2')) + + answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = 
rdata(answers) + print "GOT ANSWERS BACK FOR INITIAL CREATE:" + print str(rdata_strings) + + # Update the record set, adding a new record to the existing one + modified_records = [ + { + 'address': '4.4.4.8' + }, + { + 'address': '10.2.2.2' + } + ] + result_rs['records'] = modified_records + + import json + print "UPDATING RECORD SET, NEW RECORD SET IS..." + print json.dumps(result_rs, indent=3) + + result = client.update_recordset(result_rs, status=202) + assert_that(result['status'], is_('Pending')) + result = client.wait_until_recordset_change_status(result, 'Complete') + + assert_that(result['changeType'], is_('Update')) + assert_that(result['status'], is_('Complete')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + # make sure the update was applied + result_rs = result['recordSet'] + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(2)) + assert_that('10.2.2.2', is_in(records)) + assert_that('4.4.4.8', is_in(records)) + + answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + print "GOT BACK ANSWERS FOR UPDATE" + print str(rdata_strings) + assert_that(rdata_strings, has_length(2)) + assert_that('10.2.2.2', is_in(rdata_strings)) + assert_that('4.4.4.8', is_in(rdata_strings)) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_existing_record_set_delete_record(shared_zone_test_context): + """ + Test creating a new record set in an existing zone and then updating that record set to delete a record + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': 'test_update_existing_record_set_delete_record', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + }, + { + 'address': '10.3.3.3' + }, + { + 'address': '10.4.4.4' + } + ] + } + result = client.create_recordset(new_rs, status=202) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + verify_recordset(result_rs, new_rs) + + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(4)) + assert_that(records[0], is_('10.1.1.1')) + assert_that(records[1], is_('10.2.2.2')) + assert_that(records[2], is_('10.3.3.3')) + assert_that(records[3], is_('10.4.4.4')) + + answers = dns_resolve(ok_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(4)) + + # Update the record set, delete three records and leave one + modified_records = [ + { + 'address': '10.2.2.2' + } + ] + result_rs['records'] = modified_records + + result = client.update_recordset(result_rs, status=202) + result = client.wait_until_recordset_change_status(result, 'Complete') + + # make sure the update was applied + result_rs = result['recordSet'] + records = [x['address'] for x in result_rs['records']] + assert_that(records, has_length(1)) + assert_that('10.2.2.2', is_in(records)) + + # do a DNS query + answers = dns_resolve(ok_zone, 
result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(rdata_strings, has_length(1)) + assert_that('10.2.2.2', is_in(rdata_strings)) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_ipv4_ptr_recordset_with_verify(shared_zone_test_context): + """ + Test updating an IPv4 PTR record set returns the updated values after complete + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse4_zone = shared_zone_test_context.ip4_reverse_zone + result_rs = None + try: + orig_rs = { + 'zoneId': reverse4_zone['id'], + 'name': '30.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + result = client.create_recordset(orig_rs, status=202) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Updating..." + + new_ptr_target = 'www.vinyldns.' + new_rs = result_rs + print new_rs + new_rs['records'][0]['ptrdname'] = new_ptr_target + print new_rs + result = client.update_recordset(new_rs, status=202) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!updated recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." + + print result_rs + records = result_rs['records'] + assert_that(records[0]['ptrdname'], is_(new_ptr_target)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + # verify that the record exists in the backend dns server + answers = dns_resolve(reverse4_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + + assert_that(rdata_strings, has_length(1)) + assert_that(rdata_strings[0], is_(new_ptr_target)) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_ipv6_ptr_recordset(shared_zone_test_context): + """ + Test updating an IPv6 PTR record set returns the updated values after complete + """ + client = shared_zone_test_context.ok_vinyldns_client + reverse6_zone = shared_zone_test_context.ip6_reverse_zone + result_rs = None + try: + orig_rs = { + 'zoneId': reverse6_zone['id'], + 'name': '0.6.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0', + 'type': 'PTR', + 'ttl': 100, + 'records': [ + { + 'ptrdname': 'ftp.vinyldns.' + } + ] + } + result = client.create_recordset(orig_rs, status=202) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!recordset is active! Updating..." + + new_ptr_target = 'www.vinyldns.' + new_rs = result_rs + print new_rs + new_rs['records'][0]['ptrdname'] = new_ptr_target + print new_rs + result = client.update_recordset(new_rs, status=202) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + print "\r\n\r\n!!!updated recordset is active! Verifying..." + + verify_recordset(result_rs, new_rs) + print "\r\n\r\n!!!recordset verified..." 
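+        # the updated record set returned by the API should now carry the new PTR target;
+        # the assertions below check it in the API response first, then against the DNS backend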
+ + print result_rs + records = result_rs['records'] + assert_that(records[0]['ptrdname'], is_(new_ptr_target)) + + print "\r\n\r\n!!!verifying recordset in dns backend" + answers = dns_resolve(reverse6_zone, result_rs['name'], result_rs['type']) + rdata_strings = rdata(answers) + assert_that(rdata_strings, has_length(1)) + assert_that(rdata_strings[0], is_(new_ptr_target)) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_recordset_fails_when_changing_name_to_an_existing_name(shared_zone_test_context): + """ + Test creating a new record set fails when an update attempts to change the name of one recordset + to the name of another that already exists + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs_1 = None + result_rs_2 = None + try: + new_rs_1 = { + 'zoneId': ok_zone['id'], + 'name': 'update_recordset_fails_when_changing_name_to_an_existing_name', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = client.create_recordset(new_rs_1, status=202) + result_rs_1 = result['recordSet'] + result_rs_1 = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + new_rs_2 = { + 'zoneId': ok_zone['id'], + 'name': 'update_recordset_fails_when_changing_name_to_an_existing_name_2', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '2.2.2.2' + }, + { + 'address': '3.3.3.3' + } + ] + } + result = client.create_recordset(new_rs_2, status=202) + result_rs_2 = result['recordSet'] + result_rs_2 = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + # attempt to change the name of the second to the name of the first + result_rs_2['name'] = result_rs_1['name'] + + client.update_recordset(result_rs_2, status=409) + + finally: + if result_rs_1: + delete_result = client.delete_recordset(result_rs_1['zoneId'], result_rs_1['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + if result_rs_2: + delete_result = client.delete_recordset(result_rs_2['zoneId'], result_rs_2['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_recordset_zone_not_found(shared_zone_test_context): + """ + Test updating a record set in a zone that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = None + + try: + new_rs = { + 'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_update_recordset_zone_not_found', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + result = client.create_recordset(new_rs, status=202) + new_rs = result['recordSet'] + client.wait_until_recordset_exists(new_rs['zoneId'], new_rs['id']) + new_rs['zoneId'] = '1234' + client.update_recordset(new_rs, status=404) + finally: + if new_rs: + try: + client.delete_recordset(shared_zone_test_context.ok_zone['id'], new_rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(shared_zone_test_context.ok_zone['id'], new_rs['id']) + finally: + pass + + +def test_update_recordset_not_found(shared_zone_test_context): + """ + Test updating a record set that doesn't exist should return a 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + new_rs = { + 'id': 'nothere', + 
'zoneId': shared_zone_test_context.ok_zone['id'], + 'name': 'test_update_recordset_not_found', + 'type': 'A', + 'ttl': 100, + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + client.update_recordset(new_rs, status=404) + + +def test_at_update_recordset(shared_zone_test_context): + """ + Test creating a new record set with name @ in an existing zone and then updating that recordset with name @ + """ + client = shared_zone_test_context.ok_vinyldns_client + ok_zone = shared_zone_test_context.ok_zone + result_rs = None + try: + new_rs = { + 'zoneId': ok_zone['id'], + 'name': '@', + 'type': 'TXT', + 'ttl': 100, + 'records': [ + { + 'text': 'someText' + } + ] + } + + result = client.create_recordset(new_rs, status=202) + print str(result) + + assert_that(result['changeType'], is_('Create')) + assert_that(result['status'], is_('Pending')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + result_rs = result['recordSet'] + result_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + expected_rs = new_rs + expected_rs['name'] = ok_zone['name'] + verify_recordset(result_rs, expected_rs) + + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('someText')) + + result_rs['ttl'] = 200 + result_rs['records'][0]['text'] = 'differentText' + + result = client.update_recordset(result_rs, status=202) + assert_that(result['status'], is_('Pending')) + result = client.wait_until_recordset_change_status(result, 'Complete') + + assert_that(result['changeType'], is_('Update')) + assert_that(result['status'], is_('Complete')) + assert_that(result['created'], is_not(none())) + assert_that(result['userId'], is_not(none())) + + # make sure the update was applied + result_rs = result['recordSet'] + records = result_rs['records'] + assert_that(records, has_length(1)) + assert_that(records[0]['text'], is_('differentText')) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_can_update_record_via_user_acl_rule(shared_zone_test_context): + """ + Test user WRITE ACL rule - update + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy') + + result_rs = seed_text_recordset(client, "test_user_can_update_record_via_user_acl_rule", ok_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + # Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + # add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + # Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_can_update_record_via_group_acl_rule(shared_zone_test_context): + """ + Test group 
WRITE ACL rule - update + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id']) + try: + result_rs = seed_text_recordset(client, "test_user_can_update_record_via_group_acl_rule", ok_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + # Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + + # add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + # Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_rule_priority_over_group_acl_rule(shared_zone_test_context): + """ + Test user rule takes priority over group rule + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + group_acl_rule = generate_acl_rule('Read', groupId=shared_zone_test_context.dummy_group['id']) + user_acl_rule = generate_acl_rule('Write', userId='dummy') + + result_rs = seed_text_recordset(client, "test_user_rule_priority_over_group_acl_rule", ok_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #add rules + add_ok_acl_rules(shared_zone_test_context, [group_acl_rule, user_acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=(202, 404)) + client.wait_until_recordset_deleted(result_rs['zoneId'], result_rs['id']) + + +def test_more_restrictive_acl_rule_priority(shared_zone_test_context): + """ + Test more restrictive rule takes priority + """ + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + read_rule = generate_acl_rule('Read', userId='dummy') + write_rule = generate_acl_rule('Write', userId='dummy') + + result_rs = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #add rules + add_ok_acl_rules(shared_zone_test_context, [read_rule, write_rule]) + + #Dummy user cannot update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_record_type_success(shared_zone_test_context): + """ + Test a rule on a specific record type applies to that type + """ + 
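+    # flow: seed a TXT record set, confirm the dummy user is denied (403), then add a Write
+    # ACL rule scoped to recordTypes=['TXT'] and confirm the dummy user can update it (202)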
result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['TXT']) + + result_rs = seed_text_recordset(client, "test_acl_rule_with_record_type_success", ok_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + z = client.get_zone(ok_zone['id']) + print "this is the zone before we try an update..." + print json.dumps(z, indent=3) + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_cidr_ip4_success(shared_zone_test_context): + """ + Test a rule on a specific record type applies to that type + """ + result_rs = None + ip4_zone = shared_zone_test_context.ip4_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="172.30.0.0/32") + + result_rs = seed_ptr_recordset(client, "0.0", ip4_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ip4_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ip4_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_cidr_ip4_failure(shared_zone_test_context): + """ + Test a rule on a specific record type applies to that type + """ + result_rs = None + ip4_zone = shared_zone_test_context.ip4_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="172.30.0.0/32") + + result_rs = seed_ptr_recordset(client, "0.1", ip4_zone) + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ip4_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user still cant update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ip4_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + 
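+            # block until the cleanup delete finishes so subsequent tests see a clean zone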
client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_cidr_ip6_success(shared_zone_test_context): + """ + Test a rule on a specific record type applies to that type + """ + result_rs = None + ip6_zone = shared_zone_test_context.ip6_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127") + + result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone) + + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ip6_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ip6_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_cidr_ip6_failure(shared_zone_test_context): + """ + Test a rule on a specific record type applies to that type + """ + result_rs = None + ip6_zone = shared_zone_test_context.ip6_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/127") + + result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.5.0.0.0.0.0", ip6_zone) + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ip6_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user still cant update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ip6_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_more_restrictive_cidr_ip4_rule_priority(shared_zone_test_context): + """ + Test more restrictive cidr rule takes priority + """ + ip4_zone = shared_zone_test_context.ip4_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + slash16_rule = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR'], recordMask="172.30.0.0/16") + slash32_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="172.30.0.0/32") + + result_rs = seed_ptr_recordset(client, "0.0", ip4_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #add rules + add_ip4_acl_rules(shared_zone_test_context, [slash16_rule, slash32_rule]) + + #Dummy user can update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + finally: + clear_ip4_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + 
client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_more_restrictive_cidr_ip6_rule_priority(shared_zone_test_context): + """ + Test more restrictive cidr rule takes priority + """ + ip6_zone = shared_zone_test_context.ip6_reverse_zone + client = shared_zone_test_context.ok_vinyldns_client + result_rs = None + try: + slash50_rule = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR'], recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50") + slash100_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/100") + + + result_rs = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #add rules + add_ip6_acl_rules(shared_zone_test_context, [slash50_rule, slash100_rule]) + + #Dummy user can update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + finally: + clear_ip6_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_mix_of_cidr_ip6_and_acl_rules_priority(shared_zone_test_context): + """ + A and AAAA should have read from mixed rule, PTR should have Write from rule with mask + """ + ip6_zone = shared_zone_test_context.ip6_reverse_zone + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + result_rs_PTR = None + result_rs_A = None + result_rs_AAAA = None + + try: + mixed_type_rule_no_mask = generate_acl_rule('Read', userId='dummy', recordTypes=['PTR','AAAA','A']) + ptr_rule_with_mask = generate_acl_rule('Write', userId='dummy', recordTypes=['PTR'], recordMask="fd69:27cc:fe91:0000:0000:0000:0000:0000/50") + + result_rs_PTR = seed_ptr_recordset(client, "0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0", ip6_zone) + result_rs_PTR['ttl'] = result_rs_PTR['ttl'] + 1000 + + result_rs_A = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_1", ok_zone) + result_rs_A['ttl'] = result_rs_A['ttl'] + 1000 + + result_rs_AAAA = seed_text_recordset(client, "test_more_restrictive_acl_rule_priority_2", ok_zone) + result_rs_AAAA['ttl'] = result_rs_AAAA['ttl'] + 1000 + + #add rules + add_ip6_acl_rules(shared_zone_test_context, [mixed_type_rule_no_mask, ptr_rule_with_mask]) + add_ok_acl_rules(shared_zone_test_context, [mixed_type_rule_no_mask, ptr_rule_with_mask]) + + #Dummy user cannot update record for A,AAAA, but can for PTR + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_PTR, status=202) + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_A, status=403) + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs_AAAA, status=403) + finally: + clear_ip6_acl_rules(shared_zone_test_context) + clear_ok_acl_rules(shared_zone_test_context) + if result_rs_A: + delete_result = client.delete_recordset(result_rs_A['zoneId'], result_rs_A['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + if result_rs_AAAA: + delete_result = client.delete_recordset(result_rs_AAAA['zoneId'], result_rs_AAAA['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + if result_rs_PTR: + delete_result = client.delete_recordset(result_rs_PTR['zoneId'], result_rs_PTR['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 
'Complete') + + +def test_acl_rule_with_wrong_record_type(shared_zone_test_context): + """ + Test a rule on a specific record type does not apply to other types + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=['CNAME']) + + result_rs = seed_text_recordset(client, "test_acl_rule_with_wrong_record_type", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user cannot update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_empty_acl_record_type_applies_to_all(shared_zone_test_context): + """ + Test an empty record set rule applies to all types + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', userId='dummy', recordTypes=[]) + + result_rs = seed_text_recordset(client, "test_empty_acl_record_type_applies_to_all", ok_zone) + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = expected_ttl + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403, retries=3) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_fewer_record_types_prioritized(shared_zone_test_context): + """ + Test a rule on a specific record type takes priority over a group of types + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule_base = generate_acl_rule('Write', userId='dummy') + acl_rule1 = generate_acl_rule('Write', userId='dummy', recordTypes=['TXT', 'CNAME']) + acl_rule2 = generate_acl_rule('Read', userId='dummy', recordTypes=['TXT']) + + result_rs = seed_text_recordset(client, "test_acl_rule_with_fewer_record_types_prioritized", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + add_ok_acl_rules(shared_zone_test_context, [acl_rule_base]) + + #Dummy user can update record in zone with base rule + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + #Dummy user cannot update record + result_rs['ttl'] = 
result_rs['ttl'] + 1000 + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_user_over_record_type_priority(shared_zone_test_context): + """ + Test the user priority takes precedence over record type priority + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule_base = generate_acl_rule('Write', userId='dummy') + acl_rule1 = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordTypes=['TXT']) + acl_rule2 = generate_acl_rule('Read', userId='dummy', recordTypes=['TXT', 'CNAME']) + + result_rs = seed_text_recordset(client, "test_acl_rule_user_over_record_type_priority", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + add_ok_acl_rules(shared_zone_test_context, [acl_rule_base]) + + #Dummy user can update record in zone with base rule + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + #Dummy user cannot update record + result_rs['ttl'] = result_rs['ttl'] + 1000 + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_record_mask_success(shared_zone_test_context): + """ + Test rule with record mask allows user to update record + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*') + + result_rs = seed_text_recordset(client, "test_acl_rule_with_record_mask_success", ok_zone) + expected_ttl = result_rs['ttl'] + 1000 + result_rs['ttl'] = expected_ttl + + #Dummy user cannot update record in zone + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user can update record + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + assert_that(result_rs['ttl'], is_(expected_ttl)) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_record_mask_failure(shared_zone_test_context): + """ + Test rule with unmatching record mask is not applied + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], 
recordMask='bad.*') + + result_rs = seed_text_recordset(client, "test_acl_rule_with_record_mask_failure", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + + #Dummy user cannot update record + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_acl_rule_with_defined_mask_prioritized(shared_zone_test_context): + """ + Test a rule on a specific record mask takes priority over All + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule_base = generate_acl_rule('Write', userId='dummy') + acl_rule1 = generate_acl_rule('Write', userId='dummy', recordMask='.*') + acl_rule2 = generate_acl_rule('Read', userId='dummy', recordMask='test.*') + + result_rs = seed_text_recordset(client, "test_acl_rule_with_defined_mask_prioritized", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + add_ok_acl_rules(shared_zone_test_context, [acl_rule_base]) + + #Dummy user can update record in zone with base rule + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + #Dummy user cannot update record + result_rs['ttl'] = result_rs['ttl'] + 1000 + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_user_rule_over_mask_prioritized(shared_zone_test_context): + """ + Test user/group logic priority over record mask + """ + + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + acl_rule_base = generate_acl_rule('Write', userId='dummy') + acl_rule1 = generate_acl_rule('Write', groupId=shared_zone_test_context.dummy_group['id'], recordMask='test.*') + acl_rule2 = generate_acl_rule('Read', userId='dummy', recordMask='.*') + + result_rs = seed_text_recordset(client, "test_user_rule_over_mask_prioritized", ok_zone) + result_rs['ttl'] = result_rs['ttl'] + 1000 + + add_ok_acl_rules(shared_zone_test_context, [acl_rule_base]) + + #Dummy user can update record in zone with base rule + result = shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=202) + result_rs = shared_zone_test_context.ok_vinyldns_client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + #add rule + add_ok_acl_rules(shared_zone_test_context, [acl_rule1, acl_rule2]) + + #Dummy user cannot update record + result_rs['ttl'] = result_rs['ttl'] + 1000 + shared_zone_test_context.dummy_vinyldns_client.update_recordset(result_rs, status=403) + finally: + clear_ok_acl_rules(shared_zone_test_context) + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 
'Complete') + + +def test_ns_update_for_non_approved_group_fails(shared_zone_test_context): + """ + Tests that someone not in the approved admin group cannot update ns record (only ok group is approved for tests) + """ + + client = shared_zone_test_context.ok_vinyldns_client + not_approved_client = shared_zone_test_context.dummy_vinyldns_client + zone = shared_zone_test_context.parent_zone + + ns_rs = None + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + changed_rs = ns_rs + changed_rs['ttl'] = changed_rs['ttl'] + 100 + + error = not_approved_client.update_recordset(changed_rs, status=403) + assert_that(error, is_('Do not have permissions to manage NS recordsets, please contact vinyldns-support')) + + finally: + if ns_rs: + client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + + +def test_ns_update_for_approved_group_passes(shared_zone_test_context): + """ + Tests that someone in the approved admin group ok-group can update ns record (only ok group is approved for tests) + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + ns_rs = None + + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'someNS', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' + } + ] + } + result = client.create_recordset(new_rs, status=202) + ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + changed_rs = ns_rs + changed_rs['ttl'] = changed_rs['ttl'] + 100 + + change_result = client.update_recordset(changed_rs, status=202) + client.wait_until_recordset_change_status(change_result, 'Complete') + + finally: + if ns_rs: + client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + + +def test_update_to_dotted_host_fails(shared_zone_test_context): + """ + Tests that a dotted host record set update fails + """ + result_rs = None + ok_zone = shared_zone_test_context.ok_zone + client = shared_zone_test_context.ok_vinyldns_client + try: + result_rs = seed_text_recordset(client, "update_with_dots", ok_zone) + + result_rs['name'] = "update_with.dots" + + error = client.update_recordset(result_rs, status=422) + assert_that(error, is_('Record with name update_with.dots is a dotted host which is illegal in this zone ok.')) + finally: + if result_rs: + delete_result = client.delete_recordset(result_rs['zoneId'], result_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_ns_update_change_ns_name_to_origin_fails(shared_zone_test_context): + """ + Tests that an ns update for origin fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + ns_rs = None + + try: + new_rs = { + 'zoneId': zone['id'], + 'name': 'update-change-ns-name-to-origin', + 'type': 'NS', + 'ttl': 38400, + 'records': [ + { + 'nsdname': 'ns1.parent.com.' 
+ } + ] + } + result = client.create_recordset(new_rs, status=202) + ns_rs = client.wait_until_recordset_change_status(result, 'Complete')['recordSet'] + + changed_rs = ns_rs + changed_rs['name'] = "@" + + client.update_recordset(changed_rs, status=409) + + finally: + if ns_rs: + client.delete_recordset(ns_rs['zoneId'], ns_rs['id'], status=(202,404)) + client.wait_until_recordset_deleted(ns_rs['zoneId'], ns_rs['id']) + + +def test_ns_update_existing_ns_origin_fails(shared_zone_test_context): + """ + Tests that an ns update for existing ns origin fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + list_results_page = client.list_recordsets(zone['id'], status=200)['recordSets'] + + apex_ns = [item for item in list_results_page if item['type'] == 'NS' and item['name'] in zone['name']][0] + + apex_ns['ttl'] = apex_ns['ttl'] + 100 + + client.update_recordset(apex_ns, status=422) + +def test_update_dotted_a_record_not_apex_fails(shared_zone_test_context): + """ + Test that updating a dotted host name A record set fails. + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + dotted_host_rs = { + 'zoneId': zone['id'], + 'name': 'fubu', + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + + create_response = client.create_recordset(dotted_host_rs, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + + create_rs['name'] = 'foo.bar' + + try: + error = client.update_recordset(create_rs, status=422) + assert_that(error, is_("Record with name " + create_rs['name'] + " is a dotted host which is illegal " + "in this zone " + zone['name'])) + + finally: + delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + +def test_update_dotted_a_record_apex_succeeds(shared_zone_test_context): + """ + Test that updating an apex A record set containing dots succeeds. + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + zone_name = zone['name'] + + apex_rs = { + 'zoneId': zone['id'], + 'name': 'fubu', + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + + create_response = client.create_recordset(apex_rs, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + create_rs['name'] = zone_name + + try: + update_response = client.update_recordset(create_rs, status=202) + update_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] + assert_that(update_rs['name'], is_(zone_name)) + + finally: + delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + +def test_update_dotted_a_record_apex_adds_trailing_dot_to_name(shared_zone_test_context): + """ + Test that updating an A record set to apex adds a trailing dot to the name if it is not already in the name. 
+ """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + zone_name = zone['name'] + + recordset = { + 'zoneId': zone['id'], + 'name': 'silly', + 'type': 'A', + 'ttl': 500, + 'records': [{'address': '127.0.0.1'}] + } + + create_response = client.create_recordset(recordset, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + update_rs = create_rs + update_rs['name'] = zone['name'].rstrip('.') + + try: + update_response = client.update_recordset(update_rs, status=202) + updated_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] + assert_that(updated_rs['name'], is_(zone_name)) + + finally: + delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + + +def test_update_dotted_cname_record_apex_fails(shared_zone_test_context): + """ + Test that updating a CNAME record set with record name matching dotted apex returns an error. + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + zone_name = zone['name'].rstrip('.') + + apex_cname_rs = { + 'zoneId': zone['id'], + 'name': 'ygritte', + 'type': 'CNAME', + 'ttl': 500, + 'records': [{'cname': 'got.reference'}] + } + + create_response = client.create_recordset(apex_cname_rs, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + + create_rs['name'] = zone_name + + try: + errors = client.update_recordset(create_rs, status=400)['errors'] + assert_that(errors[0],is_("Record name cannot contain '.' with given type")) + + finally: + delete_response = client.delete_recordset(zone['id'],create_rs['id'], status=202)['status'] + client.wait_until_recordset_deleted(delete_response, 'Complete') + +def test_update_succeeds_for_applied_unsynced_record_change(shared_zone_test_context): + """ + Update should succeed if record change is not synced with DNS backend, but has already been applied + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + a_rs = get_recordset_json(zone, 'already-applied-unsynced-update', 'A', [{'address': '1.1.1.1'}, {'address': '2.2.2.2'}]) + + create_rs = {} + + try: + create_response = client.create_recordset(a_rs, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + + dns_update(zone, 'already-applied-unsynced-update', 550, 'A', '8.8.8.8') + + updates = create_rs + updates['ttl'] = 550 + updates['records'] = [ + { + 'address': '8.8.8.8' + } + ] + + update_response = client.update_recordset(updates, status=202) + update_rs = client.wait_until_recordset_change_status(update_response, 'Complete')['recordSet'] + + retrieved_rs = client.get_recordset(zone['id'], update_rs['id'])['recordSet'] + verify_recordset(retrieved_rs, updates) + + finally: + try: + delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass + + +def test_update_fails_for_unapplied_unsynced_record_change(shared_zone_test_context): + """ + Update should fail if record change is not synced with DNS backend + """ + + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.parent_zone + + a_rs = get_recordset_json(zone, 'unapplied-unsynced-update', 'A', [{'address': 
'1.1.1.1'}, {'address': '2.2.2.2'}]) + + create_rs = {} + + try: + create_response = client.create_recordset(a_rs, status=202) + create_rs = client.wait_until_recordset_change_status(create_response, 'Complete')['recordSet'] + + dns_update(zone, 'unapplied-unsynced-update', 550, 'A', '8.8.8.8') + + update_rs = create_rs + update_rs['records'] = [ + { + 'address': '5.5.5.5' + } + ] + update_response = client.update_recordset(update_rs, status=202) + response = client.wait_until_recordset_change_status(update_response, 'Failed') + assert_that(response['systemMessage'], is_("Failed validating update to DNS for change " + response['id'] + + ":" + a_rs['name'] + ": This record set is out of sync with the DNS backend; sync this zone before attempting to update this record set.")) + + finally: + try: + delete_result = client.delete_recordset(zone['id'], create_rs['id'], status=202) + client.wait_until_recordset_change_status(delete_result, 'Complete') + except: + pass diff --git a/modules/api/functional_test/live_tests/shared_zone_test_context.py b/modules/api/functional_test/live_tests/shared_zone_test_context.py new file mode 100644 index 000000000..db022e999 --- /dev/null +++ b/modules/api/functional_test/live_tests/shared_zone_test_context.py @@ -0,0 +1,275 @@ +import time +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from hamcrest import * +from utils import * + +class SharedZoneTestContext(object): + """ + Creates multiple zones to test authorization / access to shared zones across users + """ + def __init__(self): + self.ok_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'okAccessKey', 'okSecretKey') + self.dummy_vinyldns_client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'dummyAccessKey', 'dummySecretKey') + + self.dummy_group = None + self.ok_group = None + + self.tear_down() # ensures that the environment is clean before starting + + try: + self.ok_group = self.ok_vinyldns_client.get_group("ok", status=200) + # in theory this shouldn't be needed, but getting 'user is not in group' errors on zone creation + self.confirm_member_in_group(self.ok_vinyldns_client, self.ok_group) + + dummy_group = { + 'name': 'dummy-group', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'dummy'} ], + 'admins': [ { 'id': 'dummy'} ] + } + self.dummy_group = self.dummy_vinyldns_client.create_group(dummy_group, status=200) + # in theory this shouldn't be needed, but getting 'user is not in group' errors on zone creation + self.confirm_member_in_group(self.dummy_vinyldns_client, self.dummy_group) + + ok_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': 'ok.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'ok.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'ok.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.ok_zone = ok_zone_change['zone'] + + dummy_zone_change = self.dummy_vinyldns_client.create_zone( + { + 'name': 'dummy.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.dummy_group['id'], + 'connection': { + 'name': 'dummy.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip 
+ }, + 'transferConnection': { + 'name': 'dummy.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.dummy_zone = dummy_zone_change['zone'] + + ip6_reverse_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': '1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.', + 'email': 'test@test.com', + 'shared': True, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'ip6.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'ip6.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + self.ip6_reverse_zone = ip6_reverse_zone_change['zone'] + + ip4_reverse_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': '30.172.in-addr.arpa.', + 'email': 'test@test.com', + 'shared': True, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'ip4.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'ip4.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + self.ip4_reverse_zone = ip4_reverse_zone_change['zone'] + + classless_base_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': '2.0.192.in-addr.arpa.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'classless-base.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'classless-base.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + self.classless_base_zone = classless_base_zone_change['zone'] + + classless_zone_delegation_change = self.ok_vinyldns_client.create_zone( + { + 'name': '192/30.2.0.192.in-addr.arpa.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'classless.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'classless.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + self.classless_zone_delegation_zone = classless_zone_delegation_change['zone'] + + system_test_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': 'system-test.', + 'email': 'test@test.com', + 'shared': True, + 'adminGroupId': self.ok_group['id'], + 'connection': { + 'name': 'system-test.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'system-test.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + self.system_test_zone = system_test_zone_change['zone'] + + # parent zone gives access to the dummy user, dummy user cannot manage ns records + 
parent_zone_change = self.ok_vinyldns_client.create_zone( + { + 'name': 'parent.com.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.ok_group['id'], + 'acl': { + 'rules': [ + { + 'accessLevel': 'Delete', + 'description': 'some_test_rule', + 'userId': 'dummy' + } + ] + }, + 'connection': { + 'name': 'parent.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'parent.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.parent_zone = parent_zone_change['zone'] + + # wait until our zones are created + self.ok_vinyldns_client.wait_until_zone_exists(system_test_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(ok_zone_change) + self.dummy_vinyldns_client.wait_until_zone_exists(dummy_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(ip6_reverse_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(ip4_reverse_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(classless_base_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(classless_zone_delegation_change) + self.ok_vinyldns_client.wait_until_zone_exists(system_test_zone_change) + self.ok_vinyldns_client.wait_until_zone_exists(parent_zone_change) + + # validate all in there + zones = self.dummy_vinyldns_client.list_zones()['zones'] + assert_that(len(zones), is_(2)) + zones = self.ok_vinyldns_client.list_zones()['zones'] + assert_that(len(zones), is_(7)) + + except: + # teardown if there was any issue in setup + try: + self.tear_down() + except: + pass + raise + + + def tear_down(self): + """ + The ok_vinyldns_client is a zone admin on _all_ the zones. 
+ + We shouldn't have to do any checks now, as zone admins have full rights to all zones, including + deleting all records (even in the old shared model) + """ + clear_zones(self.dummy_vinyldns_client) + clear_zones(self.ok_vinyldns_client) + clear_groups(self.dummy_vinyldns_client) + clear_groups(self.ok_vinyldns_client, exclude=['ok']) + + # reset ok_group + ok_group = { + 'id': 'ok', + 'name': 'ok', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'} ], + 'admins': [ { 'id': 'ok'} ] + } + self.ok_vinyldns_client.update_group(ok_group['id'], ok_group, status=200) + + def confirm_member_in_group(self, client, group): + retries = 2 + success = group in client.list_all_my_groups(status=200) + while retries >= 0 and not success: + success = group in client.list_all_my_groups(status=200) + time.sleep(.05) + retries -= 1 + assert_that(success, is_(True)) diff --git a/modules/api/functional_test/live_tests/test_data.py b/modules/api/functional_test/live_tests/test_data.py new file mode 100644 index 000000000..3069d9f2c --- /dev/null +++ b/modules/api/functional_test/live_tests/test_data.py @@ -0,0 +1,124 @@ +class TestData: + A = { + 'zoneId': None, + 'name': 'test-create-a-ok', + 'type': 'A', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'address': '10.1.1.1' + }, + { + 'address': '10.2.2.2' + } + ] + } + AAAA = { + 'zoneId': None, + 'name': 'test-create-aaaa-ok', + 'type': 'AAAA', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'address': '2001:db8:0:0:0:0:0:3' + }, + { + 'address': '2002:db8:0:0:0:0:0:3' + } + ] + } + CNAME = { + 'zoneId': None, + 'name': 'test-create-cname-ok', + 'type': 'CNAME', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'cname': 'cname.' + } + ] + } + MX = { + 'zoneId': None, + 'name': 'test-create-mx-ok', + 'type': 'MX', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'preference': 100, + 'exchange': 'exchange.' + } + ] + } + PTR = { + 'zoneId': None, + 'name': '10.20', + 'type': 'PTR', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'ptrdname': 'ptr.' + } + ] + } + SPF = { + 'zoneId': None, + 'name': 'test-create-spf-ok', + 'type': 'SPF', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'text': 'spf.' + } + ] + } + SRV = { + 'zoneId': None, + 'name': 'test-create-srv-ok', + 'type': 'SRV', + 'ttl': 100, + 'account': 'foo', + 'records': [ + { + 'priority': 1, + 'weight': 2, + 'port': 8000, + 'target': 'srv.' 
+            }
+        ]
+    }
+    SSHFP = {
+        'zoneId': None,
+        'name': 'test-create-sshfp-ok',
+        'type': 'SSHFP',
+        'ttl': 100,
+        'account': 'foo',
+        'records': [
+            {
+                'algorithm': 1,
+                'type': 2,
+                'fingerprint': 'fp'
+            }
+        ]
+    }
+    TXT = {
+        'zoneId': None,
+        'name': 'test-create-txt-ok',
+        'type': 'TXT',
+        'ttl': 100,
+        'account': 'foo',
+        'records': [
+            {
+                'text': 'some text'
+            }
+        ]
+    }
+    RECORDS = [('A', A), ('AAAA', AAAA), ('CNAME', CNAME), ('MX', MX), ('PTR', PTR), ('SPF', SPF), ('SRV', SRV), ('SSHFP', SSHFP), ('TXT', TXT)]
+    FORWARD_RECORDS = [('A', A), ('AAAA', AAAA), ('CNAME', CNAME), ('MX', MX), ('SPF', SPF), ('SRV', SRV), ('SSHFP', SSHFP), ('TXT', TXT)]
+    REVERSE_RECORDS = [('CNAME', CNAME), ('PTR', PTR), ('TXT', TXT)]
diff --git a/modules/api/functional_test/live_tests/zone_history_context.py b/modules/api/functional_test/live_tests/zone_history_context.py
new file mode 100644
index 000000000..08c2f1656
--- /dev/null
+++ b/modules/api/functional_test/live_tests/zone_history_context.py
@@ -0,0 +1,169 @@
+import sys
+import json
+import time
+
+from vinyldns_python import VinylDNSClient
+from vinyldns_context import VinylDNSTestContext
+from hamcrest import *
+from itertools import *
+from utils import *
+from test_data import TestData
+
+
+class ZoneHistoryContext(object):
+    """
+    Creates a zone with multiple zone changes and record set changes
+    """
+
+    def __init__(self):
+        self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'history-key', 'history-secret')
+        self.tear_down()
+        self.group = None
+
+        group = {
+            'name': 'history-group',
+            'email': 'test@test.com',
+            'description': 'this is a description',
+            'members': [ { 'id': 'history-id'} ],
+            'admins': [ { 'id': 'history-id'} ]
+        }
+
+        self.group = self.client.create_group(group, status=200)
+        # in theory this shouldn't be needed, but getting 'user is not in group' errors on zone creation
+        self.confirm_member_in_group(self.client, self.group)
+
+        zone_change = self.client.create_zone(
+            {
+                'name': 'system-test-history.',
+                'email': 'i.changed.this.1.times@history-test.com',
+                'shared': True,
+                'adminGroupId': self.group['id'],
+                'connection': {
+                    'name': 'vinyldns.',
+                    'keyName': VinylDNSTestContext.dns_key_name,
+                    'key': VinylDNSTestContext.dns_key,
+                    'primaryServer': VinylDNSTestContext.dns_ip
+                },
+                'transferConnection': {
+                    'name': 'vinyldns.',
+                    'keyName': VinylDNSTestContext.dns_key_name,
+                    'key': VinylDNSTestContext.dns_key,
+                    'primaryServer': VinylDNSTestContext.dns_ip
+                }
+            }, status=202)
+        self.zone = zone_change['zone']
+
+        self.client.wait_until_zone_exists(zone_change)
+
+        # change the zone nine times so we have update events in zone change history, ten total changes including creation
+        for i in range(2,11):
+            zone_update = dict(self.zone)
+            zone_update['connection']['key'] = VinylDNSTestContext.dns_key
+            zone_update['transferConnection']['key'] = VinylDNSTestContext.dns_key
+            zone_update['email'] = 'i.changed.this.{0}.times@history-test.com'.format(i)
+            zone_update = self.client.update_zone(zone_update, status=202)['zone']
+
+        # create some record sets
+        (achange, a_record) = self.create_recordset(TestData.A)
+        (aaaachange, aaaa_record) = self.create_recordset(TestData.AAAA)
+        (cnamechange, cname_record) = self.create_recordset(TestData.CNAME)
+
+        # wait here for all the record sets to be created
+        self.client.wait_until_recordset_exists(a_record['zoneId'], a_record['id'])
+        self.client.wait_until_recordset_exists(aaaa_record['zoneId'], aaaa_record['id'])
+
self.client.wait_until_recordset_exists(cname_record['zoneId'], cname_record['id']) + + # update the record sets + a_record_update = dict(a_record) + a_record_update['ttl'] += 100 + a_record_update['records'][0]['address'] = '9.9.9.9' + (achange, a_record_update) = self.update_recordset(a_record_update) + + aaaa_record_update = dict(aaaa_record) + aaaa_record_update['ttl'] += 100 + aaaa_record_update['records'][0]['address'] = '2003:db8:0:0:0:0:0:4' + (aaaachange, aaaa_record_update) = self.update_recordset(aaaa_record_update) + + cname_record_update = dict(cname_record) + cname_record_update['ttl'] += 100 + cname_record_update['records'][0]['cname'] = 'changed-cname.' + (cnamechange, cname_record_update) = self.update_recordset(cname_record_update) + + self.client.wait_until_recordset_change_status(achange, 'Complete') + self.client.wait_until_recordset_change_status(aaaachange, 'Complete') + self.client.wait_until_recordset_change_status(cnamechange, 'Complete') + + + # delete the recordsets + self.delete_recordset(a_record) + self.delete_recordset(aaaa_record) + self.delete_recordset(cname_record) + + self.client.wait_until_recordset_deleted(a_record['zoneId'], a_record['id']) + self.client.wait_until_recordset_deleted(aaaa_record['zoneId'], aaaa_record['id']) + self.client.wait_until_recordset_deleted(cname_record['zoneId'], cname_record['id']) + + + # the resulting context should contain all of the parts so it makes it simple to test + self.results = { + 'zone': self.zone, + 'zoneUpdate': zone_update, + 'creates': [a_record, aaaa_record, cname_record], + 'updates': [a_record_update, aaaa_record_update, cname_record_update] + } + + # finalizer called by py.test when the simulation is torn down + def tear_down(self): + self.clear_zones() + self.clear_group() + + + def clear_group(self): + groups = self.client.list_all_my_groups() + group_ids = map(lambda x: x['id'], groups) + + for group_id in group_ids: + self.client.delete_group(group_id, status=200) + + + def clear_zones(self): + # Get the groups for the ok user + groups = self.client.list_all_my_groups() + group_ids = map(lambda x: x['id'], groups) + + zones = self.client.list_zones()['zones'] + + # we only want to delete zones that the ok user "owns" + zones_to_delete = filter(lambda x: (x['adminGroupId'] in group_ids) or (x['account'] in group_ids), zones) + zone_names_to_delete = map(lambda x: x['name'], zones_to_delete) + + zoneids_to_delete = map(lambda x: x['id'], zones_to_delete) + + self.client.abandon_zones(zoneids_to_delete) + + + def create_recordset(self, rs): + rs['zoneId'] = self.zone['id'] + result = self.client.create_recordset(rs, status=202) + return result, result['recordSet'] + + + def update_recordset(self, rs): + rs['zoneId'] = self.zone['id'] + result = self.client.update_recordset(rs, status=202) + return result, result['recordSet'] + + + def delete_recordset(self, rs): + result = self.client.delete_recordset(self.zone['id'], rs['id'], status=202) + return result, result['recordSet'] + + + def confirm_member_in_group(self, client, group): + retries = 2 + success = group in client.list_all_my_groups(status=200) + while retries >= 0 and not success: + success = group in client.list_all_my_groups(status=200) + time.sleep(.05) + retries -= 1 + assert_that(success, is_(True)) diff --git a/modules/api/functional_test/live_tests/zones/create_zone_test.py b/modules/api/functional_test/live_tests/zones/create_zone_test.py new file mode 100644 index 000000000..5a6570b4e --- /dev/null +++ 
b/modules/api/functional_test/live_tests/zones/create_zone_test.py @@ -0,0 +1,473 @@ +import pytest +import uuid + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + +records_in_dns = [ + {'name': 'one-time.', + 'type': 'SOA', + 'records': [{u'mname': u'172.17.42.1.', + u'rname': u'admin.test.com.', + u'retry': 3600, + u'refresh': 10800, + u'minimum': 38400, + u'expire': 604800, + u'serial': 1439234395}]}, + {'name': u'one-time.', + 'type': u'NS', + 'records': [{u'nsdname': u'172.17.42.1.'}]}, + {'name': u'jenkins', + 'type': u'A', + 'records': [{u'address': u'10.1.1.1'}]}, + {'name': u'foo', + 'type': u'A', + 'records': [{u'address': u'2.2.2.2'}]}, + {'name': u'test', + 'type': u'A', + 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, + {'name': u'one-time.', + 'type': u'A', + 'records': [{u'address': u'5.5.5.5'}]}, + {'name': u'already-exists', + 'type': u'A', + 'records': [{u'address': u'6.6.6.6'}]}] + +def test_create_zone_success(shared_zone_test_context): + """ + Test successfully creating a zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_change_status(result, 'Synced') + + get_result = client.get_zone(result_zone['id']) + + get_zone = get_result['zone'] + assert_that(get_zone['name'], is_(zone['name']+'.')) + assert_that(get_zone['email'], is_(zone['email'])) + assert_that(get_zone['adminGroupId'], is_(zone['adminGroupId'])) + assert_that(get_zone['latestSync'], is_not(none())) + assert_that(get_zone['status'], is_('Active')) + + # confirm that the recordsets in DNS have been saved in vinyldns + recordsets = client.list_recordsets(result_zone['id'])['recordSets'] + + assert_that(len(recordsets), is_(7)) + for rs in recordsets: + small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) + small_rs['records'] = sorted(small_rs['records']) + assert_that(records_in_dns, has_item(small_rs)) + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +@pytest.mark.skip_production +def test_create_zone_without_transfer_connection_leaves_it_empty(shared_zone_test_context): + """ + Test that creating a zone with a valid connection but without a transfer connection leaves the transfer connection empty + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + get_result = client.get_zone(result_zone['id']) + + get_zone = 
get_result['zone'] + assert_that(get_zone['name'], is_(zone['name']+'.')) + assert_that(get_zone['email'], is_(zone['email'])) + assert_that(get_zone['adminGroupId'], is_(zone['adminGroupId'])) + + assert_that(get_zone, is_not(has_key('transferConnection'))) + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +def test_create_zone_fails_no_authorization(shared_zone_test_context): + """ + Test creating a new zone without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone = { + 'name': str(uuid.uuid4()), + 'email': 'test@test.com', + } + client.create_zone(zone, sign_request=False, status=401) + + +def test_create_missing_zone_data(shared_zone_test_context): + """ + Test that creating a zone without providing necessary data (name and email) returns errors + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone = { + 'random_key': 'some_value', + 'another_key': 'meaningless_data' + } + + errors = client.create_zone(zone, status=400)['errors'] + assert_that(errors, contains_inanyorder('Missing Zone.name', 'Missing Zone.email')) + + +def test_create_invalid_zone_data(shared_zone_test_context): + """ + Test that creating a zone with invalid data returns errors + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone_name = 'test.zone.invalid.' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'status': 'invalid_status' + } + + errors = client.create_zone(zone, status=400)['errors'] + assert_that(errors, contains_inanyorder('Invalid ZoneStatus')) + + +def test_create_zone_with_connection_failure(shared_zone_test_context): + """ + Test creating a new zone with a an invalid key and connection info fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone_name = 'one-time.' 
+    zone = {
+        'name': zone_name,
+        'email': 'test@test.com',
+        'connection': {
+            'name': zone_name,
+            'keyName': zone_name,
+            'key': VinylDNSTestContext.dns_key,
+            'primaryServer': VinylDNSTestContext.dns_ip
+        }
+    }
+    client.create_zone(zone, status=400)
+
+
+def test_create_zone_returns_409_if_already_exists(shared_zone_test_context):
+    """
+    Test creating a zone returns a 409 Conflict if the zone name already exists
+    """
+    create_conflict = dict(shared_zone_test_context.ok_zone)
+    create_conflict['connection']['key'] = VinylDNSTestContext.dns_key # necessary because we encrypt the key
+    create_conflict['transferConnection']['key'] = VinylDNSTestContext.dns_key
+
+    shared_zone_test_context.ok_vinyldns_client.create_zone(create_conflict, status=409)
+
+
+def test_create_zone_returns_400_for_invalid_data(shared_zone_test_context):
+    """
+    Test creating a zone returns a 400 if the request body is invalid
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    zone = {
+        'jim': 'bob',
+        'hey': 'you'
+    }
+    client.create_zone(zone, status=400)
+
+
+@pytest.mark.skip_production
+def test_create_zone_no_connection_uses_defaults(shared_zone_test_context):
+    """
+    Test that a zone created without connection info is stored with no connection data (defaults are used)
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    zone_name = 'one-time'
+
+    zone = {
+        'name': zone_name,
+        'email': 'test@test.com',
+        'adminGroupId': shared_zone_test_context.ok_group['id']
+    }
+
+    try:
+        zone_change = client.create_zone(zone, status=202)
+        zone = zone_change['zone']
+        client.wait_until_zone_exists(zone_change)
+
+        # Check response from create
+        assert_that(zone['name'], is_(zone_name+'.'))
+        print "'connection' not in zone = " + str('connection' not in zone)
+
+        assert_that('connection' not in zone)  # zone['connection'] would raise KeyError
+        assert_that('transferConnection' not in zone)
+
+        # Check that it was internally stored correctly using GET
+        zone_get = client.get_zone(zone['id'])['zone']
+        assert_that(zone_get['name'], is_(zone_name+'.'))
+        assert_that('connection' not in zone_get)
+        assert_that('transferConnection' not in zone_get)
+
+    finally:
+        if 'id' in zone:
+            client.abandon_zones([zone['id']], status=202)
+
+
+def test_zone_connection_only(shared_zone_test_context):
+    """
+    Test that a zone created with connection info returns that connection info on create and on get
+    """
+    client = shared_zone_test_context.ok_vinyldns_client
+
+    zone_name = 'one-time'
+
+    zone = {
+        'name': zone_name,
+        'email': 'test@test.com',
+        'adminGroupId': shared_zone_test_context.ok_group['id'],
+        'connection': {
+            'name': 'vinyldns.',
+            'keyName': VinylDNSTestContext.dns_key_name,
+            'key': VinylDNSTestContext.dns_key,
+            'primaryServer': VinylDNSTestContext.dns_ip
+        },
+        'transferConnection': {
+            'name': 'vinyldns.',
+            'keyName': VinylDNSTestContext.dns_key_name,
+            'key': VinylDNSTestContext.dns_key,
+            'primaryServer': VinylDNSTestContext.dns_ip
+        }
+    }
+
+    expected_connection = {
+        'name': 'vinyldns.',
+        'keyName': VinylDNSTestContext.dns_key_name,
+        'key': VinylDNSTestContext.dns_key,
+        'primaryServer': VinylDNSTestContext.dns_ip
+    }
+
+    try:
+        zone_change = client.create_zone(zone, status=202)
+        zone = zone_change['zone']
+        client.wait_until_zone_exists(zone_change)
+
+        # Check response from create
+        assert_that(zone['name'], is_(zone_name+'.'))
+        assert_that(zone['connection']['name'], is_(expected_connection['name']))
+        assert_that(zone['connection']['keyName'], is_(expected_connection['keyName']))
+        assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer']))
+        assert_that(zone['transferConnection']['name'], is_(expected_connection['name']))
+        assert_that(zone['transferConnection']['keyName'],
is_(expected_connection['keyName'])) + assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + + # Check that it was internally stored correctly using GET + zone_get = client.get_zone(zone['id'])['zone'] + assert_that(zone_get['name'], is_(zone_name+'.')) + assert_that(zone['connection']['name'], is_(expected_connection['name'])) + assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) + assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) + assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + + finally: + if 'id' in zone: + client.abandon_zones([zone['id']], status=202) + + +def test_zone_bad_connection(shared_zone_test_context): + + client = shared_zone_test_context.ok_vinyldns_client + + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'connection': { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': 'somebadkey', + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + client.create_zone(zone, status=400) + + +def test_zone_bad_transfer_connection(shared_zone_test_context): + + client = shared_zone_test_context.ok_vinyldns_client + + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'connection': { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': "bad", + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + client.create_zone(zone, status=400) + + +def test_zone_transfer_connection(shared_zone_test_context): + + client = shared_zone_test_context.ok_vinyldns_client + + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + expected_connection = { + 'name': zone_name, + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + + try: + zone_change = client.create_zone(zone, status=202) + zone = zone_change['zone'] + client.wait_until_zone_exists(zone_change) + + # Check response from create + assert_that(zone['name'], is_(zone_name+'.')) + assert_that(zone['connection']['name'], is_(expected_connection['name'])) + assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) + assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) + assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + + # Check that it was internally stored correctly using GET + zone_get = 
client.get_zone(zone['id'])['zone'] + assert_that(zone_get['name'], is_(zone_name+'.')) + assert_that(zone['connection']['name'], is_(expected_connection['name'])) + assert_that(zone['connection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['connection']['primaryServer'], is_(expected_connection['primaryServer'])) + assert_that(zone['transferConnection']['name'], is_(expected_connection['name'])) + assert_that(zone['transferConnection']['keyName'], is_(expected_connection['keyName'])) + assert_that(zone['transferConnection']['primaryServer'], is_(expected_connection['primaryServer'])) + + finally: + if 'id' in zone: + client.abandon_zones([zone['id']], status=202) + + +def test_user_cannot_create_zone_with_nonmember_admin_group(shared_zone_test_context): + """ + Test user cannot create a zone with an admin group they are not a member of + """ + zone = { + 'name': 'one-time.', + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.dummy_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + shared_zone_test_context.ok_vinyldns_client.create_zone(zone, status=400) + + +def test_user_cannot_create_zone_with_failed_validations(shared_zone_test_context): + """ + Test that a user cannot create a zone that has invalid zone data + """ + zone = { + 'name': 'invalid-zone.', + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + + result = shared_zone_test_context.ok_vinyldns_client.create_zone(zone, status=400) + import json + print json.dumps(result, indent=4) + assert_that(result['errors'], contains_inanyorder( + contains_string("not-approved.thing.com. 
is not an approved name server") + )) diff --git a/modules/api/functional_test/live_tests/zones/delete_zone_test.py b/modules/api/functional_test/live_tests/zones/delete_zone_test.py new file mode 100644 index 000000000..04962cc5f --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/delete_zone_test.py @@ -0,0 +1,106 @@ +import pytest +import uuid + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + + +def test_delete_zone_success(shared_zone_test_context): + """ + Test deleting a zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + client.delete_zone(result_zone['id'], status=202) + client.wait_until_zone_deleted(result_zone['id']) + + client.get_zone(result_zone['id'], status=404) + result_zone = None + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +def test_delete_zone_twice(shared_zone_test_context): + """ + Test deleting a zone with deleted status returns 404 + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + client.delete_zone(result_zone['id'], status=202) + client.wait_until_zone_deleted(result_zone['id']) + + client.delete_zone(result_zone['id'], status=404) + result_zone = None + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +def test_delete_zone_returns_404_if_zone_not_found(shared_zone_test_context): + """ + Test deleting a zone returns a 404 if the zone was not found + """ + client = shared_zone_test_context.ok_vinyldns_client + client.delete_zone('nothere', status=404) + + +def test_delete_zone_no_authorization(shared_zone_test_context): + """ + Test deleting a zone without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + + client.delete_zone('1234', sign_request=False, status=401) diff --git a/modules/api/functional_test/live_tests/zones/get_zone_test.py b/modules/api/functional_test/live_tests/zones/get_zone_test.py new file mode 100644 index 000000000..2f6aa1e29 --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/get_zone_test.py @@ -0,0 +1,77 @@ +import pytest +import uuid + +from hamcrest import * +from vinyldns_python import 
VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + + +def test_get_zone_by_id(shared_zone_test_context): + """ + Test get an existing zone by id + """ + client = shared_zone_test_context.ok_vinyldns_client + + result = client.get_zone(shared_zone_test_context.system_test_zone['id'], status=200) + retrieved = result['zone'] + + assert_that(retrieved['id'], is_(shared_zone_test_context.system_test_zone['id'])) + assert_that(retrieved['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) + + +def test_get_zone_fails_without_access(shared_zone_test_context): + """ + Test get an existing zone by id without access + """ + client = shared_zone_test_context.dummy_vinyldns_client + + client.get_zone(shared_zone_test_context.ok_zone['id'], status=403) + + +def test_get_zone_returns_404_when_not_found(shared_zone_test_context): + """ + Test get an existing zone returns a 404 when the zone is not found + """ + client = shared_zone_test_context.ok_vinyldns_client + + client.get_zone(str(uuid.uuid4()), status=404) + + +def test_get_zone_by_id_no_authorization(shared_zone_test_context): + """ + Test get an existing zone by id without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + client.get_zone('123456', sign_request=False, status=401) + + +def test_get_zone_includes_acl_display_name(shared_zone_test_context): + """ + Test get an existing zone with acl rules + """ + + client = shared_zone_test_context.ok_vinyldns_client + + user_acl_rule = generate_acl_rule('Write', userId='ok', recordTypes = []) + group_acl_rule = generate_acl_rule('Write', groupId=shared_zone_test_context.ok_group['id'], recordTypes = []) + bad_acl_rule = generate_acl_rule('Write', userId='badId', recordTypes = []) + + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], user_acl_rule, status=202) + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], group_acl_rule, status=202) + client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], bad_acl_rule, status=202) + + result = client.get_zone(shared_zone_test_context.system_test_zone['id'], status=200) + retrieved = result['zone'] + + assert_that(retrieved['id'], is_(shared_zone_test_context.system_test_zone['id'])) + assert_that(retrieved['adminGroupName'], is_(shared_zone_test_context.ok_group['name'])) + + acl = retrieved['acl']['rules'] + + user_acl_rule['displayName'] = 'ok' + group_acl_rule['displayName'] = shared_zone_test_context.ok_group['name'] + + assert_that(acl, has_item(user_acl_rule)) + assert_that(acl, has_item(group_acl_rule)) + assert_that(len(acl), is_(2)) diff --git a/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py new file mode 100644 index 000000000..535757257 --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/list_zone_changes_test.py @@ -0,0 +1,134 @@ +from hamcrest import * +from utils import * +from vinyldns_python import VinylDNSClient + + +def check_zone_changes_page_accuracy(results, expected_first_change, expected_num_results): + assert_that(len(results), is_(expected_num_results)) + change_num = expected_first_change + for change in results: + change_email = 'i.changed.this.{0}.times@history-test.com'.format(change_num) + assert_that(change['zone']['email'], is_(change_email)) + # should return changes in reverse order (most recent 1st) + change_num-=1 + + +def 
check_zone_changes_responses(response, zoneId=True, zoneChanges=True, nextId=True, startFrom=True, maxItems=True): + assert_that(response, has_key('zoneId')) if zoneId else assert_that(response, is_not(has_key('zoneId'))) + assert_that(response, has_key('zoneChanges')) if zoneChanges else assert_that(response, is_not(has_key('zoneChanges'))) + assert_that(response, has_key('nextId')) if nextId else assert_that(response, is_not(has_key('nextId'))) + assert_that(response, has_key('startFrom')) if startFrom else assert_that(response, is_not(has_key('startFrom'))) + assert_that(response, has_key('maxItems')) if maxItems else assert_that(response, is_not(has_key('maxItems'))) + + +def test_list_zone_changes_no_authorization(shared_zone_test_context): + """ + Test that list zone changes without authorization fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + client.list_zone_changes('12345', sign_request=False, status=401) + + +def test_list_zone_changes_member_auth_success(shared_zone_test_context): + """ + Test list zone changes succeeds with membership auth for member of admin group + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.ok_zone + client.list_zone_changes(zone['id'], status=200) + + +def test_list_zone_changes_member_auth_no_access(shared_zone_test_context): + """ + Test list zone changes fails for user not in admin group with no acl rules + """ + client = shared_zone_test_context.dummy_vinyldns_client + zone = shared_zone_test_context.ok_zone + client.list_zone_changes(zone['id'], status=403) + + +def test_list_zone_changes_member_auth_with_acl(shared_zone_test_context): + """ + Test list zone changes succeeds for user with acl rules + """ + zone = shared_zone_test_context.ok_zone + acl_rule = generate_acl_rule('Write', userId='dummy') + try: + client = shared_zone_test_context.dummy_vinyldns_client + + client.list_zone_changes(zone['id'], status=403) + add_ok_acl_rules(shared_zone_test_context, [acl_rule]) + client.list_zone_changes(zone['id'], status=200) + finally: + clear_ok_acl_rules(shared_zone_test_context) + + +def test_list_zone_changes_no_start(zone_history_context): + """ + Test getting all zone changes on one page (max items will default to default value) + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + response = client.list_zone_changes(original_zone['id'], start_from=None) + + check_zone_changes_page_accuracy(response['zoneChanges'], expected_first_change=10, expected_num_results=10) + check_zone_changes_responses(response, startFrom=False, nextId=False) + + +def test_list_zone_changes_paging(zone_history_context): + """ + Test paging for zone changes can use previous nextId as start key of next page + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + response_1 = client.list_zone_changes(original_zone['id'], start_from=None, max_items=3) + response_2 = client.list_zone_changes(original_zone['id'], start_from=response_1['nextId'], max_items=3) + response_3 = client.list_zone_changes(original_zone['id'], start_from=response_2['nextId'], max_items=3) + + check_zone_changes_page_accuracy(response_1['zoneChanges'], expected_first_change=10, expected_num_results=3) + check_zone_changes_page_accuracy(response_2['zoneChanges'], expected_first_change=7, expected_num_results=3) + check_zone_changes_page_accuracy(response_3['zoneChanges'], expected_first_change=4, expected_num_results=3) + + 
check_zone_changes_responses(response_1, startFrom=False) + check_zone_changes_responses(response_2) + check_zone_changes_responses(response_3) + + +def test_list_zone_changes_exhausted(zone_history_context): + """ + Test next id is none when zone changes are exhausted + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + response = client.list_zone_changes(original_zone['id'], start_from=None, max_items=11) + check_zone_changes_page_accuracy(response['zoneChanges'], expected_first_change=10, expected_num_results=10) + check_zone_changes_responses(response, startFrom=False, nextId=False) + + +def test_list_zone_changes_default_max_items(zone_history_context): + """ + Test default max items is 100 + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + response = client.list_zone_changes(original_zone['id'], start_from=None, max_items=None) + assert_that(response['maxItems'], is_(100)) + check_zone_changes_responses(response, startFrom=None, nextId=None) + + +def test_list_zone_changes_max_items_boundaries(zone_history_context): + """ + Test 0 < max_items <= 100 + """ + client = zone_history_context.client + original_zone = zone_history_context.results['zone'] + + too_large = client.list_zone_changes(original_zone['id'], start_from=None, max_items=101, status=400) + too_small = client.list_zone_changes(original_zone['id'], start_from=None, max_items=0, status=400) + + assert_that(too_large, is_("maxItems was 101, maxItems must be between 0 exclusive and 100 inclusive")) + assert_that(too_small, is_("maxItems was 0, maxItems must be between 0 exclusive and 100 inclusive")) diff --git a/modules/api/functional_test/live_tests/zones/list_zones_test.py b/modules/api/functional_test/live_tests/zones/list_zones_test.py new file mode 100644 index 000000000..444d5cc25 --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/list_zones_test.py @@ -0,0 +1,292 @@ +import pytest +import uuid + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + + +class ListZonesTestContext(object): + def __init__(self): + self.client = VinylDNSClient(VinylDNSTestContext.vinyldns_url, 'listZonesAccessKey', 'listZonesSecretKey') + self.tear_down() # ensures that the environment is clean before starting + + try: + group = { + 'name': 'list-zones-group', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'list-zones-user'} ], + 'admins': [ { 'id': 'list-zones-user'} ] + } + + self.list_zones_group = self.client.create_group(group, status=200) + + search_zone_1_change = self.client.create_zone( + { + 'name': 'list-zones-test-searched-1.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.list_zones_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.search_zone_1 = search_zone_1_change['zone'] + + search_zone_2_change = self.client.create_zone( + { + 'name': 'list-zones-test-searched-2.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.list_zones_group['id'], + 'connection': { + 'name': 'vinyldns.', + 
'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.search_zone_2 = search_zone_2_change['zone'] + + + search_zone_3_change = self.client.create_zone( + { + 'name': 'list-zones-test-searched-3.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.list_zones_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.search_zone_3 = search_zone_3_change['zone'] + + non_search_zone_1_change = self.client.create_zone( + { + 'name': 'list-zones-test-unfiltered-1.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.list_zones_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.non_search_zone_1 = non_search_zone_1_change['zone'] + + non_search_zone_2_change = self.client.create_zone( + { + 'name': 'list-zones-test-unfiltered-2.', + 'email': 'test@test.com', + 'shared': False, + 'adminGroupId': self.list_zones_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202) + self.non_search_zone_2 = non_search_zone_2_change['zone'] + + self.zone_ids = [self.search_zone_1['id'], self.search_zone_2['id'], self.search_zone_3['id'], self.non_search_zone_1['id'], self.non_search_zone_2['id']] + zone_changes = [search_zone_1_change, search_zone_2_change, search_zone_3_change, non_search_zone_1_change, non_search_zone_2_change] + for change in zone_changes: + self.client.wait_until_zone_exists(change) + except: + # teardown if there was any issue in setup + try: + self.tear_down() + except: + pass + raise + + def tear_down(self): + clear_zones(self.client) + clear_groups(self.client) + + +@pytest.fixture(scope="module") +def list_zones_context(request): + ctx = ListZonesTestContext() + + def fin(): + ctx.tear_down() + + request.addfinalizer(fin) + + return ctx + +def test_list_zones_success(list_zones_context): + """ + Test that we can retrieve a list of zones + """ + result = list_zones_context.client.list_zones(status=200) + retrieved = result['zones'] + + assert_that(retrieved, has_length(5)) + assert_that(retrieved, has_item(has_entry('name', 'list-zones-test-searched-1.'))) + assert_that(retrieved, has_item(has_entry('adminGroupName', 'list-zones-group'))) + + +def test_list_zones_max_items_100(list_zones_context): + """ + Test that the default max items for a list zones request is 
100 + """ + result = list_zones_context.client.list_zones(status=200) + assert_that(result['maxItems'], is_(100)) + + +def test_list_zones_invalid_max_items_fails(list_zones_context): + """ + Test that passing in an invalid value for max items fails + """ + errors = list_zones_context.client.list_zones(max_items=700, status=400) + assert_that(errors, contains_string("maxItems was 700, maxItems must be between 0 and 100")) + + +def test_list_zones_no_authorization(list_zones_context): + """ + Test that we cannot retrieve a list of zones without authorization + """ + list_zones_context.client.list_zones(sign_request=False, status=401) + + +def test_list_zones_no_search_first_page(list_zones_context): + """ + Test that the first page of listing zones returns correctly when no name filter is provided + """ + result = list_zones_context.client.list_zones(max_items=3) + zones = result['zones'] + + assert_that(zones, has_length(3)) + assert_that(zones[0]['name'], is_('list-zones-test-searched-1.')) + assert_that(zones[1]['name'], is_('list-zones-test-searched-2.')) + assert_that(zones[2]['name'], is_('list-zones-test-searched-3.')) + + assert_that(result['nextId'], is_(3)) + assert_that(result['maxItems'], is_(3)) + assert_that(result, is_not(has_key('startFrom'))) + assert_that(result, is_not(has_key('nameFilter'))) + + +def test_list_zones_no_search_second_page(list_zones_context): + """ + Test that the second page of listing zones returns correctly when no name filter is provided + """ + result = list_zones_context.client.list_zones(start_from=2, max_items=2, status=200) + zones = result['zones'] + + assert_that(zones, has_length(2)) + assert_that(zones[0]['name'], is_('list-zones-test-searched-3.')) + assert_that(zones[1]['name'], is_('list-zones-test-unfiltered-1.')) + + assert_that(result['nextId'], is_(4)) + assert_that(result['maxItems'], is_(2)) + assert_that(result['startFrom'], is_(2)) + assert_that(result, is_not(has_key('nameFilter'))) + + +def test_list_zones_no_search_last_page(list_zones_context): + """ + Test that the last page of listing zones returns correctly when no name filter is provided + """ + result = list_zones_context.client.list_zones(start_from=3, max_items=4, status=200) + zones = result['zones'] + + assert_that(zones, has_length(2)) + assert_that(zones[0]['name'], is_('list-zones-test-unfiltered-1.')) + assert_that(zones[1]['name'], is_('list-zones-test-unfiltered-2.')) + + assert_that(result, is_not(has_key('nextId'))) + assert_that(result['maxItems'], is_(4)) + assert_that(result['startFrom'], is_(3)) + assert_that(result, is_not(has_key('nameFilter'))) + + +def test_list_zones_with_search_first_page(list_zones_context): + """ + Test that the first page of listing zones returns correctly when a name filter is provided + """ + result = list_zones_context.client.list_zones(name_filter='searched', max_items=2, status=200) + zones = result['zones'] + + assert_that(zones, has_length(2)) + assert_that(zones[0]['name'], is_('list-zones-test-searched-1.')) + assert_that(zones[1]['name'], is_('list-zones-test-searched-2.')) + + assert_that(result['nextId'], is_(2)) + assert_that(result['maxItems'], is_(2)) + assert_that(result['nameFilter'], is_('searched')) + assert_that(result, is_not(has_key('startFrom'))) + + +def test_list_zones_with_no_results(list_zones_context): + """ + Test that the response is formed correctly when no results are found + """ + result = list_zones_context.client.list_zones(name_filter='this-wont-be-found', max_items=2, status=200) + zones = 
result['zones'] + + assert_that(zones, has_length(0)) + + assert_that(result['maxItems'], is_(2)) + assert_that(result['nameFilter'], is_('this-wont-be-found')) + assert_that(result, is_not(has_key('startFrom'))) + assert_that(result, is_not(has_key('nextId'))) + + +def test_list_zones_with_search_last_page(list_zones_context): + """ + Test that the second page of listing zones returns correctly when a name filter is provided + """ + result = list_zones_context.client.list_zones(name_filter='searched', start_from=2, max_items=2, status=200) + zones = result['zones'] + + assert_that(zones, has_length(1)) + assert_that(zones[0]['name'], is_('list-zones-test-searched-3.')) + + assert_that(result, is_not(has_key('nextId'))) + assert_that(result['maxItems'], is_(2)) + assert_that(result['nameFilter'], is_('searched')) + assert_that(result['startFrom'], is_(2)) diff --git a/modules/api/functional_test/live_tests/zones/sync_zone_test.py b/modules/api/functional_test/live_tests/zones/sync_zone_test.py new file mode 100644 index 000000000..34365d582 --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/sync_zone_test.py @@ -0,0 +1,206 @@ +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * +import time + +records_in_dns = [ + {'name': 'sync-test.', + 'type': 'SOA', + 'records': [{u'mname': u'172.17.42.1.', + u'rname': u'admin.test.com.', + u'retry': 3600, + u'refresh': 10800, + u'minimum': 38400, + u'expire': 604800, + u'serial': 1439234395}]}, + {'name': u'sync-test.', + 'type': u'NS', + 'records': [{u'nsdname': u'172.17.42.1.'}]}, + {'name': u'jenkins', + 'type': u'A', + 'records': [{u'address': u'10.1.1.1'}]}, + {'name': u'foo', + 'type': u'A', + 'records': [{u'address': u'2.2.2.2'}]}, + {'name': u'test', + 'type': u'A', + 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, + {'name': u'sync-test.', + 'type': u'A', + 'records': [{u'address': u'5.5.5.5'}]}, + {'name': u'already-exists', + 'type': u'A', + 'records': [{u'address': u'6.6.6.6'}]}, + {'name': u'fqdn', + 'type': u'A', + 'records': [{u'address': u'7.7.7.7'}]}, + {'name': u'_sip._tcp', + 'type': u'SRV', + 'records': [{u'priority': 10, u'weight': 60, u'port': 5060, u'target': u'foo.sync-test.'}]}, + {'name': u'existing.dotted', + 'type': u'A', + 'records': [{u'address': u'9.9.9.9'}]}] + +records_post_update = [ + {'name': 'sync-test.', + 'type': 'SOA', + 'records': [{u'mname': u'172.17.42.1.', + u'rname': u'admin.test.com.', + u'retry': 3600, + u'refresh': 10800, + u'minimum': 38400, + u'expire': 604800, + u'serial': 0}]}, + {'name': u'sync-test.', + 'type': u'NS', + 'records': [{u'nsdname': u'172.17.42.1.'}]}, + {'name': u'foo', + 'type': u'A', + 'records': [{u'address': u'1.2.3.4'}]}, + {'name': u'test', + 'type': u'A', + 'records': [{u'address': u'3.3.3.3'}, {u'address': u'4.4.4.4'}]}, + {'name': u'sync-test.', + 'type': u'A', + 'records': [{u'address': u'5.5.5.5'}]}, + {'name': u'already-exists', + 'type': u'A', + 'records': [{u'address': u'6.6.6.6'}]}, + {'name': u'newrs', + 'type': u'A', + 'records': [{u'address': u'2.3.4.5'}]}, + {'name': u'fqdn', + 'type': u'A', + 'records': [{u'address': u'7.7.7.7'}]}, + {'name': u'_sip._tcp', + 'type': u'SRV', + 'records': [{u'priority': 10, u'weight': 60, u'port': 5060, u'target': u'foo.sync-test.'}]}, + {'name': u'existing.dotted', + 'type': u'A', + 'records': [{u'address': u'9.9.9.9'}]}, + {'name': u'dott.ed', + 'type': u'A', + 'records': [{u'address': u'6.7.8.9'}]}] + + 
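+# Rough sketch (inferred from test_sync_zone_success below) of how these fixtures are consumed: each
+# recordset returned by VinylDNS is projected down to just its 'name', 'type', and 'records' fields and
+# checked for membership in records_in_dns (after the initial sync) or records_post_update (after the
+# backend DNS data is changed and the zone is re-synced), along the lines of:
+#
+#     small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records'])
+#     assert_that(records_post_update, has_item(small_rs))
+#
+# The SOA entry in records_post_update carries serial 0, presumably because the serial after a sync is
+# not predictable; the test overwrites the serial it reads back with 0 before comparing.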
+@pytest.mark.skip_production +def test_sync_zone_success(shared_zone_test_context): + """ + Test syncing a zone + """ + client = shared_zone_test_context.ok_vinyldns_client + zone_name = 'sync-test' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + try: + zone_change = client.create_zone(zone, status=202) + zone = zone_change['zone'] + client.wait_until_zone_exists(zone_change) + client.wait_until_zone_change_status(zone_change, 'Synced') + + time.sleep(.5) + + # confirm zone has been synced + get_result = client.get_zone(zone['id']) + synced_zone = get_result['zone'] + latest_sync = synced_zone['latestSync'] + assert_that(latest_sync, is_not(none())) + + # confirm that the recordsets in DNS have been saved in vinyldns + recordsets = client.list_recordsets(zone['id'])['recordSets'] + + assert_that(len(recordsets), is_(10)) + for rs in recordsets: + small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) + small_rs['records'] = sorted(small_rs['records']) + if small_rs['type'] == 'SOA': + assert_that(small_rs['name'], is_('sync-test.')) + else: + assert_that(records_in_dns, has_item(small_rs)) + + # make changes to the dns backend + dns_update(zone, 'foo', 38400, 'A', '1.2.3.4') + dns_add(zone, 'newrs', 38400, 'A', '2.3.4.5') + dns_delete(zone, 'jenkins', 'A') + + # add unknown this should not be synced + dns_add(zone, 'dnametest', 38400, 'DNAME', 'test.com.') + + # add a dotted host, this should be synced, so we will have 10 records ( +1 ) + dns_add(zone, 'dott.ed', 38400, 'A', '6.7.8.9') + + # wait for next sync + time.sleep(10) + + # sync again + change = client.sync_zone(zone['id'], status=202) + client.wait_until_zone_change_status(change, 'Synced') + + # confirm cannot again sync without waiting + client.sync_zone(zone['id'], status=403) + + # validate zone + get_result = client.get_zone(zone['id']) + synced_zone = get_result['zone'] + assert_that(synced_zone['latestSync'], is_not(latest_sync)) + assert_that(synced_zone['status'], is_('Active')) + assert_that(synced_zone['updated'], is_not(none())) + + # confirm that the updated recordsets in DNS have been saved in vinyldns + recordsets = client.list_recordsets(zone['id'])['recordSets'] + assert_that(len(recordsets), is_(11)) + for rs in recordsets: + small_rs = dict((k, rs[k]) for k in ['name', 'type', 'records']) + small_rs['records'] = sorted(small_rs['records']) + if small_rs['type'] == 'SOA': + small_rs['records'][0]['serial'] = 0 + # records_post_update does not contain dnametest + assert_that(records_post_update, has_item(small_rs)) + + changes = client.list_recordset_changes(zone['id']) + for c in changes['recordSetChanges']: + assert_that(c['systemMessage'], is_('Change applied via zone sync')) + + for rs in recordsets: + # confirm that we cannot update the dotted host if the name is the same + if rs['name'] == 'dott.ed': + attempt_update = rs + attempt_update['ttl'] = attempt_update['ttl'] + 100 + errors = client.update_recordset(attempt_update, status=422) + assert_that(errors, is_("Record with name " + rs['name'] + " is a dotted host which is illegal " + "in this zone " + zone_name + 
".")) + + # we should be able to delete the record + client.delete_recordset(rs['zoneId'], rs['id'], status=202) + client.wait_until_recordset_deleted(rs['zoneId'], rs['id']) + if rs['name'] == "example.dotted": + # confirm that we can modify the example dotted + good_update = rs + good_update['name'] = "example-dotted" + change = client.update_recordset(good_update, status=202) + client.wait_until_recordset_change_status(change, 'Complete') + + finally: + if 'id' in zone: + dns_update(zone, 'foo', 38400, 'A', '2.2.2.2') + dns_delete(zone, 'newrs', 'A') + dns_add(zone, 'jenkins', 38400, 'A', '10.1.1.1') + dns_delete(zone, 'example-dotted', 'A') + client.abandon_zones([zone['id']], status=202) diff --git a/modules/api/functional_test/live_tests/zones/update_zone_test.py b/modules/api/functional_test/live_tests/zones/update_zone_test.py new file mode 100644 index 000000000..40bb28cf1 --- /dev/null +++ b/modules/api/functional_test/live_tests/zones/update_zone_test.py @@ -0,0 +1,821 @@ +import pytest +import uuid + +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from utils import * + + +def test_update_zone_success(shared_zone_test_context): + """ + Test updating a zone + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time' + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-updated-by-updatezn', + 'userId': 'ok', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + result_zone['email'] = 'foo@bar.com' + result_zone['acl']['rules'] = [acl_rule] + update_result = client.update_zone(result_zone, status=202) + client.wait_until_zone_change_status(update_result, 'Synced') + + assert_that(update_result['changeType'], is_('Update')) + assert_that(update_result['userId'], is_('ok')) + assert_that(update_result, has_key('created')) + + get_result = client.get_zone(result_zone['id']) + + uz = get_result['zone'] + assert_that(uz['email'], is_('foo@bar.com')) + assert_that(uz['updated'], is_not(none())) + + acl = uz['acl'] + verify_acl_rule_is_present_once(acl_rule, acl) + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + +def test_update_bad_acl_fails(shared_zone_test_context): + """ + Test that updating a zone with a bad ACL rule fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = shared_zone_test_context.ok_zone + + acl_bad_regex = { + 'accessLevel': 'Read', + 'description': 'test-acl-updated-by-updatezn-bad', + 'userId': 'ok', + 'recordMask': '*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + + zone['acl']['rules'] = [acl_bad_regex] + + client.update_zone(zone, status=400) + + +def test_update_acl_no_group_or_user_fails(shared_zone_test_context): + """ + Test that updating a zone with an ACL with no user/group fails + """ + client = shared_zone_test_context.ok_vinyldns_client + zone 
= shared_zone_test_context.ok_zone + + bad_acl = { + 'accessLevel': 'Read', + 'description': 'test-acl-updated-by-updatezn-bad-ids', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + + zone['acl']['rules'] = [bad_acl] + + client.update_zone(zone, status=400) + + + +def test_update_missing_zone_data(shared_zone_test_context): + """ + Test that updating a zone without providing necessary data returns errors and fails the update + """ + + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time.' + + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + update_zone = { + 'id': result_zone['id'], + 'name': result_zone['name'], + 'random_key': 'some_value', + 'another_key': 'meaningless_data' + } + + errors = client.update_zone(update_zone, status=400)['errors'] + assert_that(errors, contains_inanyorder('Missing Zone.email')) + + # Check that the failed update didn't go through + zone_get = client.get_zone(result_zone['id'])['zone'] + assert_that(zone_get['name'], is_(zone_name)) + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +def test_update_invalid_zone_data(shared_zone_test_context): + """ + Test that updating a zone with invalid data returns errors and fails the update + """ + client = shared_zone_test_context.ok_vinyldns_client + result_zone = None + try: + zone_name = 'one-time.'
+ + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.ok_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + result = client.create_zone(zone, status=202) + result_zone = result['zone'] + client.wait_until_zone_exists(result) + + update_zone = { + 'id': result_zone['id'], + 'name': result_zone['name'], + 'email': 'test@test.com', + 'status': 'invalid_status' + } + + errors = client.update_zone(update_zone, status=400)['errors'] + assert_that(errors, contains_inanyorder('Invalid ZoneStatus')) + + # Check that the failed update didn't go through + zone_get = client.get_zone(result_zone['id'])['zone'] + assert_that(zone_get['name'], is_(zone_name)) + + finally: + if result_zone: + client.abandon_zones([result_zone['id']], status=202) + + +def test_update_zone_returns_404_if_zone_not_found(shared_zone_test_context): + """ + Test updating a zone returns a 404 if the zone was not found + """ + client = shared_zone_test_context.ok_vinyldns_client + zone = { + 'name': 'one-time.', + 'email': 'test@test.com', + 'id': 'nothere', + 'connection': { + 'name': 'old-shared.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'old-shared.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + client.update_zone(zone, status=404) + + +def test_create_acl_group_rule_success(shared_zone_test_context): + """ + Test creating an acl rule successfully + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-group-id', + 'groupId': shared_zone_test_context.ok_group['id'], + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # This is async, we get a zone change back + acl = result['zone']['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + +def test_create_acl_user_rule_success(shared_zone_test_context): + """ + Test creating an acl rule successfully + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': 'ok', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # This is async, we get a zone change back + acl = result['zone']['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + +def test_create_acl_user_rule_invalid_regex_failure(shared_zone_test_context): + """ + Test creating an acl rule with an invalid regex mask fails + 
""" + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': '789', + 'recordMask': 'x{5,-3}', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400) + assert_that(errors,contains_string("record mask x{5,-3} is an invalid regex")) + + +def test_create_acl_user_rule_invalid_cidr_failure(shared_zone_test_context): + """ + Test creating an acl rule with an invalid cidr mask fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': '789', + 'recordMask': '10.0.0.0/50', + 'recordTypes': ['PTR'] + } + + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) + assert_that(errors,contains_string("PTR types must have no mask or a valid CIDR mask: IPv4 mask must be between 0 and 32")) + + +def test_create_acl_user_rule_valid_cidr_success(shared_zone_test_context): + """ + Test creating an acl rule with a valid cidr mask passes + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': 'ok', + 'recordMask': '10.0.0.0/20', + 'recordTypes': ['PTR'] + } + + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=202) + + # This is async, we get a zone change back + acl = result['zone']['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + +def test_create_acl_user_rule_multiple_cidr_failure(shared_zone_test_context): + """ + Test creating an acl rule with multiple record types including PTR and a cidr mask fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': '789', + 'recordMask': '10.0.0.0/20', + 'recordTypes': ['PTR','A','AAAA'] + } + + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) + assert_that(errors,contains_string("Multiple record types including PTR must have no mask")) + + +def test_create_acl_user_rule_multiple_none_success(shared_zone_test_context): + """ + Test creating an acl rule with multiple record types and no mask passes + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 'userId': 'ok', + 'recordTypes': ['PTR','A','AAAA'] + } + + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=202) + + # This is async, we get a zone change back + acl = result['zone']['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + +def test_create_acl_user_rule_multiple_non_cidr_failure(shared_zone_test_context): + """ + Test creating an acl rule with multiple record types including PTR and non cidr mask fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-user-id', + 
'userId': '789', + 'recordMask': 'www-*', + 'recordTypes': ['PTR','A','AAAA'] + } + + errors = client.add_zone_acl_rule(shared_zone_test_context.ip4_reverse_zone['id'], acl_rule, status=400) + assert_that(errors,contains_string("Multiple record types including PTR must have no mask")) + + +def test_create_acl_idempotent(shared_zone_test_context): + """ + Test creating the same acl rule multiple times results in only one rule added + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Write', + 'description': 'test-acl-idempotent', + 'userId': 'ok', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result1 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result2 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result3 = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + zone = client.get_zone(shared_zone_test_context.system_test_zone['id'])['zone'] + + acl = zone['acl'] + + # we should only have one rule that we created + verify_acl_rule_is_present_once(acl_rule, acl) + + +def test_delete_acl_group_rule_success(shared_zone_test_context): + """ + Test deleting an acl rule successfully + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-delete-group-id', + 'groupId': shared_zone_test_context.ok_group['id'], + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # delete the rule + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # make sure that our acl is not on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + verify_acl_rule_is_not_present(acl_rule, zone['acl']) + + +def test_delete_acl_user_rule_success(shared_zone_test_context): + """ + Test deleting an acl rule successfully + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-delete-user-id', + 'userId': 'ok', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # make sure that our acl rule appears on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + acl = zone['acl'] + + verify_acl_rule_is_present_once(acl_rule, acl) + + # delete the rule + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # make sure that our acl is not on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + verify_acl_rule_is_not_present(acl_rule, zone['acl']) + + +def test_delete_non_existent_acl_rule_success(shared_zone_test_context): + """ + Test deleting an acl rule that doesn't exist still returns successfully + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test-acl-delete-non-existent-user-id', + 'userId': '789', + 'recordMask': 'www-*', + 'recordTypes': ['A', 
'AAAA', 'CNAME'] + } + # delete the rule + result = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + # make sure that our acl is not on the zone + zone = client.get_zone(result['zone']['id'])['zone'] + + verify_acl_rule_is_not_present(acl_rule, zone['acl']) + + +def test_delete_acl_idempotent(shared_zone_test_context): + """ + Test deleting the same acl rule multiple times results in only one rule removed + """ + client = shared_zone_test_context.ok_vinyldns_client + + acl_rule = { + 'accessLevel': 'Write', + 'description': 'test-delete-acl-idempotent', + 'userId': 'ok', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = client.add_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + zone = client.get_zone(shared_zone_test_context.system_test_zone['id'])['zone'] + + acl = zone['acl'] + + # we should only have one rule that we created + verify_acl_rule_is_present_once(acl_rule, acl) + + result1 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result2 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + result3 = client.delete_zone_acl_rule_with_wait(shared_zone_test_context.system_test_zone['id'], acl_rule, status=202) + + zone = client.get_zone(result['zone']['id'])['zone'] + + verify_acl_rule_is_not_present(acl_rule, zone['acl']) + + +def test_delete_acl_removes_permissions(shared_zone_test_context): + """ + Test that a user (who previously had permissions to view a zone via acl rules) can not view the zone once + the acl rule is deleted + """ + + ok_client = shared_zone_test_context.ok_vinyldns_client # ok adds and deletes acl rule + dummy_client = shared_zone_test_context.dummy_vinyldns_client # dummy should not be able to see ok_zone once acl rule is deleted + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] + + ok_view = ok_client.list_zones()['zones'] + assert_that(ok_view, has_item(ok_zone)) # ok can see ok_zone + + # verify dummy cannot see ok_zone + dummy_view = dummy_client.list_zones()['zones'] + assert_that(dummy_view, is_not(has_item(ok_zone))) # cannot view zone + + # add acl rule + acl_rule = { + 'accessLevel': 'Read', + 'description': 'test_delete_acl_removes_permissions', + 'userId': 'dummy', # give dummy permission to see ok_zone + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + result = ok_client.add_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone['id'], acl_rule, status=202) + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] + verify_acl_rule_is_present_once(acl_rule, ok_zone['acl']) + + ok_view = ok_client.list_zones()['zones'] + assert_that(ok_view, has_item(ok_zone)) # ok can still see ok_zone + + # verify dummy can see ok_zone + dummy_view = dummy_client.list_zones()['zones'] + assert_that(dummy_view, has_item(ok_zone)) # can view zone + + # delete acl rule + result = ok_client.delete_zone_acl_rule_with_wait(shared_zone_test_context.ok_zone['id'], acl_rule, status=202) + ok_zone = ok_client.get_zone(shared_zone_test_context.ok_zone['id'])['zone'] + verify_acl_rule_is_not_present(acl_rule, ok_zone['acl']) + + ok_view = ok_client.list_zones()['zones'] + assert_that(ok_view, has_item(ok_zone)) # ok can still see ok_zone + + # verify dummy can not see ok_zone + dummy_view = dummy_client.list_zones()['zones'] + 
assert_that(dummy_view, is_not(has_item(ok_zone))) # cannot view zone + + +def test_update_reverse_v4_zone(shared_zone_test_context): + """ + Test updating a reverse IPv4 zone + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone = shared_zone_test_context.ip4_reverse_zone + zone['email'] = 'update-test@bar.com' + + import json + print json.dumps(zone, indent=4) + update_result = client.update_zone(zone, status=202) + client.wait_until_zone_change_status(update_result, 'Synced') + + assert_that(update_result['changeType'], is_('Update')) + assert_that(update_result['userId'], is_('ok')) + assert_that(update_result, has_key('created')) + + get_result = client.get_zone(zone['id']) + + uz = get_result['zone'] + assert_that(uz['email'], is_('update-test@bar.com')) + assert_that(uz['updated'], is_not(none())) + + + +def test_update_reverse_v6_zone(shared_zone_test_context): + """ + Test updating a reverse IPv6 zone + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone = shared_zone_test_context.ip6_reverse_zone + zone['email'] = 'update-test@bar.com' + + update_result = client.update_zone(zone, status=202) + client.wait_until_zone_change_status(update_result, 'Synced') + + assert_that(update_result['changeType'], is_('Update')) + assert_that(update_result['userId'], is_('ok')) + assert_that(update_result, has_key('created')) + + get_result = client.get_zone(zone['id']) + + uz = get_result['zone'] + assert_that(uz['email'], is_('update-test@bar.com')) + assert_that(uz['updated'], is_not(none())) + + +def test_activate_reverse_v4_zone_with_bad_key_fails(shared_zone_test_context): + """ + Test activating a reverse IPv4 zone when using a bad tsig key fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + update = dict(shared_zone_test_context.ip4_reverse_zone) + update['connection']['key'] = 'f00sn+4G2ldMn0q1CV3vsg==' + client.update_zone(update, status=400) + + +def test_activate_reverse_v6_zone_with_bad_key_fails(shared_zone_test_context): + """ + Test activating a reverse IPv6 zone with an invalid key fails + """ + client = shared_zone_test_context.ok_vinyldns_client + + update = dict(shared_zone_test_context.ip6_reverse_zone) + update['connection']['key'] = 'f00sn+4G2ldMn0q1CV3vsg==' + client.update_zone(update, status=400) + + +def test_user_cannot_update_zone_to_nonexisting_admin_group(shared_zone_test_context): + """ + Test user cannot update a zone adminGroupId to a group that does not exist + """ + + zone_update = shared_zone_test_context.ok_zone + zone_update['adminGroupId'] = "some-bad-id" + zone_update['connection']['key'] = VinylDNSTestContext.dns_key + + shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=400) + + +def test_user_can_update_zone_to_another_admin_group(shared_zone_test_context): + """ + Test user can update a zone with an admin group they are a member of + """ + #dummy is member, not admin + + client = shared_zone_test_context.dummy_vinyldns_client + group = None + + try: + result = client.create_zone( + { + 'name': 'one-time.', + 'email': 'test@test.com', + 'adminGroupId': shared_zone_test_context.dummy_group['id'], + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + }, status=202 + ) + zone = 
result['zone'] + client.wait_until_zone_exists(result) + + import json + print json.dumps(zone, indent=3) + + new_joint_group = { + 'name': 'new-ok-group', + 'email': 'test@test.com', + 'description': 'this is a description', + 'members': [ { 'id': 'ok'}, { 'id': 'dummy'} ], + 'admins': [ { 'id': 'ok'} ] + } + + group = client.create_group(new_joint_group, status=200) + + #changing the zone + zone_update = dict(zone) + zone_update['adminGroupId'] = group['id'] + + result = client.update_zone(zone_update, status=202) + client.wait_until_zone_change_status(result, 'Synced') + finally: + if zone: + client.delete_zone(zone['id'], status=202) + client.wait_until_zone_deleted(zone['id']) + if group: + shared_zone_test_context.ok_vinyldns_client.delete_group(group['id'], status=(200, 404)) + + + +def test_user_cannot_update_zone_to_nonmember_admin_group(shared_zone_test_context): + """ + Test user cannot update a zone adminGroupId to a group they are not a member of + """ + + zone_update = shared_zone_test_context.ok_zone + zone_update['adminGroupId'] = shared_zone_test_context.dummy_group['id'] + zone_update['connection']['key'] = VinylDNSTestContext.dns_key + + shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=400) + + +def test_user_cannot_update_zone_to_nonexisting_admin_group(shared_zone_test_context): + """ + Test user cannot update a zone adminGroupId to a group that does not exist + """ + + zone_update = shared_zone_test_context.ok_zone + zone_update['adminGroupId'] = "some-bad-id" + zone_update['connection']['key'] = VinylDNSTestContext.dns_key + + shared_zone_test_context.ok_vinyldns_client.update_zone(zone_update, status=400) + + +def test_acl_rule_missing_access_level(shared_zone_test_context): + """ + Tests that missing the access level when creating an acl rule returns a 400 + """ + client = shared_zone_test_context.ok_vinyldns_client + acl_rule = { + 'description': 'test-acl-no-access-level', + 'groupId': '456', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400)['errors'] + assert_that(errors, has_length(1)) + assert_that(errors, contains_inanyorder('Missing ACLRule.accessLevel')) + + +def test_acl_rule_both_user_and_group(shared_zone_test_context): + """ + Tests that including the user id and the group id when creating an acl rule returns a 400 + """ + client = shared_zone_test_context.ok_vinyldns_client + acl_rule = { + 'accessLevel': 'Read', + 'userId': '789', + 'groupId': '456', + 'description': 'test-acl-no-user-or-group-level', + 'recordMask': 'www-*', + 'recordTypes': ['A', 'AAAA', 'CNAME'] + } + errors = client.add_zone_acl_rule(shared_zone_test_context.system_test_zone['id'], acl_rule, status=400)['errors'] + assert_that(errors, has_length(1)) + assert_that(errors, contains_inanyorder('Cannot specify both a userId and a groupId')) + + +def test_update_zone_no_authorization(shared_zone_test_context): + """ + Test updating a zone without authorization + """ + client = shared_zone_test_context.ok_vinyldns_client + + zone = { + 'id': '12345', + 'name': str(uuid.uuid4()), + 'email': 'test@test.com', + } + + client.update_zone(zone, sign_request=False, status=401) diff --git a/modules/api/functional_test/perf_tests/uat_sync_test.py b/modules/api/functional_test/perf_tests/uat_sync_test.py new file mode 100644 index 000000000..9e7b90c37 --- /dev/null +++ b/modules/api/functional_test/perf_tests/uat_sync_test.py @@ -0,0 +1,63 @@ +from 
hamcrest import * +from vinyldns_client import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +import time + +def test_sync_zone_success(): + """ + Test syncing a zone + """ + zone_name = 'small' + client = VinylDNSClient() + + zones = client.list_zones()['zones'] + zone = [z for z in zones if z['name'] == zone_name + "."] + + lastLatestSync = [] + new = True + if zone: + zone = zone[0] + lastLatestSync = zone['latestSync'] + new = False + + else: + # create zone if it doesnt exist + zone = { + 'name': zone_name, + 'email': 'test@test.com', + 'connection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + }, + 'transferConnection': { + 'name': 'vinyldns.', + 'keyName': VinylDNSTestContext.dns_key_name, + 'key': VinylDNSTestContext.dns_key, + 'primaryServer': VinylDNSTestContext.dns_ip + } + } + zone_change = client.create_zone(zone, status=202) + zone = zone_change['zone'] + client.wait_until_zone_exists(zone_change) + + zone_id = zone['id'] + + # run sync + change = client.sync_zone(zone_id, status=202) + + # brief wait for zone status change. Can't use getZoneHistory here to check on the changeset itself, + # the action times out (presumably also querying the same record change table that the sync itself + # is interacting with) + time.sleep(0.5) + client.wait_until_zone_status(zone_id, 'Active') + + # confirm zone has been updated + get_result = client.get_zone(zone_id) + synced_zone = get_result['zone'] + latestSync = synced_zone['latestSync'] + assert_that(synced_zone['updated'], is_not(none())) + assert_that(latestSync, is_not(none())) + if not new: + assert_that(latestSync, is_not(lastLatestSync)) diff --git a/modules/api/functional_test/pytest.ini b/modules/api/functional_test/pytest.ini new file mode 100644 index 000000000..3e7692550 --- /dev/null +++ b/modules/api/functional_test/pytest.ini @@ -0,0 +1,3 @@ +[pytest] +norecursedirs=.virtualenv eggs +addopts = -rfesxX --capture=sys --junitxml=../target/pytest_reports/pytest.xml --durations=30 diff --git a/modules/api/functional_test/requirements.txt b/modules/api/functional_test/requirements.txt new file mode 100644 index 000000000..89e9b498b --- /dev/null +++ b/modules/api/functional_test/requirements.txt @@ -0,0 +1,14 @@ +# requirements.txt v1.0 +# --------------------- +# Add project specific python requirements to this file. +# Do not commit them in the project! +# Make sure they exist on our corporate PyPi server. 
+ +pyhamcrest==1.8.0 +pytz>=2014 +pytest==2.6.4 +mock==1.0.1 +dnspython==1.14.0 +boto==2.48.0 +future==0.16.0 +requests==2.19.1 \ No newline at end of file diff --git a/modules/api/functional_test/run.py b/modules/api/functional_test/run.py new file mode 100755 index 000000000..74b8cb0f9 --- /dev/null +++ b/modules/api/functional_test/run.py @@ -0,0 +1,26 @@ +#!/usr/bin/env python +import os +import sys + +basedir = os.path.dirname(os.path.realpath(__file__)) +vedir = os.path.join(basedir, '.virtualenv') +os.system('./bootstrap.sh') + +activate_virtualenv = os.path.join(vedir, 'bin', 'activate_this.py') +print('Activating virtualenv at ' + activate_virtualenv) + +report_dir = os.path.join(os.path.dirname(os.path.realpath(__file__)), '../target/pytest_reports') +if not os.path.exists(report_dir): + os.system('mkdir -p ' + report_dir) + +execfile(activate_virtualenv, dict(__file__=activate_virtualenv)) + +import pytest + +result = 1 + +result = pytest.main(list(sys.argv[1:])) + +sys.exit(result) + + diff --git a/modules/api/functional_test/utils.py b/modules/api/functional_test/utils.py new file mode 100644 index 000000000..e0a68296f --- /dev/null +++ b/modules/api/functional_test/utils.py @@ -0,0 +1,532 @@ +import sys +import pytest +import uuid +import json +import dns.query +import dns.tsigkeyring +import dns.update + +from utils import * +from hamcrest import * +from vinyldns_python import VinylDNSClient +from vinyldns_context import VinylDNSTestContext +from test_data import TestData +from dns.resolver import * +import copy + + +def verify_recordset(actual, expected): + """ + Runs basic assertions on the recordset to ensure that actual matches the expected + """ + assert_that(actual['name'], is_(expected['name'])) + assert_that(actual['zoneId'], is_(expected['zoneId'])) + assert_that(actual['type'], is_(expected['type'])) + assert_that(actual['ttl'], is_(expected['ttl'])) + assert_that(actual, has_key('created')) + assert_that(actual['status'], is_not(none())) + assert_that(actual['id'], is_not(none())) + actual_records = [json.dumps(x) for x in actual['records']] + expected_records = [json.dumps(x) for x in expected['records']] + for expected_record in expected_records: + assert_that(actual_records, has_item(expected_record)) + + +def gen_zone(): + """ + Generates a random zone + """ + return { + 'name': str(uuid.uuid4())+'.', + 'email': 'test@test.com', + 'adminGroupId': 'test-group-id' + } + + +def verify_acl_rule_is_present_once(rule, acl): + def match(acl_rule): + # remove displayName if it exists (allows for aclRule and aclRuleInfo comparison) + acl_rule.pop('displayName', None) + return acl_rule == rule + + matches = filter(match, acl['rules']) + assert_that(matches, has_length(1), 'Did not find exactly one match for acl rule') + + +def verify_acl_rule_is_not_present(rule, acl): + def match(acl_rule): + return acl_rule != rule + + matches = filter(match, acl['rules']) + assert_that(matches, has_length(len(acl['rules'])), 'ACL Rule was found but should not have been present') + + +def rdata(dns_answers): + """ + Converts the answers from a dns python query to a sequence of string containing the rdata + :param dns_answers: the results of running the dns_resolve utility function + :return: a sequence containing the rdata sections for each record in the answers + """ + rdata_strings = [] + if dns_answers: + rdata_strings = [x['rdata'] for x in dns_answers] + + return rdata_strings + + +def dns_server_port(zone): + """ + Parses the server and port based on the connection info on the 
zone + :param zone: a populated zone model + :return: a tuple (host, port), port is an int + """ + name_server = zone['connection']['primaryServer'] + name_server_port = 53 + if ':' in name_server: + parts = name_server.split(':') + name_server = parts[0] + name_server_port = int(parts[1]) + + return name_server, name_server_port + + +def dns_do_command(zone, record_name, record_type, command, ttl=0, rdata=""): + """ + Helper for dns add, update, delete + """ + keyring = dns.tsigkeyring.from_text({ + zone['connection']['keyName']: VinylDNSTestContext.dns_key + }) + + name_server, name_server_port = dns_server_port(zone) + + fqdn = record_name + "." + zone['name'] + + print "updating " + fqdn + " to have data " + rdata + + update = dns.update.Update(zone['name'], keyring=keyring) + if (command == 'add'): + update.add(fqdn, ttl, record_type, rdata) + elif (command == 'update'): + update.replace(fqdn, ttl, record_type, rdata) + elif (command == 'delete'): + update.delete(fqdn, record_type) + + response = dns.query.udp(update, name_server, port=name_server_port, ignore_unexpected=True) + return response + + +def dns_update(zone, record_name, ttl, record_type, rdata): + """ + Issues a DNS update to the backend server + :param zone: a populated zone model + :param record_name: the name of the record to update + :param ttl: the ttl value of the record + :param record_type: the type of record being updated + :param rdata: the rdata string + :return: + """ + return dns_do_command(zone, record_name, record_type, "update", ttl, rdata) + + +def dns_delete(zone, record_name, record_type): + """ + Issues a DNS delete to the backend server + :param zone: a populated zone model + :param record_name: the name of the record to delete + :param record_type: the type of record being delete + :return: + """ + return dns_do_command(zone, record_name, record_type, "delete") + + +def dns_add(zone, record_name, ttl, record_type, rdata): + """ + Issues a DNS update to the backend server + :param zone: a populated zone model + :param record_name: the name of the record to add + :param ttl: the ttl value of the record + :param record_type: the type of record being added + :param rdata: the rdata string + :return: + """ + return dns_do_command(zone, record_name, record_type, "add", ttl, rdata) + + +def dns_resolve(zone, record_name, record_type): + """ + Performs a dns query to find the record name and type against the zone + :param zone: a populated zone model + :param record_name: the name of the record to lookup + :param record_type: the type of record to lookup + :return: An array of dictionaries, each dict containing fields rdata, type, name, ttl, dclass + """ + vinyldns_resolver = dns.resolver.Resolver(configure=False) + + name_server, name_server_port = dns_server_port(zone) + + vinyldns_resolver.nameservers = [name_server] + vinyldns_resolver.port = name_server_port + vinyldns_resolver.domain = zone['name'] + + fqdn = record_name + '.' 
+ vinyldns_resolver.domain + if record_name == vinyldns_resolver.domain: + # assert that we are looking up the zone name / @ symbol + fqdn = vinyldns_resolver.domain + + print "looking up " + fqdn + + try: + answers = vinyldns_resolver.query(fqdn, record_type) + except NXDOMAIN: + print "query returned NXDOMAIN" + answers = [] + except dns.resolver.NoAnswer: + print "query returned NoAnswer" + answers = [] + + if answers: + # dns python is goofy, looks like we have to parse text + # each record in the rrset is delimited by a \n + records = str(answers.rrset).split('\n') + + # for each record, we have exactly 4 fields in order: 1 record name; 2 TTL; 3 DCLASS; 4 TYPE; 5 RDATA + # construct a simple dictionary based on that split + return map(lambda x: parse_record(x), records) + else: + return [] + + +def parse_record(record_string): + # for each record, we have exactly 4 fields in order: 1 record name; 2 TTL; 3 DCLASS; 4 TYPE; 5 RDATA + parts = record_string.split(' ') + + print "record parts" + print str(parts) + + # any parts over 4 have to be kept together + offset = record_string.find(parts[3]) + len(parts[3]) + 1 + length = len(record_string) - offset + record_data = record_string[offset:offset + length] + + record = { + 'name': parts[0], + 'ttl': int(str(parts[1])), + 'dclass': parts[2], + 'type': parts[3], + 'rdata': record_data + } + + print "parsed record:" + print str(record) + return record + + +def generate_acl_rule(access_level, **kw): + acl_rule = { + 'accessLevel': access_level, + 'description': 'some_test_rule' + } + if ('userId' in kw): + acl_rule['userId'] = kw['userId'] + if ('groupId' in kw): + acl_rule['groupId'] = kw['groupId'] + if ('recordTypes' in kw): + acl_rule['recordTypes'] = kw['recordTypes'] + if ('recordMask' in kw): + acl_rule['recordMask'] = kw['recordMask'] + + return acl_rule + + +def add_rules_to_zone(zone, new_rules): + import copy + + updated_zone = copy.deepcopy(zone) + updated_rules = updated_zone['acl']['rules'] + rules_to_add = filter(lambda x: x not in updated_rules, new_rules) + updated_rules.extend(rules_to_add) + updated_zone['acl']['rules'] = updated_rules + return updated_zone + +def remove_rules_from_zone(zone, deleted_rules): + import copy + + updated_zone = copy.deepcopy(zone) + existing_rules = updated_zone['acl']['rules'] + trimmed_rules = filter(lambda x: x in existing_rules, deleted_rules) + updated_zone['acl']['rules'] = trimmed_rules + + return updated_zone + +def add_ok_acl_rules(test_context, rules): + updated_zone = add_rules_to_zone(test_context.ok_zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def add_ip4_acl_rules(test_context, rules): + updated_zone = add_rules_to_zone(test_context.ip4_reverse_zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def add_ip6_acl_rules(test_context, rules): + updated_zone = add_rules_to_zone(test_context.ip6_reverse_zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def add_classless_acl_rules(test_context, rules): + updated_zone = add_rules_to_zone(test_context.classless_zone_delegation_zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, 
status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def remove_ok_acl_rules(test_context, rules): + zone = test_context.ok_vinyldns_client.get_zone(test_context.ok_zone['id'])['zone'] + updated_zone = remove_rules_from_zone(zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def remove_ip4_acl_rules(test_context, rules): + zone = test_context.ok_vinyldns_client.get_zone(test_context.ip4_reverse_zone['id'])['zone'] + updated_zone = remove_rules_from_zone(zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def remove_ip6_acl_rules(test_context, rules): + zone = test_context.ok_vinyldns_client.get_zone(test_context.ip6_reverse_zone['id'])['zone'] + updated_zone = remove_rules_from_zone(zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def remove_classless_acl_rules(test_context, rules): + zone = test_context.ok_vinyldns_client.get_zone(test_context.classless_zone_delegation_zone['id'])['zone'] + updated_zone = remove_rules_from_zone(zone, rules) + update_change = test_context.ok_vinyldns_client.update_zone(updated_zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def clear_ok_acl_rules(test_context): + zone = test_context.ok_zone + zone['acl']['rules'] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def clear_ip4_acl_rules(test_context): + zone = test_context.ip4_reverse_zone + zone['acl']['rules'] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def clear_ip6_acl_rules(test_context): + zone = test_context.ip6_reverse_zone + zone['acl']['rules'] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def clear_classless_acl_rules(test_context): + zone = test_context.classless_zone_delegation_zone + zone['acl']['rules'] = [] + update_change = test_context.ok_vinyldns_client.update_zone(zone, status=202) + test_context.ok_vinyldns_client.wait_until_zone_change_status(update_change, 'Synced') + +def seed_text_recordset(client, record_name, zone, records=[{'text':'someText'}]): + new_rs = { + 'zoneId': zone['id'], + 'name': record_name, + 'type': 'TXT', + 'ttl': 100, + 'records': records + } + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + if client.wait_until_recordset_exists(result_rs['zoneId'], result_rs['id']): + print "\r\n!!! record set exists !!!" + else: + print "\r\n!!! record set does not exist !!!" 
+ + return result_rs + +def seed_ptr_recordset(client, record_name, zone, records=[{'ptrdname':'foo.com.'}]): + new_rs = { + 'zoneId': zone['id'], + 'name': record_name, + 'type': 'PTR', + 'ttl': 100, + 'records': records + } + result = client.create_recordset(new_rs, status=202) + result_rs = result['recordSet'] + if client.wait_until_recordset_exists(result_rs['zoneId'], result_rs['id']): + print "\r\n!!! record set exists !!!" + else: + print "\r\n!!! record set does not exist !!!" + + return result_rs + + +def clear_zones(client): + # Get the groups for the ok user + groups = client.list_all_my_groups() + group_ids = map(lambda x: x['id'], groups) + + zones = client.list_zones()['zones'] + + import json + for zone in zones: + print "list zones found..." + print json.dumps(zone, indent=3) + + # we only want to delete zones that the ok user "owns" + zones_to_delete = filter(lambda x: (x['adminGroupId'] in group_ids) or (x['account'] in group_ids), zones) + zone_names_to_delete = map(lambda x: x['name'], zones_to_delete) + + print "zones to delete:" + for name in zone_names_to_delete: + print name + + zoneids_to_delete = map(lambda x: x['id'], zones_to_delete) + + client.abandon_zones(zoneids_to_delete) + + +def clear_groups(client, exclude=[]): + groups = client.list_all_my_groups() + group_ids = map(lambda x: x['id'], groups) + + for group_id in group_ids: + if not group_id in exclude: + client.delete_group(group_id, status=200) + +def get_change_A_AAAA_json(input_name, record_type="A", ttl=200, address="1.1.1.1", change_type="Add"): + if change_type == "Add": + json = { + "changeType": change_type, + "inputName": input_name, + "type": record_type, + "ttl": ttl, + "record": { + "address": address + } + } + else: + json = { + "changeType": "DeleteRecordSet", + "inputName": input_name, + "type": record_type + } + return json + +def get_change_CNAME_json(input_name, ttl=200, cname="test.com", change_type="Add"): + if change_type == "Add": + json = { + "changeType": change_type, + "inputName": input_name, + "type": "CNAME", + "ttl": ttl, + "record": { + "cname": cname + } + } + else: + json = { + "changeType": "DeleteRecordSet", + "inputName": input_name, + "type": "CNAME" + } + return json + +def get_change_PTR_json(ip, ttl=200, ptrdname="test.com", change_type="Add"): + if change_type == "Add": + json = { + "changeType": change_type, + "inputName": ip, + "type": "PTR", + "ttl": ttl, + "record": { + "ptrdname": ptrdname + } + } + else: + json = { + "changeType": "DeleteRecordSet", + "inputName": ip, + "type": "PTR" + } + return json + + +def get_change_TXT_json(input_name, record_type="TXT", ttl=200, text="test", change_type="Add"): + if change_type == "Add": + json = { + "changeType": change_type, + "inputName": input_name, + "type": record_type, + "ttl": ttl, + "record": { + "text": text + } + } + else: + json = { + "changeType": "DeleteRecordSet", + "inputName": input_name, + "type": record_type + } + return json + + +def get_change_MX_json(input_name, ttl=200, preference=1, exchange="foo.bar.", change_type="Add"): + if change_type == "Add": + json = { + "changeType": change_type, + "inputName": input_name, + "type": "MX", + "ttl": ttl, + "record": { + "preference": preference, + "exchange": exchange + } + } + else: + json = { + "changeType": "DeleteRecordSet", + "inputName": input_name, + "type": "MX" + } + return json + +def get_recordset_json(zone, rname, type, rdata_list, ttl=200): + json = { + "zoneId": zone['id'], + "name": rname, + "type": type, + "ttl": ttl, + "records": 
rdata_list + } + return json + +def clear_recordset_list(to_delete, client): + delete_changes = [] + for result_rs in to_delete: + try: + delete_result = client.delete_recordset(result_rs['zone']['id'], result_rs['recordSet']['id'], status=202) + delete_changes.append(delete_result) + except: + pass + for change in delete_changes: + try: + client.wait_until_recordset_change_status(change, 'Complete') + except: + pass + +def clear_zoneid_rsid_tuple_list(to_delete, client): + delete_changes = [] + for tup in to_delete: + try: + delete_result = client.delete_recordset(tup[0], tup[1], status=202) + delete_changes.append(delete_result) + except: + pass + for change in delete_changes: + try: + client.wait_until_recordset_change_status(change, 'Complete') + except: + pass diff --git a/modules/api/functional_test/vinyldns_context.py b/modules/api/functional_test/vinyldns_context.py new file mode 100644 index 000000000..a831277eb --- /dev/null +++ b/modules/api/functional_test/vinyldns_context.py @@ -0,0 +1,16 @@ +class VinylDNSTestContext: + dns_ip = 'localhost' + dns_zone_name = 'vinyldns.' + dns_rev_v4_zone_name = '30.172.in-addr.arpa.' + dns_rev_v6_zone_name = '1.9.e.f.c.c.7.2.9.6.d.f.ip6.arpa.' + dns_key_name = 'vinyldns.' + dns_key = 'nzisn+4G2ldMn0q1CV3vsg==' + vinyldns_url = 'http://localhost:9000' + + @staticmethod + def configure(ip, zone, key_name, key, url): + VinylDNSTestContext.dns_ip = ip + VinylDNSTestContext.dns_zone_name = zone + VinylDNSTestContext.dns_key_name = key_name + VinylDNSTestContext.dns_key = key + VinylDNSTestContext.vinyldns_url = url diff --git a/modules/api/functional_test/vinyldns_python.py b/modules/api/functional_test/vinyldns_python.py new file mode 100644 index 000000000..f546a2a8f --- /dev/null +++ b/modules/api/functional_test/vinyldns_python.py @@ -0,0 +1,857 @@ +import json +import time +import logging +import collections + +import requests +from requests.adapters import HTTPAdapter +from requests.packages.urllib3.util.retry import Retry +from hamcrest import * + +# TODO: Didn't like this boto request signer, fix when moving back +from boto_request_signer import BotoRequestSigner + +# Python 2/3 compatibility +from requests.compat import urljoin, urlparse, urlsplit +from builtins import str +from future.utils import iteritems +from future.moves.urllib.parse import parse_qs + +try: + basestring +except NameError: + basestring = str + +logger = logging.getLogger(__name__) + +__all__ = [u'VinylDNSClient', u'MAX_RETRIES', u'RETRY_WAIT'] + +MAX_RETRIES = 30 +RETRY_WAIT = 0.05 + +class VinylDNSClient(object): + + def __init__(self, url, access_key, secret_key): + self.index_url = url + self.headers = { + u'Accept': u'application/json, text/plain', + u'Content-Type': u'application/json' + } + + self.signer = BotoRequestSigner(self.index_url, + access_key, secret_key) + + self.session = self.requests_retry_session() + self.session_not_found_ok = self.requests_retry_not_found_ok_session() + + def requests_retry_not_found_ok_session(self, + retries=5, + backoff_factor=0.4, + status_forcelist=(500, 502, 504), + session=None, + ): + session = session or requests.Session() + retry = Retry( + total=retries, + read=retries, + connect=retries, + backoff_factor=backoff_factor, + status_forcelist=status_forcelist, + ) + adapter = HTTPAdapter(max_retries=retry) + session.mount(u'http://', adapter) + session.mount(u'https://', adapter) + return session + + def requests_retry_session(self, + retries=5, + backoff_factor=0.4, + status_forcelist=(500, 502, 504), + session=None, + 
): + session = session or requests.Session() + retry = Retry( + total=retries, + read=retries, + connect=retries, + backoff_factor=backoff_factor, + status_forcelist=status_forcelist, + ) + adapter = HTTPAdapter(max_retries=retry) + session.mount(u'http://', adapter) + session.mount(u'https://', adapter) + return session + + def make_request(self, url, method=u'GET', headers=None, body_string=None, sign_request=True, not_found_ok=False, **kwargs): + + # pull out status or None + status_code = kwargs.pop(u'status', None) + + # remove retries arg if provided + kwargs.pop(u'retries', None) + + path = urlparse(url).path + + # we must parse the query string so we can provide it if it exists so that we can pass it to the + # build_vinyldns_request so that it can be properly included in the AWS signing... + query = parse_qs(urlsplit(url).query) + + if query: + # the problem with parse_qs is that it will return a list for ALL params, even if they are a single value + # we need to essentially flatten the params if a param has only one value + query = dict((k, v if len(v)>1 else v[0]) + for k, v in iteritems(query)) + + if sign_request: + signed_headers, signed_body = self.build_vinyldns_request(method, path, body_string, query, + with_headers=headers or {}, **kwargs) + else: + signed_headers = headers or {} + signed_body = body_string + + if not_found_ok: + response = self.session_not_found_ok.request(method, url, data=signed_body, headers=signed_headers, **kwargs) + else: + response = self.session.request(method, url, data=signed_body, headers=signed_headers, **kwargs) + + if status_code is not None: + if isinstance(status_code, collections.Iterable): + assert_that(response.status_code, is_in(status_code)) + else: + assert_that(response.status_code, is_(status_code)) + + try: + return response.status_code, response.json() + except: + return response.status_code, response.text + + def ping(self): + """ + Simple ping request + :return: the content of the response, which should be PONG + """ + url = urljoin(self.index_url, '/ping') + + response, data = self.make_request(url) + return data + + def get_status(self): + """ + Gets processing status + :return: the content of the response + """ + url = urljoin(self.index_url, '/status') + + response, data = self.make_request(url) + + return data + + def post_status(self, status): + """ + Update processing status + :return: the content of the response + """ + url = urljoin(self.index_url, '/status?processingDisabled={}'.format(status)) + response, data = self.make_request(url, 'POST', self.headers) + + return data + + def color(self): + """ + Gets the current color for the application + :return: the content of the response, which should be "blue" or "green" + """ + url = urljoin(self.index_url, '/color') + response, data = self.make_request(url) + return data + + def health(self): + """ + Checks the health of the app, asserts that a 200 should be returned, otherwise + this will fail + """ + url = urljoin(self.index_url, '/health') + self.make_request(url, sign_request=False) + + def create_group(self, group, **kwargs): + """ + Creates a new group + :param group: A group dictionary that can be serialized to json + :return: the content of the response, which should be a group json + """ + + url = urljoin(self.index_url, u'/groups') + response, data = self.make_request(url, u'POST', self.headers, json.dumps(group), **kwargs) + + return data + + def get_group(self, group_id, **kwargs): + """ + Gets a group + :param group_id: Id of the group to get + :return: 
the group json + """ + + url = urljoin(self.index_url, u'/groups/' + group_id) + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + + return data + + def delete_group(self, group_id, **kwargs): + """ + Deletes a group + :param group_id: Id of the group to delete + :return: the group json + """ + + url = urljoin(self.index_url, u'/groups/' + group_id) + response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs) + + return data + + def update_group(self, group_id, group, **kwargs): + """ + Update an existing group + :param group_id: The id of the group being updated + :param group: A group dictionary that can be serialized to json + :return: the content of the response, which should be a group json + """ + + url = urljoin(self.index_url, u'/groups/{0}'.format(group_id)) + response, data = self.make_request(url, u'PUT', self.headers, json.dumps(group), not_found_ok=True, **kwargs) + + return data + + def list_my_groups(self, group_name_filter=None, start_from=None, max_items=None, **kwargs): + """ + Retrieves my groups + :param start_from: the start key of the page + :param max_items: the page limit + :param group_name_filter: only returns groups whose names contain filter string + :return: the content of the response + """ + + args = [] + if group_name_filter: + args.append(u'groupNameFilter={0}'.format(group_name_filter)) + if start_from: + args.append(u'startFrom={0}'.format(start_from)) + if max_items is not None: + args.append(u'maxItems={0}'.format(max_items)) + + url = urljoin(self.index_url, u'/groups') + u'?' + u'&'.join(args) + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + + return data + + def list_all_my_groups(self, group_name_filter=None, **kwargs): + """ + Retrieves all my groups + :param group_name_filter: only returns groups whose names contain filter string + :return: the content of the response + """ + + groups = [] + args = [] + if group_name_filter: + args.append(u'groupNameFilter={0}'.format(group_name_filter)) + + url = urljoin(self.index_url, u'/groups') + u'?' 
+ u'&'.join(args) + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + + groups.extend(data[u'groups']) + + while u'nextId' in data: + args = [] + + if group_name_filter: + args.append(u'groupNameFilter={0}'.format(group_name_filter)) + if u'nextId' in data: + args.append(u'startFrom={0}'.format(data[u'nextId'])) + + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + groups.extend(data[u'groups']) + + return groups + + def list_members_group(self, group_id, start_from=None, max_items=None, **kwargs): + """ + List the members of an existing group + :param group_id: the Id of an existing group + :param start_from: the Id a member of the group + :param max_items: the max number of items to be returned + :return: the json of the members + """ + if start_from is None and max_items is None: + url = urljoin(self.index_url, u'/groups/{0}/members'.format(group_id)) + elif start_from is None and max_items is not None: + url = urljoin(self.index_url, u'/groups/{0}/members?maxItems={1}'.format(group_id, max_items)) + elif start_from is not None and max_items is None: + url = urljoin(self.index_url, u'/groups/{0}/members?startFrom={1}'.format(group_id, start_from)) + elif start_from is not None and max_items is not None: + url = urljoin(self.index_url, u'/groups/{0}/members?startFrom={1}&maxItems={2}'.format(group_id, + start_from, + max_items)) + + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + + return data + + def list_group_admins(self, group_id, **kwargs): + """ + returns the group admins + :param group_id: the Id of the group + :return: the user info of the admins + """ + url = urljoin(self.index_url, u'/groups/{0}/admins'.format(group_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + + return data + + def get_group_changes(self, group_id, start_from=None, max_items=None, **kwargs): + """ + List the changes of an existing group + :param group_id: the Id of an existing group + :param start_from: the Id a group change + :param max_items: the max number of items to be returned + :return: the json of the members + """ + if start_from is None and max_items is None: + url = urljoin(self.index_url, u'/groups/{0}/activity'.format(group_id)) + elif start_from is None and max_items is not None: + url = urljoin(self.index_url, u'/groups/{0}/activity?maxItems={1}'.format(group_id, max_items)) + elif start_from is not None and max_items is None: + url = urljoin(self.index_url, u'/groups/{0}/activity?startFrom={1}'.format(group_id, start_from)) + elif start_from is not None and max_items is not None: + url = urljoin(self.index_url, u'/groups/{0}/activity?startFrom={1}&maxItems={2}'.format(group_id, + start_from, + max_items)) + + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + + return data + + def create_zone(self, zone, **kwargs): + """ + Creates a new zone with the given name and email + :param zone: the zone to be created + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones') + response, data = self.make_request(url, u'POST', self.headers, json.dumps(zone), **kwargs) + return data + + def update_zone(self, zone, **kwargs): + """ + Updates a zone + :param zone: the zone to be created + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}'.format(zone[u'id'])) + response, data = self.make_request(url, u'PUT', self.headers, json.dumps(zone), not_found_ok=True, **kwargs) + 
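+ # not_found_ok=True routes this call through the client's session_not_found_ok retry session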
return data + + def sync_zone(self, zone_id, **kwargs): + """ + Syncs a zone + :param zone: the zone to be updated + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}/sync'.format(zone_id)) + response, data = self.make_request(url, u'POST', self.headers, not_found_ok=True, **kwargs) + + return data + + def delete_zone(self, zone_id, **kwargs): + """ + Deletes the zone for the given id + :param zone_id: the id of the zone to be deleted + :return: nothing, will fail if the status code was not expected + """ + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs) + + return data + + def get_zone(self, zone_id, **kwargs): + """ + Gets a zone for the given zone id + :param zone_id: the id of the zone to retrieve + :return: the zone, or will 404 if not found + """ + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + + return data + + def get_zone_history(self, zone_id, **kwargs): + """ + Gets the zone history for the given zone id + :param zone_id: the id of the zone to retrieve + :return: the zone, or will 404 if not found + """ + url = urljoin(self.index_url, u'/zones/{0}/history'.format(zone_id)) + + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + return data + + def get_zone_change(self, zone_change, **kwargs): + """ + Gets a zone change with the provided id + + Unfortunately, there is no endpoint, so we have to get all zone history and parse + """ + zone_change_id = zone_change[u'id'] + change = None + + def change_id_match(possible_match): + return possible_match[u'id'] == zone_change_id + + history = self.get_zone_history(zone_change[u'zone'][u'id']) + if u'zoneChanges' in history: + zone_changes = history[u'zoneChanges'] + matching_changes = filter(change_id_match, zone_changes) + + if len(matching_changes) > 0: + change = matching_changes[0] + + return change + + def list_zone_changes(self, zone_id, start_from=None, max_items=None, **kwargs): + """ + Gets the zone changes for the given zone id + :param zone_id: the id of the zone to retrieve + :param start_from: the start key of the page + :param max_items: the page limit + :return: the zone, or will 404 if not found + """ + args = [] + if start_from: + args.append(u'startFrom={0}'.format(start_from)) + if max_items is not None: + args.append(u'maxItems={0}'.format(max_items)) + url = urljoin(self.index_url, u'/zones/{0}/changes'.format(zone_id)) + u'?' + u'&'.join(args) + + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + return data + + def list_recordset_changes(self, zone_id, start_from=None, max_items=None, **kwargs): + """ + Gets the recordset changes for the given zone id + :param zone_id: the id of the zone to retrieve + :param start_from: the start key of the page + :param max_items: the page limit + :return: the zone, or will 404 if not found + """ + args = [] + if start_from: + args.append(u'startFrom={0}'.format(start_from)) + if max_items is not None: + args.append(u'maxItems={0}'.format(max_items)) + url = urljoin(self.index_url, u'/zones/{0}/recordsetchanges'.format(zone_id)) + u'?' 
+ u'&'.join(args) + + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, **kwargs) + return data + + def list_zones(self, name_filter=None, start_from=None, max_items=None, **kwargs): + """ + Gets a list of zones that currently exist + :return: a list of zones + """ + url = urljoin(self.index_url, u'/zones') + + query = [] + if name_filter: + query.append(u'nameFilter=' + name_filter) + + if start_from: + query.append(u'startFrom=' + str(start_from)) + + if max_items: + query.append(u'maxItems=' + str(max_items)) + + if query: + url = url + u'?' + u'&'.join(query) + + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + return data + + def create_recordset(self, recordset, **kwargs): + """ + Creates a new recordset + :param recordset: the recordset to be created + :return: the content of the response + """ + if recordset and u'name' in recordset: + recordset[u'name'] = recordset[u'name'].replace(u'_', u'-') + + url = urljoin(self.index_url, u'/zones/{0}/recordsets'.format(recordset[u'zoneId'])) + response, data = self.make_request(url, u'POST', self.headers, json.dumps(recordset), **kwargs) + return data + + def delete_recordset(self, zone_id, rs_id, **kwargs): + """ + Deletes an existing recordset + :param zone_id: the zone id the recordset belongs to + :param rs_id: the id of the recordset to be deleted + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, rs_id)) + + response, data = self.make_request(url, u'DELETE', self.headers, not_found_ok=True, **kwargs) + return data + + def update_recordset(self, recordset, **kwargs): + """ + Deletes an existing recordset + :param recordset: the recordset to be updated + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(recordset[u'zoneId'], recordset[u'id'])) + + response, data = self.make_request(url, u'PUT', self.headers, json.dumps(recordset), not_found_ok=True, **kwargs) + return data + + def get_recordset(self, zone_id, rs_id, **kwargs): + """ + Gets an existing recordset + :param zone_id: the zone id the recordset belongs to + :param rs_id: the id of the recordset to be retrieved + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, rs_id)) + + response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs) + return data + + def get_recordset_change(self, zone_id, rs_id, change_id, **kwargs): + """ + Gets an existing recordset change + :param zone_id: the zone id the recordset belongs to + :param rs_id: the id of the recordset to be retrieved + :param change_id: the id of the change to be retrieved + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}/changes/{2}'.format(zone_id, rs_id, change_id)) + + response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs) + return data + + def list_recordsets(self, zone_id, start_from=None, max_items=None, record_name_filter=None, **kwargs): + """ + Retrieves all recordsets in a zone + :param zone_id: the zone to retrieve + :param start_from: the start key of the page + :param max_items: the page limit + :param record_name_filter: only returns recordsets whose names contain filter string + :return: the content of the response + """ + args = [] + if start_from: + args.append(u'startFrom={0}'.format(start_from)) + if max_items is not None: 
+ args.append(u'maxItems={0}'.format(max_items)) + if record_name_filter: + args.append(u'recordNameFilter={0}'.format(record_name_filter)) + + url = urljoin(self.index_url, u'/zones/{0}/recordsets'.format(zone_id)) + u'?' + u'&'.join(args) + + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + return data + + def create_batch_change(self, batch_change_input, **kwargs): + """ + Creates a new batch change + :param batch_change_input: the batchchange to be created + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/batchrecordchanges') + response, data = self.make_request(url, u'POST', self.headers, json.dumps(batch_change_input), **kwargs) + return data + + def get_batch_change(self, batch_change_id, **kwargs): + """ + Gets an existing batch change + :param batch_change_id: the unique identifier of the batchchange + :return: the content of the response + """ + url = urljoin(self.index_url, u'/zones/batchrecordchanges/{0}'.format(batch_change_id)) + response, data = self.make_request(url, u'GET', self.headers, None, not_found_ok=True, **kwargs) + return data + + def list_batch_change_summaries(self, start_from=None, max_items=None, **kwargs): + """ + Gets list of user's batch change summaries + :return: the content of the response + """ + args = [] + if start_from: + args.append(u'startFrom={0}'.format(start_from)) + if max_items is not None: + args.append(u'maxItems={0}'.format(max_items)) + + url = urljoin(self.index_url, u'/zones/batchrecordchanges') + u'?' + u'&'.join(args) + + response, data = self.make_request(url, u'GET', self.headers, **kwargs) + return data + + def build_vinyldns_request(self, method, path, body_data, params=None, **kwargs): + + if isinstance(body_data, basestring): + body_string = body_data + else: + body_string = json.dumps(body_data) + + new_headers = {u'X-Amz-Target': u'VinylDNS'} + new_headers.update(kwargs.get(u'with_headers', dict())) + + suppress_headers = kwargs.get(u'suppress_headers', list()) + + headers = self.build_headers(new_headers, suppress_headers) + + auth_header = self.signer.build_auth_header(method, path, headers, body_string, params) + headers[u'Authorization'] = auth_header + + return headers, body_string + + @staticmethod + def build_headers(new_headers, suppressed_keys): + """Construct HTTP headers for a request.""" + + def canonical_header_name(field_name): + return u'-'.join(word.capitalize() for word in field_name.split(u'-')) + + import datetime + now = datetime.datetime.utcnow() + headers = {u'Content-Type': u'application/x-amz-json-1.0', + u'Date': now.strftime(u'%a, %d %b %Y %H:%M:%S GMT'), + u'X-Amz-Date': now.strftime(u'%Y%m%dT%H%M%SZ')} + + for k, v in iteritems(new_headers): + headers[canonical_header_name(k)] = v + + for k in map(canonical_header_name, suppressed_keys): + if k in headers: + del headers[k] + + return headers + + def add_zone_acl_rule_with_wait(self, zone_id, acl_rule, sign_request=True, **kwargs): + """ + Puts an acl rule on the zone and waits for success + :param zone_id: The id of the zone to attach the acl rule to + :param acl_rule: The acl rule contents + :param sign_request: An indicator if we should sign the request; useful for testing auth + :return: the content of the response + """ + rule = self.add_zone_acl_rule(zone_id, acl_rule, sign_request, **kwargs) + self.wait_until_zone_change_status(rule, 'Synced') + + return rule + + def add_zone_acl_rule(self, zone_id, acl_rule, sign_request=True, **kwargs): + """ + Puts an acl rule on the zone + :param 
zone_id: The id of the zone to attach the acl rule to + :param acl_rule: The acl rule contents + :param sign_request: An indicator if we should sign the request; useful for testing auth + :return: the content of the response + """ + url = urljoin(self.index_url, '/zones/{0}/acl/rules'.format(zone_id)) + response, data = self.make_request(url, 'PUT', self.headers, json.dumps(acl_rule), sign_request=sign_request, **kwargs) + + return data + + def delete_zone_acl_rule_with_wait(self, zone_id, acl_rule, sign_request=True, **kwargs): + """ + Deletes an acl rule from the zone and waits for success + :param zone_id: The id of the zone to remove the acl from + :param acl_rule: The acl rule to remove + :param sign_request: An indicator if we should sign the request; useful for testing auth + :return: the content of the response + """ + rule = self.delete_zone_acl_rule(zone_id, acl_rule, sign_request, **kwargs) + self.wait_until_zone_change_status(rule, 'Synced') + + return rule + + def delete_zone_acl_rule(self, zone_id, acl_rule, sign_request=True, **kwargs): + """ + Deletes an acl rule from the zone + :param zone_id: The id of the zone to remove the acl from + :param acl_rule: The acl rule to remove + :param sign_request: An indicator if we should sign the request; useful for testing auth + :return: the content of the response + """ + url = urljoin(self.index_url, '/zones/{0}/acl/rules'.format(zone_id)) + response, data = self.make_request(url, 'DELETE', self.headers, json.dumps(acl_rule), sign_request=sign_request, **kwargs) + + return data + + def wait_until_recordset_deleted(self, zone_id, record_set_id, **kwargs): + retries = MAX_RETRIES + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + while response != 404 and retries > 0: + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + retries -= 1 + time.sleep(RETRY_WAIT) + + return response == 404 + + def wait_until_zone_change_status(self, zone_change, expected_status): + """ + Waits until the zone change status matches the expected status + """ + zone_change_id = zone_change[u'id'] + + def change_id_match(change): + return change[u'id'] == zone_change_id + + change = zone_change + retries = MAX_RETRIES + while change[u'status'] != expected_status and retries > 0: + history = self.get_zone_history(zone_change[u'zone'][u'id']) + + if u'zoneChanges' in history: + zone_changes = history[u'zoneChanges'] + matching_changes = filter(change_id_match, zone_changes) + + if len(matching_changes) > 0: + change = matching_changes[0] + time.sleep(RETRY_WAIT) + retries -= 1 + + return change[u'status'] == expected_status + + def wait_until_zone_deleted(self, zone_id, **kwargs): + """ + Waits a period of time for the zone deletion to complete. + + :param zone_id: the id of the zone that has been deleted. 
+ :param kw: Additional parameters for the http request + :return: True when the zone deletion is complete, False if the timeout expires + """ + retries = MAX_RETRIES + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + while response != 404 and retries > 0: + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + retries -= 1 + time.sleep(RETRY_WAIT) + + return response == 404 + + def wait_until_zone_exists(self, zone_change, **kwargs): + """ + Waits a period of time for the zone creation to complete. + + :param zone_change: the create zone change for the zone that has been created. + :param kw: Additional parameters for the http request + :return: True when the zone creation is complete, False if the timeout expires + """ + zone_id = zone_change[u'zone'][u'id'] + retries = MAX_RETRIES + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + while response != 200 and retries > 0: + url = urljoin(self.index_url, u'/zones/{0}'.format(zone_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + retries -= 1 + time.sleep(RETRY_WAIT) + + return response == 200 + + def wait_until_recordset_exists(self, zone_id, record_set_id, **kwargs): + """ + Waits a period of time for the record set creation to complete. + + :param zone_id: the id of the zone the record set lives in + :param record_set_id: the id of the recordset that has been created.
+ :param kw: Additional parameters for the http request + :return: True when the recordset creation is complete False if the timeout expires + """ + retries = MAX_RETRIES + url = urljoin(self.index_url, u'/zones/{0}/recordsets/{1}'.format(zone_id, record_set_id)) + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + while response != 200 and retries > 0: + response, data = self.make_request(url, u'GET', self.headers, not_found_ok=True, status=(200, 404), **kwargs) + retries -= 1 + time.sleep(RETRY_WAIT) + + if response == 200: + return data + + return response == 200 + + def abandon_zones(self, zone_ids, **kwargs): + #delete each zone + for zone_id in zone_ids: + self.delete_zone(zone_id, status=(202, 404)) + + # Wait until each zone is gone + for zone_id in zone_ids: + success = self.wait_until_zone_deleted(zone_id) + assert_that(success, is_(True)) + + def wait_until_recordset_change_status(self, rs_change, expected_status): + """ + Waits a period of time for a recordset to be active by repeatedly fetching the recordset and testing + the recordset status + :param rs_change: The recordset change being evaluated, must include the id and the zone id + :return: The recordset change that is active, or it could still be pending if the number of retries was exhausted + """ + change = rs_change + retries = MAX_RETRIES + while change['status'] != expected_status and retries > 0: + latest_change = self.get_recordset_change(change['recordSet']['zoneId'], change['recordSet']['id'], + change['id'], status=(200,404)) + print "\r\n --- latest change is " + str(latest_change) + if "Unable to find record set change" in latest_change: + change = change + else: + change = latest_change + + time.sleep(RETRY_WAIT) + retries -= 1 + + if change['status'] != expected_status: + print 'Failed waiting for record change status' + if 'systemMessage' in change: + print 'systemMessage is ' + change['systemMessage'] + + assert_that(change['status'], is_(expected_status)) + return change + + def batch_is_completed(self, batch_change): + return batch_change['status'] in ['Complete', 'Failed', 'PartialFailure'] + + def wait_until_batch_change_completed(self, batch_change): + """ + Waits a period of time for a batch change to be complete (or failed) by repeatedly fetching the change and testing + the status + :param batch_change: The batch change being evaluated + :return: The batch change that is active, or it could still be pending if the number of retries was exhausted + """ + change = batch_change + retries = MAX_RETRIES + + while not self.batch_is_completed(change) and retries > 0: + latest_change = self.get_batch_change(change['id'], status=(200,404)) + print "\r\n --- latest change is " + str(latest_change) + if "cannot be found" in latest_change: + change = change + else: + change = latest_change + + time.sleep(RETRY_WAIT) + retries -= 1 + + if not self.batch_is_completed(change): + print 'Failed waiting for record change status' + print change + + assert_that(self.batch_is_completed(change), is_(True)) + return change diff --git a/modules/api/functional_test/zone_inject.py b/modules/api/functional_test/zone_inject.py new file mode 100644 index 000000000..111058e17 --- /dev/null +++ b/modules/api/functional_test/zone_inject.py @@ -0,0 +1,48 @@ +import requests +import json + +newzone = "http://localhost:9000/zones" + + +names = ["cap", "video", "aae", "papi", "dns-ops", "ios", "home", "android", "games", "viper", "headwaters", "xtv", "consec", "media", 
"accounts"]; + +records = ["10.25.3.2","155.65.10.3", "10.1.1.1", "168.82.76.5", "192.168.99.88", "FE80:0000:0000:0000:0202:B3FF:FE1E:8329", "GF77:0000:0000:0000:0411:B3DF:FE2E:4444", "CC42:0000:0000:0000:0509:B3FF:FE3E:6543", "BG50:0000:0000:0000:0203:C2EE:G3F4:9823","AA90:0000:0000:0000:0608:C2EE:FE4E:1234", "staging", "test", "admin", "assets", "admin"]; + +for x in range(0, 15): + zonename = names[x] + zoneemail = 'testuser'+ str(x) +'@example.com' + payload = {"name": zonename, "origin": "vinyldns", "email": zoneemail} + headers = {'Content-type': 'application/json'} + r = requests.post(newzone, data=json.dumps(payload),headers=headers) + print(r.text) + + +zones = requests.get(newzone) +zone_data = zones.json() + +z=0 +for i in zone_data['zones']: + if z<5: + z=z+1 + recurl = newzone + '/' + str(i['id']) + '/recordsets' + print recurl + payload = {"zoneId":i['id'],"name":"record."+i['name'],"type":"A","ttl":300,"records":[{"address":records[z-1]}]} + headers = {'Content-type': 'application/json'} + r = requests.post(recurl, data=json.dumps(payload),headers=headers) + print(r.text) + elif 4 + + + %msg%n + + + + target/test/test.log + true + + %-4relative [%thread] %-5level %logger{35} - %msg%n + + + + + + + diff --git a/modules/api/src/it/scala/vinyldns/api/domain/dns/DnsConversionsIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/domain/dns/DnsConversionsIntegrationSpec.scala new file mode 100644 index 000000000..a3cdc03fa --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/domain/dns/DnsConversionsIntegrationSpec.scala @@ -0,0 +1,63 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.domain.dns + +import org.scalatest.{BeforeAndAfterAll, Matchers, WordSpec} +import org.xbill.DNS +import vinyldns.api.{ResultHelpers, VinylDNSTestData} +import vinyldns.api.domain.dns.DnsProtocol.{DnsResponse, NoError} +import vinyldns.api.domain.record.RecordSetChange +import vinyldns.api.domain.zone.{Zone, ZoneConnection, ZoneStatus} + +class DnsConversionsIntegrationSpec + extends WordSpec + with Matchers + with BeforeAndAfterAll + with VinylDNSTestData + with ResultHelpers { + + private val zoneName = "vinyldns." 
+ private var testZone: Zone = _ + + override protected def beforeAll(): Unit = + testZone = Zone( + zoneName, + "test@test.com", + ZoneStatus.Active, + connection = Some( + ZoneConnection("vinyldns.", "vinyldns.", "nzisn+4G2ldMn0q1CV3vsg==", "127.0.0.1:19001")), + transferConnection = Some( + ZoneConnection("vinyldns.", "vinyldns.", "nzisn+4G2ldMn0q1CV3vsg==", "127.0.0.1:19001")) + ) + + "Obscuring Dns Messages" should { + "remove the tsig key value during an update" in { + val testRecord = aaaa.copy(zoneId = testZone.id) + val conn = DnsConnection(testZone.connection.get) + val result: DnsResponse = + rightResultOf(conn.addRecord(RecordSetChange.forAdd(testRecord, testZone)).run) + + result shouldBe a[NoError] + val resultingMessage = result.asInstanceOf[NoError].message + resultingMessage.getSectionArray(DNS.Section.ADDITIONAL) shouldBe empty + + val resultingMessageString = resultingMessage.toString + + resultingMessageString should not contain "TSIG" + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/domain/record/RecordSetServiceIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/domain/record/RecordSetServiceIntegrationSpec.scala new file mode 100644 index 000000000..bfbf945c8 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/domain/record/RecordSetServiceIntegrationSpec.scala @@ -0,0 +1,303 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.domain.record + +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.Matchers +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.mockito.MockitoSugar +import org.scalatest.time.{Seconds, Span} +import scalaz.\/ +import vinyldns.api.domain.AccessValidations +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.membership.{Group, User, UserRepository} +import vinyldns.api.domain.record.RecordType._ +import vinyldns.api.domain.zone.{RecordSetAlreadyExists, Zone, ZoneRepository, ZoneStatus} +import vinyldns.api.engine.sqs.TestSqsService +import vinyldns.api.repository.dynamodb.{DynamoDBIntegrationSpec, DynamoDBRecordSetRepository} +import vinyldns.api.repository.mysql.VinylDNSJDBC + +import scala.concurrent.ExecutionContext.Implicits.global +import scala.concurrent.duration._ +import scala.concurrent.{Await, Future} + +class RecordSetServiceIntegrationSpec + extends DynamoDBIntegrationSpec + with MockitoSugar + with Matchers { + + private val recordSetTable = "recordSetTest" + + private val liveTestConfig = ConfigFactory.parseString(s""" + | recordSet { + | # use the dummy store, this should only be used local + | dummy = true + | + | dynamo { + | tableName = "$recordSetTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + | } + """.stripMargin) + + private val recordSetStoreConfig = liveTestConfig.getConfig("recordSet") + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + private var recordSetRepo: DynamoDBRecordSetRepository = _ + private var zoneRepo: ZoneRepository = _ + + private var testRecordSetService: RecordSetServiceAlgebra = _ + + private val user = User("live-test-user", "key", "secret") + private val group = Group(s"test-group", "test@test.com", adminUserIds = Set(user.id)) + private val auth = AuthPrincipal(user, Seq(group.id)) + + private val zone = Zone( + s"live-zone-test.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection, + adminGroupId = group.id) + private val apexTestRecordA = RecordSet( + zone.id, + "live-zone-test", + A, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AData("10.1.1.1"))) + private val apexTestRecordAAAA = RecordSet( + zone.id, + "live-zone-test", + AAAA, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AAAAData("fd69:27cc:fe91::60"))) + private val subTestRecordA = RecordSet( + zone.id, + "a-record", + A, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AData("10.1.1.1"))) + private val subTestRecordAAAA = RecordSet( + zone.id, + "aaaa-record", + AAAA, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AAAAData("fd69:27cc:fe91::60"))) + private val subTestRecordNS = RecordSet( + zone.id, + "ns-record", + NS, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(NSData("172.17.42.1."))) + + private val zoneTestNameConflicts = Zone( + s"zone-test-name-conflicts.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection, + adminGroupId = group.id) + private val apexTestRecordNameConflict = RecordSet( + zoneTestNameConflicts.id, + "zone-test-name-conflicts.", + A, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AData("10.1.1.1"))) + private val subTestRecordNameConflict = RecordSet( + zoneTestNameConflicts.id, + "relative-name-conflict", + A, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AData("10.1.1.1"))) + + private val 
zoneTestAddRecords = Zone( + s"zone-test-add-records.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection, + adminGroupId = group.id) + + def setup(): Unit = { + recordSetRepo = new DynamoDBRecordSetRepository(recordSetStoreConfig, dynamoDBHelper) + zoneRepo = VinylDNSJDBC.instance.zoneRepository + + List(zone, zoneTestNameConflicts, zoneTestAddRecords).map(z => waitForSuccess(zoneRepo.save(z))) + + // Seeding records in DB + val records = List( + apexTestRecordA, + apexTestRecordAAAA, + subTestRecordA, + subTestRecordAAAA, + subTestRecordNS, + apexTestRecordNameConflict, + subTestRecordNameConflict) + records.map(record => waitForSuccess(recordSetRepo.putRecordSet(record))) + + testRecordSetService = new RecordSetService( + zoneRepo, + recordSetRepo, + mock[RecordChangeRepository], + mock[UserRepository], + TestSqsService, + new AccessValidations()) + } + + def tearDown(): Unit = () + + "DynamoDBRecordSetRepository" should { + "not alter record name when seeding database for tests" in { + val originalRecord = testRecordSetService + .getRecordSet(apexTestRecordA.id, apexTestRecordA.zoneId, auth) + .run + .mapTo[Throwable \/ RecordSet] + whenReady(originalRecord, timeout) { out => + rightValue(out).name shouldBe "live-zone-test" + } + } + } + + "RecordSetService" should { + "create apex record without trailing dot and save record name with trailing dot" in { + val newRecord = RecordSet( + zoneTestAddRecords.id, + "zone-test-add-records", + A, + 38400, + RecordSetStatus.Active, + DateTime.now, + None, + List(AData("10.1.1.1"))) + val result = + testRecordSetService.addRecordSet(newRecord, auth).run.mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + rightValue(out).recordSet.name shouldBe "zone-test-add-records." + } + } + + "update apex A record and add trailing dot" in { + val newRecord = apexTestRecordA.copy(ttl = 200) + val result = testRecordSetService + .updateRecordSet(newRecord, auth) + .run + .mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + val change = rightValue(out) + change.recordSet.name shouldBe "live-zone-test." + change.recordSet.ttl shouldBe 200 + } + } + + "update apex AAAA record and add trailing dot" in { + val newRecord = apexTestRecordAAAA.copy(ttl = 200) + val result = testRecordSetService + .updateRecordSet(newRecord, auth) + .run + .mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + val change = rightValue(out) + change.recordSet.name shouldBe "live-zone-test." 
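+        // the updated apex record name comes back fully qualified (trailing dot added)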
+ change.recordSet.ttl shouldBe 200 + } + } + + "update relative A record without adding trailing dot" in { + val newRecord = subTestRecordA.copy(ttl = 200) + val result = testRecordSetService + .updateRecordSet(newRecord, auth) + .run + .mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + val change = rightValue(out) + change.recordSet.name shouldBe "a-record" + change.recordSet.ttl shouldBe 200 + } + } + + "update relative AAAA without adding trailing dot" in { + val newRecord = subTestRecordAAAA.copy(ttl = 200) + val result = testRecordSetService + .updateRecordSet(newRecord, auth) + .run + .mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + val change = rightValue(out) + change.recordSet.name shouldBe "aaaa-record" + change.recordSet.ttl shouldBe 200 + } + } + + "update relative NS record without trailing dot" in { + val newRecord = subTestRecordNS.copy(ttl = 200) + val superAuth = AuthPrincipal(okGroupAuth.signedInUser.copy(isSuper = true), Seq.empty) + val result = testRecordSetService + .updateRecordSet(newRecord, superAuth) + .run + .mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + val change = rightValue(out) + change.recordSet.name shouldBe "ns-record" + change.recordSet.ttl shouldBe 200 + } + } + + "fail to add relative record if apex record with same name already exists" in { + val newRecord = apexTestRecordNameConflict.copy(name = "zone-test-name-conflicts") + val result = + testRecordSetService.addRecordSet(newRecord, auth).run.mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + leftValue(out) shouldBe a[RecordSetAlreadyExists] + } + } + + "fail to add apex record if relative record with same name already exists" in { + val newRecord = subTestRecordNameConflict.copy(name = "relative-name-conflict.") + val result = + testRecordSetService.addRecordSet(newRecord, auth).run.mapTo[Throwable \/ RecordSetChange] + whenReady(result, timeout) { out => + leftValue(out) shouldBe a[RecordSetAlreadyExists] + } + } + } + + private def waitForSuccess[T](f: => Future[T]): T = { + val waiting = f.recover { case _ => Thread.sleep(2000); waitForSuccess(f) } + Await.result[T](waiting, 15.seconds) + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneServiceIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneServiceIntegrationSpec.scala new file mode 100644 index 000000000..c98aae1af --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/domain/zone/ZoneServiceIntegrationSpec.scala @@ -0,0 +1,156 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.domain.zone + +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.mockito.MockitoSugar +import org.scalatest.time.{Seconds, Span} +import scalaz.\/ +import vinyldns.api.domain.AccessValidations +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.membership.{Group, GroupRepository, User, UserRepository} +import vinyldns.api.domain.record._ +import vinyldns.api.engine.sqs.TestSqsService +import vinyldns.api.repository.dynamodb.{DynamoDBIntegrationSpec, DynamoDBRecordSetRepository} +import vinyldns.api.repository.mysql.VinylDNSJDBC + +import scala.concurrent.ExecutionContext.Implicits.global +import scala.concurrent.duration._ +import scala.concurrent.{Await, Future} + +class ZoneServiceIntegrationSpec extends DynamoDBIntegrationSpec with MockitoSugar { + + private val recordSetTable = "recordSetTest" + + private val liveTestConfig = ConfigFactory.parseString(s""" + | recordSet { + | # use the dummy store, this should only be used local + | dummy = true + | + | dynamo { + | tableName = "$recordSetTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + | } + """.stripMargin) + + private val recordSetStoreConfig = liveTestConfig.getConfig("recordSet") + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + private var recordSetRepo: RecordSetRepository = _ + private var zoneRepo: ZoneRepository = _ + + private var testZoneService: ZoneServiceAlgebra = _ + + private val user = User(s"live-test-user", "key", "secret") + private val group = Group(s"test-group", "test@test.com", adminUserIds = Set(user.id)) + private val auth = AuthPrincipal(user, Seq(group.id)) + private val badAuth = AuthPrincipal(user, Seq()) + private val zone = Zone( + s"live-test-zone.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection, + adminGroupId = group.id) + + private val testRecordSOA = RecordSet( + zoneId = zone.id, + name = "vinyldns", + typ = RecordType.SOA, + ttl = 38400, + status = RecordSetStatus.Active, + created = DateTime.now, + records = + List(SOAData("172.17.42.1.", "admin.test.com.", 1439234395, 10800, 3600, 604800, 38400)) + ) + private val testRecordNS = RecordSet( + zoneId = zone.id, + name = "vinyldns", + typ = RecordType.NS, + ttl = 38400, + status = RecordSetStatus.Active, + created = DateTime.now, + records = List(NSData("172.17.42.1."))) + private val testRecordA = RecordSet( + zoneId = zone.id, + name = "jenkins", + typ = RecordType.A, + ttl = 38400, + status = RecordSetStatus.Active, + created = DateTime.now, + records = List(AData("10.1.1.1"))) + + private val changeSetSOA = ChangeSet(RecordSetChange.forAdd(testRecordSOA, zone)) + private val changeSetNS = ChangeSet(RecordSetChange.forAdd(testRecordNS, zone)) + private val changeSetA = ChangeSet(RecordSetChange.forAdd(testRecordA, zone)) + + def setup(): Unit = { + recordSetRepo = new DynamoDBRecordSetRepository(recordSetStoreConfig, dynamoDBHelper) + zoneRepo = VinylDNSJDBC.instance.zoneRepository + + waitForSuccess(zoneRepo.save(zone)) + // Seeding records in DB + waitForSuccess(recordSetRepo.apply(changeSetSOA)) + waitForSuccess(recordSetRepo.apply(changeSetNS)) + waitForSuccess(recordSetRepo.apply(changeSetA)) + + testZoneService = new ZoneService( + zoneRepo, + mock[GroupRepository], + mock[UserRepository], + mock[ZoneChangeRepository], + mock[ZoneConnectionValidator], + TestSqsService, + new ZoneValidations(1000), + new 
AccessValidations() + ) + } + + def tearDown(): Unit = () + + "ZoneEntity" should { + "reject a DeleteZone with bad auth" in { + val result = + testZoneService.deleteZone(zone.id, badAuth).run.mapTo[Throwable \/ ZoneCommandResult] + whenReady(result, timeout) { _ => + val error = leftResultOf(result) + error shouldBe a[NotAuthorizedError] + } + } + "accept a DeleteZone" in { + val removeARecord = ChangeSet(RecordSetChange.forDelete(testRecordA, zone)) + waitForSuccess(recordSetRepo.apply(removeARecord)) + + val result = testZoneService.deleteZone(zone.id, auth).run.mapTo[Throwable \/ ZoneChange] + whenReady(result, timeout) { out => + out.isRight shouldBe true + val change = out.toOption.get + change.zone.id shouldBe zone.id + change.changeType shouldBe ZoneChangeType.Delete + } + } + } + + private def waitForSuccess[T](f: => Future[T]): T = { + val waiting = f.recover { case _ => Thread.sleep(2000); waitForSuccess(f) } + Await.result[T](waiting, 15.seconds) + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/engine/ZoneCommandHandlerIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/engine/ZoneCommandHandlerIntegrationSpec.scala new file mode 100644 index 000000000..d3b4f620b --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/engine/ZoneCommandHandlerIntegrationSpec.scala @@ -0,0 +1,238 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.engine + +import java.util.concurrent.Executors + +import cats.effect.IO +import com.typesafe.config.ConfigFactory +import fs2.{Scheduler, Stream} +import org.joda.time.DateTime +import org.scalatest.concurrent.Eventually +import org.scalatest.time.{Millis, Seconds, Span} +import vinyldns.api.domain.batch.BatchChangeRepository +import vinyldns.api.domain.record._ +import vinyldns.api.domain.zone._ +import vinyldns.api.engine.sqs.SqsConnection +import vinyldns.api.repository.dynamodb.{ + DynamoDBIntegrationSpec, + DynamoDBRecordChangeRepository, + DynamoDBRecordSetRepository, + DynamoDBZoneChangeRepository +} +import vinyldns.api.repository.mysql.VinylDNSJDBC + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class ZoneCommandHandlerIntegrationSpec extends DynamoDBIntegrationSpec with Eventually { + + import vinyldns.api.engine.sqs.SqsConverters._ + + private implicit val sched: Scheduler = + Scheduler.fromScheduledExecutorService(Executors.newScheduledThreadPool(2)) + + private val zoneName = "vinyldns." 
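+  // DynamoDB table names backing the zone change, recordset, and record change repositories used in this spec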
+ private val zoneChangeTable = "zoneChangesTest" + private val recordSetTable = "recordSetTest" + private val recordChangeTable = "recordChangeTest" + + private val liveTestConfig = ConfigFactory.parseString(s""" + | zoneChanges { + | # use the dummy store, this should only be used local + | dummy = true + | + | dynamo { + | tableName = "$zoneChangeTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + | } + | recordSet { + | # use the dummy store, this should only be used local + | dummy = true + | + | dynamo { + | tableName = "$recordSetTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + | } + | recordChange { + | # use the dummy store, this should only be used local + | dummy = true + | + | dynamo { + | tableName = "$recordChangeTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + | } + """.stripMargin) + + private val zoneChangeStoreConfig = liveTestConfig.getConfig("zoneChanges") + private val recordSetStoreConfig = liveTestConfig.getConfig("recordSet") + private val recordChangeStoreConfig = liveTestConfig.getConfig("recordChange") + + private implicit val defaultPatience: PatienceConfig = + PatienceConfig(timeout = Span(5, Seconds), interval = Span(500, Millis)) + private implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.global + + private var recordChangeRepo: RecordChangeRepository = _ + private var recordSetRepo: RecordSetRepository = _ + private var zoneChangeRepo: ZoneChangeRepository = _ + private var zoneRepo: ZoneRepository = _ + private var batchChangeRepo: BatchChangeRepository = _ + private var sqsConn: SqsConnection = _ + private var str: Stream[IO, Unit] = _ + private val stopSignal = fs2.async.signalOf[IO, Boolean](false).unsafeRunSync() + + // Items to seed in DB + private val testZone = Zone( + zoneName, + "test@test.com", + ZoneStatus.Active, + connection = + Some(ZoneConnection("vinyldns.", "vinyldns.", "nzisn+4G2ldMn0q1CV3vsg==", "127.0.0.1:19001")), + transferConnection = + Some(ZoneConnection("vinyldns.", "vinyldns.", "nzisn+4G2ldMn0q1CV3vsg==", "127.0.0.1:19001")) + ) + private val inDbRecordSet = RecordSet( + zoneId = testZone.id, + name = "inDb", + typ = RecordType.A, + ttl = 38400, + status = RecordSetStatus.Active, + created = DateTime.now, + records = List(AData("1.2.3.4"))) + private val inDbRecordChange = ChangeSet(RecordSetChange.forSyncAdd(inDbRecordSet, testZone)) + private val inDbZoneChange = + ZoneChange.forUpdate(testZone.copy(email = "new@test.com"), testZone, okUserAuth) + + private val inDbRecordSetForSyncTest = RecordSet( + zoneId = testZone.id, + name = "vinyldns", + typ = RecordType.A, + ttl = 38400, + status = RecordSetStatus.Active, + created = DateTime.now, + records = List(AData("5.5.5.5"))) + private val inDbRecordChangeForSyncTest = ChangeSet( + RecordSetChange( + testZone, + inDbRecordSetForSyncTest, + okUserAuth.signedInUser.id, + RecordSetChangeType.Create, + RecordSetChangeStatus.Pending)) + + override def anonymize(recordSet: RecordSet): RecordSet = { + val fakeTime = new DateTime(2010, 1, 1, 0, 0) + recordSet.copy(id = "a", created = fakeTime, updated = None) + } + + def setup(): Unit = { + recordChangeRepo = new DynamoDBRecordChangeRepository(recordChangeStoreConfig, dynamoDBHelper) + recordSetRepo = new DynamoDBRecordSetRepository(recordSetStoreConfig, dynamoDBHelper) + zoneChangeRepo = new DynamoDBZoneChangeRepository(zoneChangeStoreConfig, dynamoDBHelper) + zoneRepo = VinylDNSJDBC.instance.zoneRepository + batchChangeRepo = VinylDNSJDBC.instance.batchChangeRepository + 
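+    // SQS connection the tests use with sendCommand to enqueue zone and recordset commands for the handler under test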
sqsConn = SqsConnection() + + //seed items database + waitForSuccess(zoneRepo.save(testZone)) + waitForSuccess(recordChangeRepo.save(inDbRecordChange)) + waitForSuccess(recordChangeRepo.save(inDbRecordChangeForSyncTest)) + waitForSuccess(recordSetRepo.apply(inDbRecordChange)) + waitForSuccess(recordSetRepo.apply(inDbRecordChangeForSyncTest)) + waitForSuccess(zoneChangeRepo.save(inDbZoneChange)) + // Run a noop query to make sure recordSetRepo is up + waitForSuccess(recordSetRepo.listRecordSets("1", None, None, None)) + + str = ZoneCommandHandler.mainFlow( + zoneRepo, + zoneChangeRepo, + recordSetRepo, + recordChangeRepo, + batchChangeRepo, + sqsConn, + 100.millis, + stopSignal) + str.compile.drain.unsafeRunAsync { _ => + () + } + } + + def tearDown(): Unit = { + stopSignal.set(true).unsafeRunSync() + Thread.sleep(2000) + } + + "ZoneCommandHandler" should { + "process a zone change" in { + val change = + ZoneChange.forUpdate(testZone.copy(email = "updated@test.com"), testZone, okUserAuth) + + sendCommand(change, sqsConn).unsafeRunSync() + eventually { + val getZone = zoneRepo.getZone(testZone.id) + whenReady(getZone) { zn => + zn.get.email shouldBe "updated@test.com" + } + } + } + + "process a recordset change" in { + val change = + RecordSetChange.forUpdate(inDbRecordSet, inDbRecordSet.copy(ttl = 1234), testZone) + sendCommand(change, sqsConn).unsafeRunSync() + eventually { + val getRs = recordSetRepo.getRecordSet(testZone.id, inDbRecordSet.id) + whenReady(getRs) { rs => + rs.get.ttl shouldBe 1234 + } + } + } + "process a zone sync" in { + val change = ZoneChange.forSync(testZone, okUserAuth) + + sendCommand(change, sqsConn).unsafeRunSync() + eventually { + val validatingQueries = for { + rs <- recordSetRepo.getRecordSet(testZone.id, inDbRecordSetForSyncTest.id) + ch <- recordChangeRepo.listRecordSetChanges(testZone.id) + } yield (rs, ch) + + whenReady(validatingQueries) { data => + val rs = data._1 + rs.get.name shouldBe "vinyldns." + + val updates = data._2 + val forThisRecord = updates.items.filter(_.recordSet.id == inDbRecordSetForSyncTest.id) + + forThisRecord.length shouldBe 2 + forThisRecord.exists(_.changeType == RecordSetChangeType.Create) shouldBe true + forThisRecord.exists(_.changeType == RecordSetChangeType.Update) shouldBe true + } + } + } + } + + private def waitForSuccess[T](f: => Future[T]): T = { + val waiting = f.recover { case _ => Thread.sleep(2000); waitForSuccess(f) } + Await.result[T](waiting, 15.seconds) + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupChangeRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupChangeRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..75b9a1a09 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupChangeRepositoryIntegrationSpec.scala @@ -0,0 +1,226 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.repository.dynamodb + +import java.util +import java.util.Collections + +import com.amazonaws.services.dynamodbv2.model._ +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.concurrent.{Eventually, PatienceConfiguration} +import org.scalatest.time.{Seconds, Span} + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class DynamoDBGroupChangeRepositoryIntegrationSpec extends DynamoDBIntegrationSpec with Eventually { + private implicit def dateTimeOrdering: Ordering[DateTime] = Ordering.fromLessThan(_.isAfter(_)) + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + + private val GROUP_CHANGES_TABLE = "group-changes-live" + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$GROUP_CHANGES_TABLE" + | provisionedReads=30 + | provisionedWrites=30 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBGroupChangeRepository = _ + + private val groupChanges = Seq(okGroupChange, okGroupChangeUpdate, okGroupChangeDelete) ++ + listOfDummyGroupChanges ++ listOfRandomTimeGroupChanges + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBGroupChangeRepository(tableConfig, dynamoDBHelper) + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready(repo.getGroupChange("any"), 5.seconds) + notReady = result.value.get.isFailure + Thread.sleep(2000) + } + + clearGroupChanges() + + // Create all the changes + val savedGroupChanges = Future.sequence(groupChanges.map(repo.save)) + + // Wait until all of the changes are done + Await.result(savedGroupChanges, 5.minutes) + } + + def tearDown(): Unit = { + + val request = new DeleteTableRequest().withTableName(GROUP_CHANGES_TABLE) + val deleteTables = dynamoDBHelper.deleteTable(request) + Await.ready(deleteTables, 100.seconds) + } + + private def clearGroupChanges(): Unit = { + + import scala.collection.JavaConverters._ + + val scanRequest = new ScanRequest().withTableName(GROUP_CHANGES_TABLE) + + val allGroupChanges = dynamoClient.scan(scanRequest).getItems.asScala.map(repo.fromItem) + + val batchWrites = allGroupChanges + .map { groupChange => + val key = new util.HashMap[String, AttributeValue]() + key.put("group_change_id", new AttributeValue(groupChange.id)) + new WriteRequest().withDeleteRequest(new DeleteRequest().withKey(key)) + } + .grouped(25) + .map { deleteRequests => + new BatchWriteItemRequest() + .withRequestItems(Collections.singletonMap(GROUP_CHANGES_TABLE, deleteRequests.asJava)) + } + .toList + + batchWrites.foreach { batch => + dynamoClient.batchWriteItem(batch) + } + } + + "DynamoDBGroupChangeRepository" should { + "get a group change by id" in { + val targetGroupChange = okGroupChange + whenReady(repo.getGroupChange(targetGroupChange.id), timeout) { retrieved => + retrieved shouldBe Some(targetGroupChange) + } + } + + "return none when no matching id is found" in { + whenReady(repo.getGroupChange("NotFound"), timeout) { retrieved => + retrieved shouldBe None + } + } + + "save a group change with oldGroup = None" in { + val targetGroupChange = okGroupChange + + val test = + for { + saved <- repo.save(targetGroupChange) + retrieved <- repo.getGroupChange(saved.id) + } yield retrieved + + whenReady(test, timeout) { saved => + saved shouldBe 
Some(targetGroupChange) + } + } + + "save a group change with oldGroup set" in { + val targetGroupChange = okGroupChangeUpdate + + val test = + for { + saved <- repo.save(targetGroupChange) + retrieved <- repo.getGroupChange(saved.id) + } yield retrieved + + whenReady(test, timeout) { saved => + saved shouldBe Some(targetGroupChange) + } + } + + "getGroupChanges should return the recent changes and the correct last key" in { + whenReady(repo.getGroupChanges(oneUserDummyGroup.id, None, 100), timeout) { retrieved => + retrieved.changes should contain theSameElementsAs listOfDummyGroupChanges.slice(0, 100) + retrieved.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(99).created.getMillis.toString) + } + } + + "getGroupChanges should start using the time startFrom" in { + whenReady( + repo.getGroupChanges( + oneUserDummyGroup.id, + Some(listOfDummyGroupChanges(50).created.getMillis.toString), + 100), + timeout) { retrieved => + retrieved.changes should contain theSameElementsAs listOfDummyGroupChanges.slice(51, 151) + retrieved.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(150).created.getMillis.toString) + } + } + + "getGroupChanges returns entire page and nextId = None if there are less than maxItems left" in { + whenReady( + repo.getGroupChanges( + oneUserDummyGroup.id, + Some(listOfDummyGroupChanges(200).created.getMillis.toString), + 100), + timeout) { retrieved => + retrieved.changes should contain theSameElementsAs listOfDummyGroupChanges.slice(201, 300) + retrieved.lastEvaluatedTimeStamp shouldBe None + } + } + + "getGroupChanges returns 3 pages of items" in { + val test = + for { + page1 <- repo.getGroupChanges(oneUserDummyGroup.id, None, 100) + page2 <- repo.getGroupChanges(oneUserDummyGroup.id, page1.lastEvaluatedTimeStamp, 100) + page3 <- repo.getGroupChanges(oneUserDummyGroup.id, page2.lastEvaluatedTimeStamp, 100) + page4 <- repo.getGroupChanges(oneUserDummyGroup.id, page3.lastEvaluatedTimeStamp, 100) + } yield (page1, page2, page3, page4) + whenReady(test, timeout) { retrieved => + retrieved._1.changes should contain theSameElementsAs listOfDummyGroupChanges.slice(0, 100) + retrieved._1.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(99).created.getMillis.toString) + retrieved._2.changes should contain theSameElementsAs listOfDummyGroupChanges.slice( + 100, + 200) + retrieved._2.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(199).created.getMillis.toString) + retrieved._3.changes should contain theSameElementsAs listOfDummyGroupChanges.slice( + 200, + 300) + retrieved._3.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(299).created.getMillis.toString) // the limit was reached before the end of list + retrieved._4.changes should contain theSameElementsAs List() // no matches found in the rest of the list + retrieved._4.lastEvaluatedTimeStamp shouldBe None + } + } + + "getGroupChanges should return `maxItem` items" in { + whenReady(repo.getGroupChanges(oneUserDummyGroup.id, None, 5), timeout) { retrieved => + retrieved.changes should contain theSameElementsAs listOfDummyGroupChanges.slice(0, 5) + retrieved.lastEvaluatedTimeStamp shouldBe Some( + listOfDummyGroupChanges(4).created.getMillis.toString) + } + } + + "getGroupChanges should handle changes inserted in random order" in { + // group changes have a random time stamp and inserted in random order + eventually(timeout) { + whenReady(repo.getGroupChanges(randomTimeGroup.id, None, 100), timeout) { retrieved => + val sorted = 
listOfRandomTimeGroupChanges.sortBy(_.created) + retrieved.changes should contain theSameElementsAs sorted.slice(0, 100) + retrieved.lastEvaluatedTimeStamp shouldBe Some(sorted(99).created.getMillis.toString) + } + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..09d4695cb --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBGroupRepositoryIntegrationSpec.scala @@ -0,0 +1,238 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.repository.dynamodb + +import java.util +import java.util.Collections + +import com.amazonaws.services.dynamodbv2.model._ +import com.typesafe.config.ConfigFactory +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.time.{Seconds, Span} +import vinyldns.api.domain.membership.{Group, GroupStatus} + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class DynamoDBGroupRepositoryIntegrationSpec extends DynamoDBIntegrationSpec { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + + private val GROUP_TABLE = "groups-live" + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$GROUP_TABLE" + | provisionedReads=30 + | provisionedWrites=30 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBGroupRepository = _ + + private val activeGroups = + for (i <- 1 to 10) + yield + Group( + s"live-test-group$i", + s"test$i@test.com", + Some(s"description$i"), + memberIds = Set(s"member$i", s"member2$i"), + adminUserIds = Set(s"member$i", s"member2$i"), + id = "id-%03d".format(i) + ) + + private val inDbDeletedGroup = Group( + s"live-test-group-deleted", + s"test@test.com", + Some(s"description"), + memberIds = Set("member1"), + adminUserIds = Set("member1"), + id = "id-deleted-group", + status = GroupStatus.Deleted + ) + private val groups = activeGroups ++ List(inDbDeletedGroup) + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBGroupRepository(tableConfig, dynamoDBHelper) + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready(repo.getGroup("any"), 5.seconds) + notReady = result.value.get.isFailure + Thread.sleep(2000) + } + + clearGroups() + + // Create all the zones + val savedGroups = Future.sequence(groups.map(repo.save)) + + // Wait until all of the zones are done + Await.result(savedGroups, 5.minutes) + } + + def tearDown(): Unit = { + val request = new DeleteTableRequest().withTableName(GROUP_TABLE) + val deleteTables = dynamoDBHelper.deleteTable(request) + 
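// deleteTable returns a Future; the Await below blocks until the table drop has completed before teardown finishes.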
Await.ready(deleteTables, 100.seconds) + } + + private def clearGroups(): Unit = { + + import scala.collection.JavaConverters._ + + val scanRequest = new ScanRequest().withTableName(GROUP_TABLE) + + val allGroups = dynamoClient.scan(scanRequest).getItems.asScala.map(repo.fromItem) + + val batchWrites = allGroups + .map { group => + val key = new util.HashMap[String, AttributeValue]() + key.put("group_id", new AttributeValue(group.id)) + new WriteRequest().withDeleteRequest(new DeleteRequest().withKey(key)) + } + .grouped(25) + .map { deleteRequests => + new BatchWriteItemRequest() + .withRequestItems(Collections.singletonMap(GROUP_TABLE, deleteRequests.asJava)) + } + .toList + + batchWrites.foreach { batch => + dynamoClient.batchWriteItem(batch) + } + } + + "DynamoDBGroupRepository" should { + "get a group by id" in { + val targetGroup = groups.head + whenReady(repo.getGroup(targetGroup.id), timeout) { retrieved => + retrieved.get shouldBe targetGroup + } + } + + "get all active groups" in { + whenReady(repo.getAllGroups(), timeout) { retrieved => + retrieved shouldBe activeGroups.toSet + } + } + + "not return a deleted group when getting group by id" in { + val deleted = deletedGroup.copy(memberIds = Set("foo"), adminUserIds = Set("foo")) + val f = + for { + _ <- repo.save(deleted) + retrieved <- repo.getGroup(deleted.id) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe None + } + } + + "not return a deleted group when getting group by name" in { + val deleted = deletedGroup.copy(memberIds = Set("foo"), adminUserIds = Set("foo")) + val f = + for { + _ <- repo.save(deleted) + retrieved <- repo.getGroupByName(deleted.name) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe None + } + } + + "get groups should omit non existing groups" in { + val f = repo.getGroups(Set(activeGroups.head.id, "thisdoesnotexist")) + whenReady(f, timeout) { retrieved => + retrieved.map(_.id) should contain theSameElementsAs Set(activeGroups.head.id) + } + } + + "returns all the groups" in { + val f = repo.getGroups(groups.map(_.id).toSet) + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs activeGroups + } + } + + "only return requested groups" in { + val evenGroups = activeGroups.filter(_.id.takeRight(1).toInt % 2 == 0) + val f = repo.getGroups(evenGroups.map(_.id).toSet) + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs evenGroups + } + } + + "return an Empty set if nothing found" in { + val f = repo.getGroups(Set("notFound")) + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs Set() + } + } + + "not return deleted groups" in { + val deleted = deletedGroup.copy( + id = "test-deleted-group-get-groups", + memberIds = Set("foo"), + adminUserIds = Set("foo")) + val f = + for { + _ <- repo.save(deleted) + retrieved <- repo.getGroups(Set(deleted.id, groups.head.id)) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved.map(_.id) shouldBe Set(groups.head.id) + } + } + + "get a group by name" in { + val targetGroup = groups.head + whenReady(repo.getGroupByName(targetGroup.name), timeout) { retrieved => + retrieved.get shouldBe targetGroup + } + } + + "save a group with no description" in { + val group = Group( + "null-description", + "test@test.com", + None, + memberIds = Set("foo"), + adminUserIds = Set("bar")) + + val test = + for { + saved <- repo.save(group) + retrieved <- repo.getGroup(saved.id) + } yield retrieved + + 
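// Round-trip check: persist the group, then read it back by the id returned from save before asserting on the description.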
whenReady(test, timeout) { saved => + saved.get.description shouldBe None + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBIntegrationSpec.scala new file mode 100644 index 000000000..5bde7c604 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBIntegrationSpec.scala @@ -0,0 +1,69 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.repository.dynamodb + +import java.util.UUID + +import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClient +import com.typesafe.config.{Config, ConfigFactory} +import org.scalatest._ +import org.scalatest.concurrent.ScalaFutures +import org.slf4j.LoggerFactory +import vinyldns.api.domain.dns.DnsConversions +import vinyldns.api.{GroupTestData, ResultHelpers, VinylDNSTestData} + +trait DynamoDBIntegrationSpec + extends WordSpec + with BeforeAndAfterAll + with DnsConversions + with VinylDNSTestData + with GroupTestData + with ResultHelpers + with BeforeAndAfterEach + with Matchers + with ScalaFutures + with Inspectors { + + // this is defined in the docker/docker-compose.yml file for dynamodb + val port: Int = 19000 + val endpoint: String = s"http://localhost:$port" + + val dynamoConfig: Config = ConfigFactory.parseString(s""" + | key = "vinyldnsTest" + | secret = "notNeededForDynamoDbLocal" + | endpoint="$endpoint", + | region="us-east-1" + """.stripMargin) + val dynamoClient: AmazonDynamoDBClient = DynamoDBClient(dynamoConfig) + val dynamoDBHelper: DynamoDBHelper = + new DynamoDBHelper(dynamoClient, LoggerFactory.getLogger("DynamoDBIntegrationSpec")) + + override protected def beforeAll(): Unit = + setup() + + override protected def afterAll(): Unit = + tearDown() + + /* Allows a spec to initialize the database */ + def setup(): Unit + + /* Allows a spec to clean up */ + def tearDown(): Unit + + /* Generates a random string useful to avoid data collision */ + def genString: String = UUID.randomUUID().toString +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBMembershipRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBMembershipRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..2cb8507f2 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBMembershipRepositoryIntegrationSpec.scala @@ -0,0 +1,179 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.repository.dynamodb + +import com.typesafe.config.ConfigFactory +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.time.{Seconds, Span} + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class DynamoDBMembershipRepositoryIntegrationSpec extends DynamoDBIntegrationSpec { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + private val membershipTable = "membership-live" + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$membershipTable" + | provisionedReads=100 + | provisionedWrites=100 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBMembershipRepository = _ + + private val testUserIds = for (i <- 0 to 5) yield s"test-user-$i" + private val testGroupIds = for (i <- 0 to 5) yield s"test-group-$i" + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBMembershipRepository(tableConfig, dynamoDBHelper) + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready(repo.getGroupsForUser("any"), 5.seconds) + notReady = result.value.get.isFailure + Thread.sleep(2000) + } + + // Create all the items + val results = Future.sequence(testGroupIds.map(repo.addMembers(_, testUserIds.toSet))) + + // Wait until all of the data is stored + Await.result(results, 5.minutes) + } + + def tearDown(): Unit = { + val results = Future.sequence(testGroupIds.map(repo.removeMembers(_, testUserIds.toSet))) + + // Wait until all of the data is stored + Await.result(results, 5.minutes) + } + + "DynamoDBMembershipRepository" should { + val groupId = genString + val user1 = genString + val user2 = genString + "add members successfully" in { + whenReady(repo.addMembers(groupId, Set(user1, user2)), timeout) { memberIds => + memberIds should contain theSameElementsAs Set(user1, user2) + } + } + + "add a group to an existing user" in { + val group1 = genString + val group2 = genString + val user1 = genString + val f = + for { + _ <- repo.addMembers(group1, Set(user1)) + _ <- repo.addMembers(group2, Set(user1)) + userGroups <- repo.getGroupsForUser(user1) + } yield userGroups + + whenReady(f, timeout) { userGroups => + userGroups should contain theSameElementsAs Set(group1, group2) + } + } + + "return an empty set when getting groups for a user that does not exist" in { + whenReady(repo.getGroupsForUser("notHere"), timeout) { groupIds => + groupIds shouldBe empty + } + } + + "remove members successfully" in { + val group1 = genString + val group2 = genString + val user1 = genString + val f = + for { + _ <- repo.addMembers(group1, Set(user1)) + _ <- repo.addMembers(group2, Set(user1)) + _ <- repo.removeMembers(group1, Set(user1)) + userGroups <- repo.getGroupsForUser(user1) + } yield userGroups + + whenReady(f, timeout) { userGroups => + userGroups should contain theSameElementsAs Set(group2) + } + } + + "remove members not 
in group" in { + val group1 = genString + val user1 = genString + val user2 = genString + val f = + for { + _ <- repo.addMembers(group1, Set(user1)) + _ <- repo.removeMembers(group1, Set(user2)) + userGroups <- repo.getGroupsForUser(user2) + } yield userGroups + + whenReady(f, timeout) { userGroups => + userGroups shouldBe empty + } + } + + "remove all groups for user" in { + val group1 = genString + val group2 = genString + val group3 = genString + val user1 = genString + val f = + for { + _ <- repo.addMembers(group1, Set(user1)) + _ <- repo.addMembers(group2, Set(user1)) + _ <- repo.addMembers(group3, Set(user1)) + _ <- repo.removeMembers(group1, Set(user1)) + _ <- repo.removeMembers(group2, Set(user1)) + _ <- repo.removeMembers(group3, Set(user1)) + userGroups <- repo.getGroupsForUser(user1) + } yield userGroups + + whenReady(f, timeout) { userGroups => + userGroups shouldBe empty + } + } + + "retrieve all of the groups for a user" in { + val f = repo.getGroupsForUser(testUserIds.head) + + whenReady(f, timeout) { retrieved => + testGroupIds.foreach(groupId => retrieved should contain(groupId)) + } + } + + "remove members from a group" in { + val membersToRemove = testUserIds.toList.sorted.take(2).toSet + val groupsRemoved = testGroupIds.toList.sorted.take(2) + + val f = Future.sequence(groupsRemoved.map(repo.removeMembers(_, membersToRemove))) + + Await.result(f, 5.minutes) + + whenReady(repo.getGroupsForUser(membersToRemove.head), timeout) { groupsRetrieved => + forAll(groupsRetrieved) { groupId => + groupsRemoved should not contain groupId + } + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordChangeRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordChangeRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..fb2711183 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordChangeRepositoryIntegrationSpec.scala @@ -0,0 +1,378 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.repository.dynamodb + +import java.util +import java.util.UUID + +import com.amazonaws.services.dynamodbv2.model.{AttributeValue, DeleteItemRequest, ScanRequest} +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.concurrent.{Eventually, PatienceConfiguration} +import org.scalatest.time.{Seconds, Span} +import vinyldns.api.domain.record.{ChangeSet, ChangeSetStatus, RecordSetChange} +import vinyldns.api.domain.zone.{Zone, ZoneStatus} +import vinyldns.api.domain.{record, zone} + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext} + +class DynamoDBRecordChangeRepositoryIntegrationSpec + extends DynamoDBIntegrationSpec + with Eventually { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + private val recordChangeTable = "record-change-live" + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$recordChangeTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBRecordChangeRepository = _ + + private val user = abcAuth.signedInUser.userName + private val auth = abcAuth + + private val zoneA = Zone( + s"live-test-$user.zone-small.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection) + private val zoneB = zone.Zone( + s"live-test-$user.zone-large.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection) + + private val recordSetA = + for { + rsTemplate <- Seq(rsOk, aaaa, cname) + } yield + rsTemplate.copy( + zoneId = zoneA.id, + name = s"${rsTemplate.typ.toString}-${zoneA.account}.", + ttl = 100, + created = DateTime.now(), + id = UUID.randomUUID().toString + ) + + private val recordSetB = + for { + i <- 1 to 3 + } yield + rsOk.copy( + zoneId = zoneB.id, + name = s"${rsOk.typ.toString}-${zoneB.account}-$i.", + ttl = 100, + created = DateTime.now(), + id = UUID.randomUUID().toString + ) + + private val updateRecordSetA = + for { + rsTemplate <- Seq(rsOk, aaaa, cname) + } yield + rsTemplate.copy( + zoneId = zoneA.id, + name = s"${rsTemplate.typ.toString}-${zoneA.account}.", + ttl = 1000, + created = DateTime.now(), + id = UUID.randomUUID().toString + ) + + private val recordSetChangesA = { + for { + rs <- recordSetA + } yield RecordSetChange.forAdd(rs, zoneA, auth) + }.sortBy(_.id) + + private val recordSetChangesB = { + for { + rs <- recordSetB + } yield RecordSetChange.forAdd(rs, zoneB, auth) + }.sortBy(_.id) + + private val recordSetChangesC = { + for { + rs <- recordSetA + } yield RecordSetChange.forDelete(rs, zoneA, auth) + }.sortBy(_.id) + + private val recordSetChangesD = { + for { + rs <- recordSetA + updateRs <- updateRecordSetA + } yield RecordSetChange.forUpdate(rs, updateRs, zoneA) + }.sortBy(_.id) + + private val changeSetA = ChangeSet(recordSetChangesA) + private val changeSetB = record.ChangeSet(recordSetChangesB) + private val changeSetC = + record.ChangeSet(recordSetChangesC).copy(status = ChangeSetStatus.Applied) + private val changeSetD = record + .ChangeSet(recordSetChangesD) + .copy(createdTimestamp = changeSetA.createdTimestamp + 1000) // make sure D is created AFTER A + private val changeSets = List(changeSetA, changeSetB, changeSetC, changeSetD) + + //This zone is to test listing record changes in correct order + private val zoneC = zone.Zone( + s"live-test-$user.record-changes.", + "test@test.com", + status = ZoneStatus.Active, + 
connection = testConnection) + private val baseTime = DateTime.now() + private val timeOrder = List( + baseTime.minusSeconds(8000), + baseTime.minusSeconds(7000), + baseTime.minusSeconds(6000), + baseTime.minusSeconds(5000), + baseTime.minusSeconds(4000), + baseTime.minusSeconds(3000), + baseTime.minusSeconds(2000), + baseTime.minusSeconds(1000), + baseTime + ) + + private val recordSetsC = + for { + rsTemplate <- Seq(rsOk, aaaa, cname) + } yield + rsTemplate.copy( + zoneId = zoneC.id, + name = s"${rsTemplate.typ.toString}-${zoneC.account}.", + ttl = 100, + id = UUID.randomUUID().toString + ) + + private val updateRecordSetsC = + for { + rsTemplate <- Seq(rsOk, aaaa, cname) + } yield + rsTemplate.copy( + zoneId = zoneC.id, + name = s"${rsTemplate.typ.toString}-${zoneC.account}.", + ttl = 1000, + id = UUID.randomUUID().toString + ) + + private val recordSetChangesCreateC = { + for { + (rs, index) <- recordSetsC.zipWithIndex + } yield RecordSetChange.forAdd(rs, zoneC, auth).copy(created = timeOrder(index)) + } + + private val recordSetChangesUpdateC = { + for { + (rs, index) <- recordSetsC.zipWithIndex + } yield + RecordSetChange + .forUpdate(rs, updateRecordSetsC(index), zoneC) + .copy(created = timeOrder(index + 3)) + } + + private val recordSetChangesDeleteC = { + for { + (rs, index) <- recordSetsC.zipWithIndex + } yield RecordSetChange.forDelete(rs, zoneC, auth).copy(created = timeOrder(index + 6)) + } + + private val changeSetCreateC = record.ChangeSet(recordSetChangesCreateC) + private val changeSetUpdateC = record.ChangeSet(recordSetChangesUpdateC) + private val changeSetDeleteC = record.ChangeSet(recordSetChangesDeleteC) + private val changeSetsC = List(changeSetCreateC, changeSetUpdateC, changeSetDeleteC) + private val recordSetChanges: List[RecordSetChange] = + (recordSetChangesCreateC ++ recordSetChangesUpdateC ++ recordSetChangesDeleteC) + .sortBy(_.created.getMillis) + .toList + .reverse // Changes are retrieved by time stamp in decending order + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBRecordChangeRepository(tableConfig, dynamoDBHelper) + + var notReady = true + while (notReady) { + val result = Await.ready(repo.getRecordSetChange("any", "any"), 5.seconds) + notReady = result.value.get.isFailure + } + + // Clear the table just in case there is some lagging test data + clearTable() + + changeSets.foreach { changeSet => + // Save the change set + val savedChangeSet = repo.save(changeSet) + + // Wait until all of the change sets are saved + Await.result(savedChangeSet, 5.minutes) + } + + changeSetsC.foreach { changeSet => + // Save the change set + val savedChangeSet = repo.save(changeSet) + + // Wait until all of the change sets are saved + Await.result(savedChangeSet, 5.minutes) + } + } + + def tearDown(): Unit = + clearTable() + + private def clearTable(): Unit = { + + import scala.collection.JavaConverters._ + + // clear the table that we work with here + // NOTE: This is brute force and could be cleaner + val scanRequest = new ScanRequest() + .withTableName(recordChangeTable) + + val result = + dynamoClient.scan(scanRequest).getItems.asScala.map(_.get(repo.RECORD_SET_CHANGE_ID).getS()) + + result.foreach(deleteItem) + } + + private def deleteItem(recordSetChangeId: String): Unit = { + val key = new util.HashMap[String, AttributeValue]() + key.put(repo.RECORD_SET_CHANGE_ID, new AttributeValue(recordSetChangeId)) + val request = new DeleteItemRequest().withTableName(recordChangeTable).withKey(key) + 
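// Issue the delete directly against the DynamoDB client; any failure is rethrown below as an UnexpectedDynamoResponseException.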
try { + dynamoClient.deleteItem(request) + } catch { + case ex: Throwable => + throw new UnexpectedDynamoResponseException(ex.getMessage, ex) + } + } + + "DynamoDBRepository" should { + "get a record change set by id" in { + val testRecordSetChange = pendingCreateAAAA.copy(id = genString) + + val f = + for { + saved <- repo.save(ChangeSet(Seq(testRecordSetChange))) + retrieved <- repo.getRecordSetChange(saved.zoneId, testRecordSetChange.id) + } yield retrieved + + whenReady(f, timeout) { result => + result shouldBe Some(testRecordSetChange) + } + } + + "get changes by zone id" in { + val f = repo.getChanges(zoneA.id) + whenReady(f, timeout) { result => + val sortedResults = result.map { changeSet => + changeSet.copy(changes = changeSet.changes.sortBy(_.id)) + } + sortedResults.size shouldBe 3 + sortedResults should contain(changeSetA) + sortedResults should contain(changeSetC) + sortedResults should contain(changeSetD) + } + } + + "get pending changes by zone id are sorted by earliest created timestamp" in { + val f = repo.getPendingChangeSets(zoneA.id) + whenReady(f, timeout) { result => + val sortedResults = result.map { changeSet => + changeSet.copy(changes = changeSet.changes.sortBy(_.id)) + } + sortedResults.size shouldBe 2 + sortedResults should contain(changeSetA) + sortedResults should contain(changeSetD) + sortedResults should not contain changeSetC + result.head.id should equal(changeSetA.id) + result(1).id should equal(changeSetD.id) + } + } + + "list all record set changes in zone C" in { + eventually { + val testFuture = repo.listRecordSetChanges(zoneC.id) + whenReady(testFuture, timeout) { result => + result.items shouldBe recordSetChanges + } + } + } + + "list record set changes with a page size of one" in { + val testFuture = repo.listRecordSetChanges(zoneC.id, maxItems = 1) + whenReady(testFuture, timeout) { result => + { + result.items shouldBe recordSetChanges.take(1) + } + } + } + + "list record set changes with page size of one and reuse key to get another page with size of two" in { + val testFuture = repo.listRecordSetChanges(zoneC.id, maxItems = 1) + whenReady(testFuture, timeout) { result => + { + val key = result.nextId + val testFuture2 = repo.listRecordSetChanges(zoneC.id, startFrom = key, maxItems = 2) + whenReady(testFuture2, timeout) { result => + { + val page2 = result.items + page2 shouldBe recordSetChanges.slice(1, 3) + } + } + } + } + } + + "return an empty list and nextId of None when passing last record as start" in { + val testFuture = repo.listRecordSetChanges(zoneC.id, maxItems = 9) + whenReady(testFuture, timeout) { result => + { + val key = result.nextId + val testFuture2 = repo.listRecordSetChanges(zoneC.id, startFrom = key) + whenReady(testFuture2, timeout) { result => + { + result.nextId shouldBe None + result.items shouldBe List() + } + } + } + } + } + + "have nextId of None when exhausting record changes" in { + val testFuture = repo.listRecordSetChanges(zoneC.id, maxItems = 10) + whenReady(testFuture, timeout) { result => + result.nextId shouldBe None + } + } + + "return empty list with startFrom of zero" in { + val testFuture = repo.listRecordSetChanges(zoneC.id, startFrom = Some("0")) + whenReady(testFuture, timeout) { result => + { + result.nextId shouldBe None + result.items shouldBe List() + } + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordSetRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordSetRepositoryIntegrationSpec.scala new 
file mode 100644 index 000000000..951be6ca2 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBRecordSetRepositoryIntegrationSpec.scala @@ -0,0 +1,488 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.repository.dynamodb + +import java.util.UUID + +import com.amazonaws.services.dynamodbv2.model.{ScanRequest, ScanResult} +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.time.{Seconds, Span} +import vinyldns.api.domain.membership.User +import vinyldns.api.domain.record +import vinyldns.api.domain.record.{ChangeSet, ListRecordSetResults, RecordSet, RecordSetChange} +import vinyldns.api.domain.zone.{Zone, ZoneStatus} + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class DynamoDBRecordSetRepositoryIntegrationSpec + extends DynamoDBIntegrationSpec + with DynamoDBRecordSetConversions { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + private val recordSetTable = "record-sets-live" + private[repository] val recordSetTableName: String = recordSetTable + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$recordSetTable" + | provisionedReads=50 + | provisionedWrites=50 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + import dynamoDBHelper._ + + private val repo = new DynamoDBRecordSetRepository(tableConfig, dynamoDBHelper) + + private val users = for (i <- 1 to 3) + yield User(s"live-test-acct$i", "key", "secret") + + private val zones = + for { + acct <- users + i <- 1 to 3 + } yield + Zone( + s"live-test-${acct.userName}.zone$i.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection) + + private val rsTemplates = Seq(rsOk, aaaa, cname) + + private val rsQualifiedStatus = Seq("-dotless", "-dotted.") + + private val recordSets = + for { + zone <- zones + rsTemplate <- rsTemplates + rsQualifiedStatus <- rsQualifiedStatus + } yield + rsTemplate.copy( + zoneId = zone.id, + name = s"${rsTemplate.typ.toString}-${zone.account}$rsQualifiedStatus", + ttl = 100, + created = DateTime.now(), + id = UUID.randomUUID().toString + ) + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready( + repo.listRecordSets( + zoneId = "any", + startFrom = None, + maxItems = None, + recordNameFilter = None), + 5.seconds) + notReady = result.value.get.isFailure + Thread.sleep(1000) + } + + // Clear the zone just in case there is some lagging test data + clearTable() + + // Create all the zones + val savedRecordSets = Future.sequence(recordSets.map(repo.putRecordSet)) + + // Wait until all of the zones are done + 
Await.result(savedRecordSets, 5.minutes) + } + + def tearDown(): Unit = + clearTable() + + private def clearTable(): Unit = { + + import scala.collection.JavaConverters._ + + // clear all the zones from the table that we work with here + val scanRequest = new ScanRequest().withTableName(recordSetTable) + + val scanResult = dynamoClient.scan(scanRequest) + + var counter = 0 + + def delete(r: ScanResult) { + val result = r.getItems.asScala.grouped(25) + + // recurse over the results of the scan, convert each group to a BatchWriteItem with Deletes, and then delete + // using a blocking call + result.foreach { group => + val recordSetIds = group.map(_.get(DynamoDBRecordSetRepository.RECORD_SET_ID).getS) + val deletes = recordSetIds.map(deleteRecordSetFromTable) + val batchDelete = toBatchWriteItemRequest(deletes, recordSetTable) + + dynamoClient.batchWriteItem(batchDelete) + + counter = counter + 25 + } + + if (r.getLastEvaluatedKey != null && !r.getLastEvaluatedKey.isEmpty) { + val nextScan = new ScanRequest().withTableName(recordSetTable) + nextScan.setExclusiveStartKey(scanResult.getLastEvaluatedKey) + val nextScanResult = dynamoClient.scan(scanRequest) + delete(nextScanResult) + } + } + + delete(scanResult) + } + + "DynamoDBRepository" should { + "get a record set by id" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = None, + recordNameFilter = None) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet.recordSets should contain(testRecordSet) + } + } + + "get a record set count" in { + val testRecordSet = recordSets.head + val expected = 6 + val testFuture = repo.getRecordSetCount(testRecordSet.zoneId) + whenReady(testFuture, timeout) { foundRecordSetCount => + foundRecordSetCount shouldBe expected + } + } + + "get a record set by record set id and zone id" in { + val testRecordSet = recordSets.head + val testFuture = repo.getRecordSet(testRecordSet.zoneId, testRecordSet.id) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe Some(testRecordSet) + } + } + + "get a record set by zone id, name, type" in { + val testRecordSet = recordSets.head + val testFuture = + repo.getRecordSets(testRecordSet.zoneId, testRecordSet.name, testRecordSet.typ) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a record set by zone id, case-insensitive name, type" in { + val testRecordSet = recordSets.head + val testFuture = repo.getRecordSets( + testRecordSet.zoneId, + testRecordSet.name.toUpperCase(), + testRecordSet.typ) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a fully qualified record set by zone id, trailing dot-insensitive name, type" in { + val testRecordSet = recordSets.find(_.name.endsWith(".")).get + val testFuture = + repo.getRecordSets(testRecordSet.zoneId, testRecordSet.name.dropRight(1), testRecordSet.typ) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a relative record set by zone id, trailing dot-insensitive name, type" in { + val testRecordSet = recordSets.find(_.name.endsWith("dotless")).get + val testFuture = + repo.getRecordSets(testRecordSet.zoneId, testRecordSet.name.concat("."), testRecordSet.typ) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a record set by zone id, name" 
in { + val testRecordSet = recordSets.head + val testFuture = repo.getRecordSetsByName(testRecordSet.zoneId, testRecordSet.name) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a record set by zone id, case-insensitive name" in { + val testRecordSet = recordSets.head + val testFuture = + repo.getRecordSetsByName(testRecordSet.zoneId, testRecordSet.name.toUpperCase()) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a fully qualified record set by zone id, trailing dot-insensitive name" in { + val testRecordSet = recordSets.find(_.name.endsWith(".")).get + val testFuture = + repo.getRecordSetsByName(testRecordSet.zoneId, testRecordSet.name.dropRight(1)) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "get a relative record set by zone id, trailing dot-insensitive name" in { + val testRecordSet = recordSets.find(_.name.endsWith("dotless")).get + val testFuture = + repo.getRecordSetsByName(testRecordSet.zoneId, testRecordSet.name.concat(".")) + whenReady(testFuture, timeout) { foundRecordSet => + foundRecordSet shouldBe List(testRecordSet) + } + } + + "list record sets with page size of 1 returns recordSets[0] only" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = Some(1), + recordNameFilter = None) + whenReady(testFuture, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets shouldNot contain(recordSets(1)) + foundRecordSet.nextId.get.split('~')(2) shouldBe recordSets(0).id + } + } + } + + "list record sets with page size of 1 reusing key with page size of 1 returns recordSets[0] and recordSets[1]" in { + val testRecordSet = recordSets.head + val testFutureOne = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = Some(1), + recordNameFilter = None) + whenReady(testFutureOne, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets shouldNot contain(recordSets(1)) + val key = foundRecordSet.nextId + val testFutureTwo = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = key, + maxItems = Some(1), + recordNameFilter = None) + whenReady(testFutureTwo, timeout) { foundRecordSet => + { + foundRecordSet.recordSets shouldNot contain(recordSets(0)) + foundRecordSet.recordSets should contain(recordSets(1)) + foundRecordSet.recordSets shouldNot contain(recordSets(2)) + foundRecordSet.nextId.get.split('~')(2) shouldBe recordSets(1).id + } + } + } + } + } + + "list record sets page size of 1 then reusing key with page size of 2 returns recordSets[0], recordSets[1,2]" in { + val testRecordSet = recordSets.head + val testFutureOne = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = Some(1), + recordNameFilter = None) + whenReady(testFutureOne, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets shouldNot contain(recordSets(1)) + val key = foundRecordSet.nextId + val testFutureTwo = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = key, + maxItems = Some(2), + recordNameFilter = None) + whenReady(testFutureTwo, timeout) { foundRecordSet => + { + foundRecordSet.recordSets shouldNot contain(recordSets(0)) + 
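// Page two resumes from the key returned by page one, so it must skip recordSets(0) and contain the next two entries.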
foundRecordSet.recordSets should contain(recordSets(1)) + foundRecordSet.recordSets should contain(recordSets(2)) + foundRecordSet.nextId.get.split('~')(2) shouldBe recordSets(2).id + } + } + } + } + } + + "return an empty list and nextId of None when passing last record as start" in { + val testRecordSet = recordSets.head + val testFutureOne = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = Some(6), + recordNameFilter = None) + whenReady(testFutureOne, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets should contain(recordSets(1)) + foundRecordSet.recordSets should contain(recordSets(2)) + foundRecordSet.recordSets should contain(recordSets(3)) + foundRecordSet.recordSets should contain(recordSets(4)) + foundRecordSet.recordSets should contain(recordSets(5)) + val key = foundRecordSet.nextId + val testFutureTwo = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = key, + maxItems = Some(6), + recordNameFilter = None) + whenReady(testFutureTwo, timeout) { foundRecordSet => + { + foundRecordSet.recordSets shouldBe List() + foundRecordSet.nextId shouldBe None + } + } + } + } + } + + "have nextId of None when exhausting recordSets" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = Some(7), + recordNameFilter = None) + whenReady(testFuture, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets should contain(recordSets(1)) + foundRecordSet.recordSets should contain(recordSets(2)) + foundRecordSet.recordSets should contain(recordSets(3)) + foundRecordSet.recordSets should contain(recordSets(4)) + foundRecordSet.recordSets should contain(recordSets(5)) + foundRecordSet.nextId shouldBe None + } + } + } + + "only retrieve recordSet with name containing 'AAAA'" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = None, + recordNameFilter = Some("AAAA")) + whenReady(testFuture, timeout) { foundRecordSet => + { + foundRecordSet.recordSets shouldNot contain(recordSets(0)) + foundRecordSet.recordSets shouldNot contain(recordSets(1)) + foundRecordSet.recordSets should contain(recordSets(2)) + foundRecordSet.recordSets should contain(recordSets(3)) + } + } + } + + "retrieve all recordSets with names containing 'A'" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = None, + recordNameFilter = Some("A")) + whenReady(testFuture, timeout) { foundRecordSet => + { + foundRecordSet.recordSets should contain(recordSets(0)) + foundRecordSet.recordSets should contain(recordSets(1)) + foundRecordSet.recordSets should contain(recordSets(2)) + foundRecordSet.recordSets should contain(recordSets(3)) + foundRecordSet.recordSets should contain(recordSets(4)) + foundRecordSet.recordSets should contain(recordSets(5)) + } + } + } + + "return an empty list if recordName filter had no match" in { + val testRecordSet = recordSets.head + val testFuture = repo.listRecordSets( + zoneId = testRecordSet.zoneId, + startFrom = None, + maxItems = None, + recordNameFilter = Some("Dummy")) + whenReady(testFuture, timeout) { foundRecordSet => + { + foundRecordSet.recordSets shouldBe List() + } + } + } + + "apply a change set" in { + val newRecordSets = + for { + i 
<- 1 to 1000 + } yield + aaaa.copy( + zoneId = "big-apply-zone", + name = s"$i.apply.test.", + id = UUID.randomUUID().toString) + + val pendingChanges = newRecordSets.map(RecordSetChange.forAdd(_, zones.head, okAuth)) + val bigPendingChangeSet = ChangeSet(pendingChanges) + + try { + val f = repo.apply(bigPendingChangeSet) + Await.result(f, 1500.seconds) + + // let's fail half of them + val split = pendingChanges.grouped(pendingChanges.length / 2).toSeq + val halfSuccess = split.head.map(_.successful) + val halfFailed = split(1).map(_.failed()) + val halfFailedChangeSet = record.ChangeSet(halfSuccess ++ halfFailed) + + val nextUp = repo.apply(halfFailedChangeSet) + Await.result(nextUp, 1500.seconds) + + // let's run our query and see how long until we succeed(which will determine + // how long it takes DYNAMO to update its index) + var querySuccessful = false + var retries = 1 + var recordSetsResult: List[RecordSet] = Nil + while (!querySuccessful && retries <= 10) { + // if we query now, we should get half that failed + val rsQuery = repo.listRecordSets( + zoneId = "big-apply-zone", + startFrom = None, + maxItems = None, + recordNameFilter = None) + recordSetsResult = Await.result[ListRecordSetResults](rsQuery, 30.seconds).recordSets + querySuccessful = recordSetsResult.length == halfSuccess.length + retries += 1 + Thread.sleep(100) + } + + querySuccessful shouldBe true + + // the result of the query should be the same as those pending that succeeded + val expected = halfSuccess.map(_.recordSet) + recordSetsResult should contain theSameElementsAs expected + } catch { + case e: Throwable => + e.printStackTrace() + fail("encountered error running apply test") + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBUserRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBUserRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..2f9dedbda --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBUserRepositoryIntegrationSpec.scala @@ -0,0 +1,189 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.repository.dynamodb + +import com.amazonaws.services.dynamodbv2.model.DeleteTableRequest +import com.typesafe.config.ConfigFactory +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.time.{Seconds, Span} +import vinyldns.api.domain.membership.User + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} + +class DynamoDBUserRepositoryIntegrationSpec extends DynamoDBIntegrationSpec { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + + private val userTable = "users-live" + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$userTable" + | provisionedReads=100 + | provisionedWrites=100 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBUserRepository = _ + + private val testUserIds = (for { i <- 0 to 100 } yield s"test-user-$i").toList.sorted + private val users = testUserIds.map { id => + User(id = id, userName = "name" + id, accessKey = s"abc$id", secretKey = "123") + } + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBUserRepository(tableConfig, dynamoDBHelper) + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready(repo.getUser("any"), 5.seconds) + notReady = result.value.get.isFailure + Thread.sleep(2000) + } + + // Create all the items + val results = Future.sequence(users.map(repo.save(_))) + + // Wait until all of the data is stored + Await.result(results, 5.minutes) + } + + def tearDown(): Unit = { + val request = new DeleteTableRequest().withTableName(userTable) + val deleteTables = dynamoDBHelper.deleteTable(request) + Await.ready(deleteTables, 100.seconds) + } + + "DynamoDBUserRepository" should { + "retrieve a user" in { + val f = repo.getUser(testUserIds.head) + + whenReady(f, timeout) { retrieved => + retrieved shouldBe Some(users.head) + } + } + "returns None when the user does not exist" in { + val f = repo.getUser("does not exists") + + whenReady(f, timeout) { retrieved => + retrieved shouldBe None + } + } + "getUsers omits all non existing users" in { + val getUsers = + for { + result <- repo.getUsers(Set("notFound", testUserIds.head), None, Some(100)) + } yield result + whenReady(getUsers, timeout) { result => + result.users.map(_.id) should contain theSameElementsAs Set(testUserIds.head) + result.users.map(_.id) should not contain "notFound" + } + } + "returns all the users" in { + val f = repo.getUsers(testUserIds.toSet, None, None) + + whenReady(f, timeout) { retrieved => + retrieved.users should contain theSameElementsAs users + retrieved.lastEvaluatedId shouldBe None + } + } + "only return requested users" in { + val evenUsers = users.filter(_.id.takeRight(1).toInt % 2 == 0) + val f = repo.getUsers(evenUsers.map(_.id).toSet, None, None) + + whenReady(f, timeout) { retrieved => + retrieved.users should contain theSameElementsAs evenUsers + retrieved.lastEvaluatedId shouldBe None + } + } + "start at the exclusive start key" in { + val f = repo.getUsers(testUserIds.toSet, Some(testUserIds(5)), None) + + whenReady(f, timeout) { retrieved => + retrieved.users should not contain users(5) //start key is exclusive + retrieved.users should contain theSameElementsAs users.slice(6, users.length) + retrieved.lastEvaluatedId shouldBe None + } + } + "only return the number of items equal to 
the limit" in { + val f = repo.getUsers(testUserIds.toSet, None, Some(5)) + + whenReady(f, timeout) { retrieved => + retrieved.users.size shouldBe 5 + retrieved.users should contain theSameElementsAs users.take(5) + } + } + "returns the correct lastEvaluatedKey" in { + val f = repo.getUsers(testUserIds.toSet, None, Some(5)) + + whenReady(f, timeout) { retrieved => + retrieved.lastEvaluatedId shouldBe Some(users(4).id) // base 0 + retrieved.users should contain theSameElementsAs users.take(5) + } + } + "return the user if the matching access key" in { + val f = repo.getUserByAccessKey(users.head.accessKey) + + whenReady(f, timeout) { retrieved => + retrieved shouldBe Some(users.head) + } + } + "returns None not user has a matching access key" in { + val f = repo.getUserByAccessKey("does not exists") + + whenReady(f, timeout) { retrieved => + retrieved shouldBe None + } + } + "returns the super user flag when true" in { + val testUser = User( + userName = "testSuper", + accessKey = "testSuper", + secretKey = "testUser", + isSuper = true) + + val f = + for { + saved <- repo.save(testUser) + result <- repo.getUser(saved.id) + } yield result + + whenReady(f, timeout) { saved => + saved shouldBe Some(testUser) + saved.get.isSuper shouldBe true + } + } + "returns the super user flag when false" in { + val testUser = User(userName = "testSuper", accessKey = "testSuper", secretKey = "testUser") + + val f = + for { + saved <- repo.save(testUser) + result <- repo.getUser(saved.id) + } yield result + + whenReady(f, timeout) { saved => + saved shouldBe Some(testUser) + saved.get.isSuper shouldBe false + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBZoneChangeRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBZoneChangeRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..9b45013e1 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/dynamodb/DynamoDBZoneChangeRepositoryIntegrationSpec.scala @@ -0,0 +1,225 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.repository.dynamodb + +import java.util + +import com.amazonaws.services.dynamodbv2.model.{AttributeValue, DeleteItemRequest, ScanRequest} +import com.typesafe.config.ConfigFactory +import org.joda.time.DateTime +import org.scalatest.concurrent.PatienceConfiguration +import org.scalatest.time.{Seconds, Span} +import vinyldns.api.domain.membership.User +import vinyldns.api.domain.zone._ + +import scala.concurrent.duration._ +import scala.concurrent.{Await, ExecutionContext, Future} +import scala.util.Random + +class DynamoDBZoneChangeRepositoryIntegrationSpec extends DynamoDBIntegrationSpec { + + private implicit val executionContext: ExecutionContext = scala.concurrent.ExecutionContext.global + + private val zoneChangeTable = "zone-changes-live" + + private val tableConfig = ConfigFactory.parseString(s""" + | dynamo { + | tableName = "$zoneChangeTable" + | provisionedReads=30 + | provisionedWrites=30 + | } + """.stripMargin).withFallback(ConfigFactory.load()) + + private var repo: DynamoDBZoneChangeRepository = _ + + private val goodUser = User(s"live-test-acct", "key", "secret") + + private val okZones = for { i <- 1 to 3 } yield + Zone( + s"${goodUser.userName}.zone$i.", + "test@test.com", + status = ZoneStatus.Active, + connection = testConnection) + + private val zones = okZones + + private val statuses = { + import vinyldns.api.domain.zone.ZoneChangeStatus._ + Pending :: Complete :: Failed :: Synced :: Nil + } + private val changes = for { zone <- zones; status <- statuses } yield + ZoneChange( + zone, + zone.account, + ZoneChangeType.Update, + status, + created = now.minusSeconds(Random.nextInt(1000))) + + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + def setup(): Unit = { + repo = new DynamoDBZoneChangeRepository(tableConfig, dynamoDBHelper) + + // wait until the repo is ready, could take time if the table has to be created + var notReady = true + while (notReady) { + val result = Await.ready(repo.listZoneChanges("any"), 5.seconds) + notReady = result.value.get.isFailure + } + + // Clear the zone just in case there is some lagging test data + clearChanges() + + // Create all the zones + val savedChanges = Future.sequence(changes.map(repo.save)) + + // Wait until all of the zones are done + Await.result(savedChanges, 5.minutes) + } + + def tearDown(): Unit = + clearChanges() + + private def clearChanges(): Unit = { + + import scala.collection.JavaConverters._ + + // clear all the zones from the table that we work with here + // NOTE: This is brute force and could be cleaner + val scanRequest = new ScanRequest() + .withTableName(zoneChangeTable) + + val result = dynamoClient + .scan(scanRequest) + .getItems + .asScala + .map(i => (i.get("zone_id").getS, i.get("change_id").getS)) + + result.foreach(Function.tupled(deleteZoneChange)) + } + + private def deleteZoneChange(zoneId: String, changeId: String): Unit = { + val key = new util.HashMap[String, AttributeValue]() + key.put("zone_id", new AttributeValue(zoneId)) + key.put("change_id", new AttributeValue(changeId)) + val request = new DeleteItemRequest().withTableName(zoneChangeTable).withKey(key) + try { + dynamoClient.deleteItem(request) + } catch { + case ex: Throwable => + throw new UnexpectedDynamoResponseException(ex.getMessage, ex) + } + } + + "DynamoDBRepository" should { + + implicit def dateTimeOrdering: Ordering[DateTime] = Ordering.fromLessThan(_.isAfter(_)) + + "get all changes for a zone" in { + val testFuture = repo.listZoneChanges(okZones(1).id) + 
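// The implicit ordering above compares DateTimes newest-first, so sortBy(_.created) below yields changes in descending creation order, which is the order this test expects listZoneChanges to return.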
whenReady(testFuture, timeout) { retrieved => + val expectedChanges = changes.filter(_.zoneId == okZones(1).id).sortBy(_.created) + retrieved.items should equal(expectedChanges) + } + } + + "get pending and complete changes for a zone" in { + val testFuture = repo.getPending(okZones(1).id) + whenReady(testFuture, timeout) { retrieved => + val expectedChangeIds = changes + .filter(c => + c.zoneId == okZones(1).id + && (c.status == ZoneChangeStatus.Pending || c.status == ZoneChangeStatus.Complete)) + .map(_.id) + .toSet + + retrieved.map(_.id).toSet should contain theSameElementsAs expectedChangeIds + retrieved.sortBy(_.created.getMillis) should equal( + changes + .filter(c => + c.zoneId == okZones(1).id && + (c.status == ZoneChangeStatus.Pending || c.status == ZoneChangeStatus.Complete)) + .sortBy(_.created.getMillis)) + } + } + + "get zone changes with a page size of one" in { + val testFuture = repo.listZoneChanges(zoneId = okZones(1).id, startFrom = None, maxItems = 1) + whenReady(testFuture, timeout) { retrieved => + { + val result = retrieved.items + val expectedChanges = changes.filter(_.zoneId == okZones(1).id) + result.size shouldBe 1 + expectedChanges should contain(result.head) + } + } + } + + "get zone changes with page size of one and reuse key to get another page with size of two" in { + val testFuture = repo.listZoneChanges(zoneId = okZones(1).id, startFrom = None, maxItems = 1) + whenReady(testFuture, timeout) { retrieved => + { + val result1 = retrieved.items.map(_.id).toSet + val key = retrieved.nextId + val testFuture2 = + repo.listZoneChanges(zoneId = okZones(1).id, startFrom = key, maxItems = 2) + whenReady(testFuture2, timeout) { retrieved => + { + val result2 = retrieved.items + val expectedChanges = + changes.filter(_.zoneId == okZones(1).id).sortBy(_.created).slice(1, 3) + + result2.size shouldBe 2 + result2 should equal(expectedChanges) + result2 shouldNot contain(result1.head) + } + } + } + } + } + + "return an empty list and nextId of None when passing last record as start" in { + val testFuture = repo.listZoneChanges(zoneId = okZones(1).id, startFrom = None, maxItems = 4) + whenReady(testFuture, timeout) { retrieved => + { + val key = retrieved.nextId + val testFuture2 = repo.listZoneChanges(zoneId = okZones(1).id, startFrom = key) + whenReady(testFuture2, timeout) { retrieved => + { + val result2 = retrieved.items + result2 shouldBe List() + retrieved.nextId shouldBe None + } + } + } + } + } + + "have nextId of None when exhausting record changes" in { + val testFuture = repo.listZoneChanges(zoneId = okZones(1).id, startFrom = None, maxItems = 10) + whenReady(testFuture, timeout) { retrieved => + { + val result = retrieved.items + val expectedChanges = changes.filter(_.zoneId == okZones(1).id).sortBy(_.created) + result.size shouldBe 4 + result should equal(expectedChanges) + retrieved.nextId shouldBe None + } + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcBatchChangeRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcBatchChangeRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..4f939fee9 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcBatchChangeRepositoryIntegrationSpec.scala @@ -0,0 +1,500 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.repository.mysql + +import java.util.UUID + +import org.joda.time.DateTime +import org.scalatest._ +import org.scalatest.concurrent.{PatienceConfiguration, ScalaFutures} +import org.scalatest.time.{Seconds, Span} +import scalikejdbc.DB +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.batch._ +import vinyldns.api.domain.dns.DnsConversions +import vinyldns.api.domain.record.{AAAAData, AData} +import vinyldns.api.{GroupTestData, ResultHelpers, VinylDNSTestData} + +import scala.concurrent.{ExecutionContext, Future} + +class JdbcBatchChangeRepositoryIntegrationSpec + extends WordSpec + with BeforeAndAfterAll + with DnsConversions + with VinylDNSTestData + with GroupTestData + with ResultHelpers + with BeforeAndAfterEach + with Matchers + with ScalaFutures + with Inspectors + with OptionValues { + + private implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.global + private var repo: JdbcBatchChangeRepository = _ + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + import SingleChangeStatus._ + import vinyldns.api.domain.record.RecordType._ + + object TestData { + + val okAuth: AuthPrincipal = okGroupAuth + val notAuth: AuthPrincipal = dummyUserAuth + + val zoneID: String = "someZoneId" + val zoneName: String = "somezone.com." + + val sc1: SingleAddChange = + SingleAddChange( + zoneID, + zoneName, + "test", + "test.somezone.com.", + A, + 3600, + AData("1.2.3.4"), + Pending, + None, + None, + None) + + val sc2: SingleAddChange = + SingleAddChange( + zoneID, + zoneName, + "test", + "test.somezone.com.", + A, + 3600, + AData("1.2.3.40"), + Pending, + None, + None, + None) + + val sc3: SingleAddChange = + SingleAddChange( + zoneID, + zoneName, + "test", + "test.somezone.com.", + AAAA, + 300, + AAAAData("2001:558:feed:beef:0:0:0:1"), + Pending, + None, + None, + None) + + val deleteChange: SingleDeleteChange = + SingleDeleteChange( + zoneID, + zoneName, + "delete", + "delete.somezone.com.", + A, + Pending, + None, + None, + None) + + def randomBatchChange: BatchChange = BatchChange( + okAuth.userId, + okAuth.signedInUser.userName, + Some("description"), + DateTime.now, + List( + sc1.copy(id = UUID.randomUUID().toString), + sc2.copy(id = UUID.randomUUID().toString), + sc3.copy(id = UUID.randomUUID().toString), + deleteChange.copy(id = UUID.randomUUID().toString) + ) + ) + + val bcARecords: BatchChange = randomBatchChange + + def randomBatchChangeWithList(singlechanges: List[SingleChange]): BatchChange = + bcARecords.copy(id = UUID.randomUUID().toString, changes = singlechanges) + + val pendingBatchChange: BatchChange = randomBatchChange.copy(createdTimestamp = DateTime.now) + + val completeBatchChange: BatchChange = randomBatchChangeWithList( + randomBatchChange.changes.map(_.complete("recordChangeId", "recordSetId"))) + .copy(createdTimestamp = DateTime.now.plusMillis(1000)) + + val failedBatchChange: BatchChange = + randomBatchChangeWithList(randomBatchChange.changes.map(_.withFailureMessage("failed"))) + .copy(createdTimestamp = DateTime.now.plusMillis(100000)) + + val 
partialFailureBatchChange: BatchChange = randomBatchChangeWithList(
+ randomBatchChange.changes.take(2).map(_.complete("recordChangeId", "recordSetId"))
+ ++ randomBatchChange.changes.drop(2).map(_.withFailureMessage("failed"))
+ ).copy(createdTimestamp = DateTime.now.plusMillis(1000000))
+ }
+
+ import TestData._
+
+ override protected def beforeAll(): Unit =
+ repo = VinylDNSJDBC.instance.batchChangeRepository
+
+ override protected def beforeEach(): Unit =
+ DB.localTx { s =>
+ s.executeUpdate("DELETE FROM batch_change")
+ s.executeUpdate("DELETE FROM single_change")
+ }
+
+ private def areSame(a: Option[BatchChange], e: Option[BatchChange]): Assertion = {
+ a shouldBe defined
+ e shouldBe defined
+
+ val actual = a.get
+ val expected = e.get
+
+ areSame(actual, expected)
+ }
+
+ /* have to account for the database using a different time granularity than the JVM DateTime */
+ private def areSame(actual: BatchChange, expected: BatchChange): Assertion = {
+ (actual.changes should contain).theSameElementsInOrderAs(expected.changes)
+ actual.comments shouldBe expected.comments
+ actual.id shouldBe expected.id
+ actual.status shouldBe expected.status
+ actual.userId shouldBe expected.userId
+ actual.userName shouldBe expected.userName
+ actual.createdTimestamp.getMillis shouldBe expected.createdTimestamp.getMillis +- 2000
+ }
+
+ private def areSame(actual: BatchChangeSummary, expected: BatchChangeSummary): Assertion = {
+ actual.comments shouldBe expected.comments
+ actual.id shouldBe expected.id
+ actual.status shouldBe expected.status
+ actual.userId shouldBe expected.userId
+ actual.userName shouldBe expected.userName
+ actual.createdTimestamp.getMillis shouldBe expected.createdTimestamp.getMillis +- 2000
+ }
+
+ private def areSame(
+ actual: BatchChangeSummaryList,
+ expected: BatchChangeSummaryList): Assertion = {
+ forAll(actual.batchChanges.zip(expected.batchChanges)) { case (a, e) => areSame(a, e) }
+ actual.batchChanges.length shouldBe expected.batchChanges.length
+ actual.startFrom shouldBe expected.startFrom
+ actual.nextId shouldBe expected.nextId
+ actual.maxItems shouldBe expected.maxItems
+ }
+
+ "JdbcBatchChangeRepository" should {
+ "save batch changes and single changes" in {
+ val f = repo.save(bcARecords)
+ whenReady(f, timeout) { saved =>
+ saved shouldBe bcARecords
+ }
+ }
+
+ "get a batchchange by id" in {
+ val f =
+ for {
+ _ <- repo.save(bcARecords)
+ retrieved <- repo.getBatchChange(bcARecords.id)
+ } yield retrieved
+
+ whenReady(f, timeout) { retrieved =>
+ areSame(retrieved, Some(bcARecords))
+ }
+ }
+
+ "return none if a batchchange is not found by id" in {
+ whenReady(repo.getBatchChange("doesnotexist"), timeout) { retrieved =>
+ retrieved shouldBe empty
+ }
+ }
+
+ "get single changes by a list of ids" in {
+ val f =
+ for {
+ _ <- repo.save(bcARecords)
+ retrieved <- repo.getSingleChanges(bcARecords.changes.map(_.id))
+ } yield retrieved
+
+ whenReady(f, timeout) { retrieved =>
+ retrieved shouldBe bcARecords.changes
+ }
+ }
+
+ "not fail when getting an empty list of single changes" in {
+ val f = repo.getSingleChanges(List())
+
+ whenReady(f, timeout) { retrieved =>
+ retrieved shouldBe List()
+ }
+ }
+
+ "get single changes in the same order as the batch change" in {
+ val batchChange = randomBatchChange
+ val f =
+ for {
+ _ <- repo.save(batchChange)
+ retrieved <- repo.getBatchChange(batchChange.id)
+ singleChanges <- retrieved
+ .map { r =>
+ repo.getSingleChanges(r.changes.map(_.id).reverse)
+ }
+ .getOrElse(Future.successful[List[SingleChange]](Nil))
+ } yield
(retrieved, singleChanges) + + whenReady(f, timeout) { + case (maybeBatchChange, singleChanges) => + maybeBatchChange.value.changes shouldBe singleChanges + } + } + + "update singlechanges" in { + val batchChange = randomBatchChange + val completed = batchChange.changes.map(_.complete("aaa", "bbb")) + val f = + for { + _ <- repo.save(batchChange) + _ <- repo.updateSingleChanges(completed) + retrieved <- repo.getSingleChanges(completed.map(_.id)) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe completed + } + } + + "not fail on empty update singlechanges" in { + val f = repo.updateSingleChanges(List()) + + whenReady(f, timeout) { retrieved => + retrieved shouldBe List() + } + } + + "update some changes in a batch" in { + val batchChange = randomBatchChange + val completed = batchChange.changes.take(2).map(_.complete("recordChangeId", "recordSetId")) + val incomplete = batchChange.changes.drop(2) + val f = + for { + _ <- repo.save(batchChange) + _ <- repo.updateSingleChanges(completed) + retrieved <- repo.getSingleChanges(batchChange.changes.map(_.id)) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe completed ++ incomplete + } + } + + "get batchchange summary by user id" in { + val change_one = pendingBatchChange.copy(createdTimestamp = DateTime.now) + val change_two = completeBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(1000)) + val otherUserBatchChange = + randomBatchChange.copy(userId = "Other", createdTimestamp = DateTime.now.plusMillis(50000)) + val change_three = failedBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(100000)) + val change_four = + partialFailureBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(1000000)) + + val f = + for { + _ <- repo.save(change_one) + _ <- repo.save(change_two) + _ <- repo.save(change_three) + _ <- repo.save(change_four) + _ <- repo.save(otherUserBatchChange) + + retrieved <- repo.getBatchChangeSummariesByUserId(pendingBatchChange.userId) + } yield retrieved + + // from most recent descending + val expectedChanges = BatchChangeSummaryList( + List( + BatchChangeSummary(change_four), + BatchChangeSummary(change_three), + BatchChangeSummary(change_two), + BatchChangeSummary(change_one)) + ) + + whenReady(f, timeout) { retrieved => + areSame(retrieved, expectedChanges) + } + } + + "get batchchange summary by user id with maxItems" in { + val change_one = pendingBatchChange.copy(createdTimestamp = DateTime.now) + val change_two = completeBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(1000)) + val otherUserBatchChange = + randomBatchChange.copy(userId = "Other", createdTimestamp = DateTime.now.plusMillis(50000)) + val change_three = failedBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(100000)) + val change_four = + partialFailureBatchChange.copy(createdTimestamp = DateTime.now.plusMillis(1000000)) + + val f = + for { + _ <- repo.save(change_one) + _ <- repo.save(change_two) + _ <- repo.save(change_three) + _ <- repo.save(change_four) + _ <- repo.save(otherUserBatchChange) + + retrieved <- repo.getBatchChangeSummariesByUserId(pendingBatchChange.userId, maxItems = 3) + } yield retrieved + + // from most recent descending + val expectedChanges = BatchChangeSummaryList( + List( + BatchChangeSummary(change_four), + BatchChangeSummary(change_three), + BatchChangeSummary(change_two)), + None, + Some(3), + 3 + ) + + whenReady(f, timeout) { retrieved => + areSame(retrieved, expectedChanges) + } + } + + "get batchchange summary by user id 
with explicit startFrom" in { + val timeBase = DateTime.now + val change_one = pendingBatchChange.copy(createdTimestamp = timeBase) + val change_two = completeBatchChange.copy(createdTimestamp = timeBase.plus(1000)) + val otherUserBatchChange = + randomBatchChange.copy(userId = "Other", createdTimestamp = timeBase.plus(50000)) + val change_three = failedBatchChange.copy(createdTimestamp = timeBase.plus(100000)) + val change_four = partialFailureBatchChange.copy(createdTimestamp = timeBase.plus(1000000)) + + val f = + for { + _ <- repo.save(change_one) + _ <- repo.save(change_two) + _ <- repo.save(change_three) + _ <- repo.save(change_four) + _ <- repo.save(otherUserBatchChange) + + retrieved <- repo.getBatchChangeSummariesByUserId( + pendingBatchChange.userId, + startFrom = Some(1), + maxItems = 3) + } yield retrieved + + // sorted from most recent descending. startFrom uses zero-based indexing. + // Expect to get only the second batch change, change_3. + // No nextId because the maxItems (3) equals the number of batch changes the user has after the offset (3) + val expectedChanges = BatchChangeSummaryList( + List( + BatchChangeSummary(change_three), + BatchChangeSummary(change_two), + BatchChangeSummary(change_one)), + Some(1), + None, + 3 + ) + + whenReady(f, timeout) { retrieved => + areSame(retrieved, expectedChanges) + } + } + + "get batchchange summary by user id with explicit startFrom and maxItems" in { + val timeBase = DateTime.now + val change_one = pendingBatchChange.copy(createdTimestamp = timeBase) + val change_two = completeBatchChange.copy(createdTimestamp = timeBase.plus(1000)) + val otherUserBatchChange = + randomBatchChange.copy(userId = "Other", createdTimestamp = timeBase.plus(50000)) + val change_three = failedBatchChange.copy(createdTimestamp = timeBase.plus(100000)) + val change_four = partialFailureBatchChange.copy(createdTimestamp = timeBase.plus(1000000)) + + val f = + for { + _ <- repo.save(change_one) + _ <- repo.save(change_two) + _ <- repo.save(change_three) + _ <- repo.save(change_four) + _ <- repo.save(otherUserBatchChange) + + retrieved <- repo.getBatchChangeSummariesByUserId( + pendingBatchChange.userId, + startFrom = Some(1), + maxItems = 1) + } yield retrieved + + // sorted from most recent descending. startFrom uses zero-based indexing. + // Expect to get only the second batch change, change_3. + // Expect the ID of the next batch change to be 2. 
+ val expectedChanges = + BatchChangeSummaryList(List(BatchChangeSummary(change_three)), Some(1), Some(2), 1) + + whenReady(f, timeout) { retrieved => + areSame(retrieved, expectedChanges) + } + } + + "get second page of batchchange summaries by user id" in { + val timeBase = DateTime.now + val change_one = pendingBatchChange.copy(createdTimestamp = timeBase) + val change_two = completeBatchChange.copy(createdTimestamp = timeBase.plus(1000)) + val otherUserBatchChange = + randomBatchChange.copy(userId = "Other", createdTimestamp = timeBase.plus(50000)) + val change_three = failedBatchChange.copy(createdTimestamp = timeBase.plus(100000)) + val change_four = partialFailureBatchChange.copy(createdTimestamp = timeBase.plus(1000000)) + + val f = + for { + _ <- repo.save(change_one) + _ <- repo.save(change_two) + _ <- repo.save(change_three) + _ <- repo.save(change_four) + _ <- repo.save(otherUserBatchChange) + + retrieved1 <- repo.getBatchChangeSummariesByUserId( + pendingBatchChange.userId, + maxItems = 1) + retrieved2 <- repo.getBatchChangeSummariesByUserId( + pendingBatchChange.userId, + startFrom = retrieved1.nextId) + } yield (retrieved1, retrieved2) + + val expectedChanges = + BatchChangeSummaryList(List(BatchChangeSummary(change_four)), None, Some(1), 1) + + val secondPageExpectedChanges = BatchChangeSummaryList( + List( + BatchChangeSummary(change_three), + BatchChangeSummary(change_two), + BatchChangeSummary(change_one)), + Some(1), + None, + 100 + ) + + whenReady(f, timeout) { retrieved => + areSame(retrieved._1, expectedChanges) + areSame(retrieved._2, secondPageExpectedChanges) + } + } + + "return empty list if a batchchange summary is not found by user id" in { + whenReady(repo.getBatchChangeSummariesByUserId("doesnotexist"), timeout) { retrieved => + retrieved.batchChanges shouldBe empty + } + } + } +} diff --git a/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcZoneRepositoryIntegrationSpec.scala b/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcZoneRepositoryIntegrationSpec.scala new file mode 100644 index 000000000..a322e6881 --- /dev/null +++ b/modules/api/src/it/scala/vinyldns/api/repository/mysql/JdbcZoneRepositoryIntegrationSpec.scala @@ -0,0 +1,559 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.repository.mysql + +import java.util.UUID + +import org.scalatest._ +import org.scalatest.concurrent.{PatienceConfiguration, ScalaFutures} +import org.scalatest.time.{Seconds, Span} +import scalikejdbc.DB +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.dns.DnsConversions +import vinyldns.api.domain.membership.User +import vinyldns.api.domain.zone._ +import vinyldns.api.{GroupTestData, ResultHelpers, VinylDNSTestData} + +import scala.concurrent.{ExecutionContext, Future} + +class JdbcZoneRepositoryIntegrationSpec + extends WordSpec + with BeforeAndAfterAll + with DnsConversions + with VinylDNSTestData + with GroupTestData + with ResultHelpers + with BeforeAndAfterEach + with Matchers + with ScalaFutures + with Inspectors { + + private implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.global + private var repo: JdbcZoneRepository = _ + private val timeout = PatienceConfiguration.Timeout(Span(10, Seconds)) + + override protected def beforeAll(): Unit = + repo = VinylDNSJDBC.instance.zoneRepository + + override protected def beforeEach(): Unit = + DB.localTx { s => + s.executeUpdate("DELETE FROM zone") + } + + private val groups = (0 until 10) + .map(num => okGroup.copy(name = num.toString, id = UUID.randomUUID().toString)) + .toList + + // We will add the dummy acl rule to only the first zone + private val dummyAclRule = + ACLRule( + accessLevel = AccessLevel.Read, + groupId = Some(dummyGroup.id) + ) + + // generate some ACLs + private val groupAclRules = groups.map( + g => + ACLRule( + accessLevel = AccessLevel.Read, + groupId = Some(g.id) + )) + + private val userOnlyAclRule = + ACLRule( + accessLevel = AccessLevel.Read, + userId = Some(okUser.id) + ) + + // the zone acl rule will have the user rule and all of the group rules + private val testZoneAcl = ZoneACL( + rules = Set(userOnlyAclRule) ++ groupAclRules + ) + + private val testZoneAdminGroupId = "foo" + + /** + * each zone will have an admin group id that doesn't exist, but have the ACL we generated above + * The okUser therefore should have access to all of the zones + */ + private val testZones = (1 until 10).map { num => + val z = + okZone.copy( + name = num.toString + ".", + id = UUID.randomUUID().toString, + adminGroupId = testZoneAdminGroupId, + acl = testZoneAcl + ) + + // add the dummy acl rule to the first zone + if (num == 1) z.addACLRule(dummyAclRule) else z + } + + private val superUserAuth = AuthPrincipal(dummyUser.copy(isSuper = true), Seq()) + + private def testZone(name: String, adminGroupId: String = testZoneAdminGroupId) = + okZone.copy(name = name, id = UUID.randomUUID().toString, adminGroupId = adminGroupId) + + private def saveZones(zones: Seq[Zone]): Future[Unit] = + zones.foldLeft(Future.successful(())) { + case (acc, cur) => + acc.flatMap { _ => + repo.save(cur).map(_ => ()) + } + } + + "JdbcZoneRepository" should { + "return the zone when it is saved" in { + whenReady(repo.save(okZone), timeout) { retrieved => + retrieved shouldBe okZone + } + } + + "get a zone by id" in { + val f = + for { + _ <- repo.save(okZone) + retrieved <- repo.getZone(okZone.id) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe Some(okZone) + } + } + + "return none if a zone is not found by id" in { + whenReady(repo.getZone("doesnotexist"), timeout) { retrieved => + retrieved shouldBe empty + } + } + + "get a zone by name" in { + val f = + for { + _ <- repo.save(okZone) + retrieved <- repo.getZoneByName(okZone.name) + } yield 
retrieved + + whenReady(f, timeout) { retrieved => + retrieved shouldBe Some(okZone) + } + } + + "return none if a zone is not found by name" in { + whenReady(repo.getZoneByName("doesnotexist"), timeout) { retrieved => + retrieved shouldBe empty + } + } + + "get a list of zones by names" in { + val f = saveZones(testZones) + val testZonesList1 = testZones.toList.take(3) + val testZonesList2 = testZones.toList.takeRight(5) + val names1 = testZonesList1.map(zone => zone.name) + val names2 = testZonesList2.map(zone => zone.name) + + whenReady(f, timeout) { _ => + whenReady(repo.getZonesByNames(names1.toSet), timeout) { retrieved => + retrieved should contain theSameElementsAs testZonesList1 + } + whenReady(repo.getZonesByNames(names2.toSet), timeout) { retrieved => + retrieved should contain theSameElementsAs testZonesList2 + } + } + } + + "return empty list if zones are not found by names" in { + whenReady( + repo.getZonesByNames(Set("doesnotexist", "doesnotexist2", "reallydoesnotexist")), + timeout) { retrieved => + retrieved shouldBe empty + } + } + + "get a list of reverse zones by zone names filters" in { + val testZones = Seq( + testZone("0/67.345.12.in-addr.arpa."), + testZone("67.345.12.in-addr.arpa."), + testZone("anotherZone.in-addr.arpa."), + testZone("extraZone.in-addr.arpa.") + ) + + val expectedZones = List(testZones(0), testZones(1), testZones(3)) + val f = saveZones(testZones) + + whenReady(f, timeout) { _ => + whenReady(repo.getZonesByFilters(Set("67.345.12.in-addr.arpa.", "extraZone")), timeout) { + retrieved => + retrieved should contain theSameElementsAs expectedZones + } + } + } + + "get authorized zones" in { + // store all of the zones + + val f = saveZones(testZones) + + // query for all zones for the ok user, he should have access to all of the zones + val okUserAuth = AuthPrincipal( + signedInUser = okUser, + memberGroupIds = groups.map(_.id) + ) + + whenReady(f, timeout) { _ => + whenReady(repo.listZones(okUserAuth), timeout) { retrieved => + retrieved should contain theSameElementsAs testZones + } + + // dummy user only has access to one zone + whenReady(repo.listZones(dummyUserAuth), timeout) { dummyZones => + (dummyZones should contain).only(testZones.head) + } + } + } + + "get zones that are accessible by everyone" in { + + //user and group id being set to None implies EVERYONE access + val allAccess = okZone.copy( + name = "all-access.", + id = UUID.randomUUID().toString, + acl = ZoneACL( + rules = Set( + ACLRule( + accessLevel = AccessLevel.Read, + userId = None, + groupId = None + ) + ) + ) + ) + + val noAccess = okZone.copy( + name = "no-access.", + id = UUID.randomUUID().toString, + adminGroupId = testZoneAdminGroupId, + acl = ZoneACL() + ) + + val testZones = Seq(allAccess, noAccess) + + val f = + for { + saved <- saveZones(testZones) + everyoneZones <- repo.listZones(dummyUserAuth) + } yield everyoneZones + + whenReady(f, timeout) { retrieved => + (retrieved should contain).only(allAccess) + } + } + + "not return deleted zones" in { + val zoneToDelete = okZone.copy( + name = "zone-to-delete.", + id = UUID.randomUUID().toString, + acl = ZoneACL( + rules = Set( + ACLRule( + accessLevel = AccessLevel.Read, + userId = None, + groupId = None + ) + ) + ) + ) + + // save it and make sure it is saved first by immediately getting it + val f = + for { + _ <- repo.save(zoneToDelete) + retrieved <- repo.getZone(zoneToDelete.id) + } yield retrieved + + whenReady(f, timeout) { saved => + // delete the zone, set the status to Deleted + val deleted = 
saved.map(_.copy(status = ZoneStatus.Deleted)).get + val del = + for { + _ <- repo.save(deleted) + retrieved <- repo.getZone(deleted.id) + } yield retrieved + + // the result should be None + whenReady(del, timeout) { retrieved => + retrieved shouldBe empty + } + } + } + + "return an empty list of zones if the user is not authorized to any" in { + val unauthorized = AuthPrincipal( + signedInUser = User("not-authorized", "not-authorized", "not-authorized"), + memberGroupIds = Seq.empty + ) + + val f = + for { + _ <- saveZones(testZones) + zones <- repo.listZones(unauthorized) + } yield zones + + whenReady(f, timeout) { retrieved => + retrieved shouldBe empty + } + } + + "not return zones when access is revoked" in { + // ok user can access both zones, dummy can only access first zone + val zones = testZones.take(2) + val addACL = saveZones(zones) + + val okUserAuth = AuthPrincipal( + signedInUser = okUser, + memberGroupIds = groups.map(_.id) + ) + + whenReady(addACL, timeout) { _ => + whenReady(repo.listZones(okUserAuth), timeout) { retrieved => + retrieved should contain theSameElementsAs zones + } + + // dummy user only has access to first zone + whenReady(repo.listZones(dummyUserAuth), timeout) { dummyZones => + (dummyZones should contain).only(zones.head) + } + + // revoke the access for the dummy user + val revoked = zones(0).deleteACLRule(dummyAclRule) + val revokeACL = repo.save(revoked) + + whenReady(revokeACL, timeout) { _ => + // ok user can still access zones + whenReady(repo.listZones(okUserAuth), timeout) { retrieved => + val expected = Seq(revoked, zones(1)) + retrieved should contain theSameElementsAs expected + } + + // dummy user can not access the revoked zone + whenReady(repo.listZones(dummyUserAuth), timeout) { dummyZones => + dummyZones shouldBe empty + } + } + } + } + + "omit zones for groups if the user has more than 30 groups" in { + + /** + * Somewhat complex setup. We only support 30 accessors right now, or 29 groups as the max + * number of groups a user belongs to. + * + * When we query for zones, we will truncate any groups over 29. + * + * So the test setup here creates 40 groups along with 40 zones, where the group id + * is the admin group of each zone. + * + * When we query, we should only get back 29 zones (the user id is always considered as an accessor id) + */ + val groups = (1 to 40).map { num => + val groupName = "%02d".format(num) + okGroup.copy( + name = groupName, + id = UUID.randomUUID().toString + ) + } + + val zones = groups.map { group => + val zoneName = group.name + "." 
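+ // each zone "01." through "40." is owned by the group with the matching name; given the
+ // 30-accessor limit described above, only zones "01." through "29." should come back for this user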
+ okZone.copy( + name = zoneName, + id = UUID.randomUUID().toString, + adminGroupId = group.id, + acl = ZoneACL() + ) + } + + val auth = AuthPrincipal(okUser, groups.map(_.id)) + + val f = + for { + _ <- saveZones(zones) + retrieved <- repo.listZones(auth) + } yield retrieved + + whenReady(f, timeout) { retrieved => + // we should not have more than 29 zones + retrieved.length shouldBe 29 + retrieved.headOption.map(_.name) shouldBe Some("01.") + retrieved.lastOption.map(_.name) shouldBe Some("29.") + } + } + + "return all zones if the user is a super user" in { + + val f = + for { + _ <- saveZones(testZones) + retrieved <- repo.listZones(superUserAuth) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs testZones + } + } + + "apply the zone filter as a super user" in { + + val testZones = Seq( + testZone("system-test"), + testZone("system-temp"), + testZone("no-match") + ) + + val expectedZones = Seq(testZones(0), testZones(1)) + + val f = + for { + _ <- saveZones(testZones) + retrieved <- repo.listZones(superUserAuth, zoneNameFilter = Some("system")) + } yield retrieved + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs expectedZones + } + } + + "apply the zone filter as a normal user" in { + + val testZones = Seq( + testZone("system-test", adminGroupId = "foo"), + testZone("system-temp", adminGroupId = "foo"), + testZone("system-nomatch", adminGroupId = "bar") + ) + + val expectedZones = Seq(testZones(0), testZones(1)).sortBy(_.name) + + val auth = AuthPrincipal(dummyUser, Seq("foo")) + + val f = + for { + _ <- saveZones(testZones) + retrieved <- repo.listZones(auth, zoneNameFilter = Some("system")) + } yield retrieved + + whenReady(f, timeout) { retrieved => + (retrieved should contain).theSameElementsInOrderAs(expectedZones) + } + } + + "apply paging when searching as a super user" in { + // we have 10 zones in test zones, let's page through and check + val sorted = testZones.sortBy(_.name) + val expectedFirstPage = sorted.take(4) + val expectedSecondPage = sorted.drop(4).take(4) + val expectedThirdPage = sorted.drop(8).take(4) + + whenReady(saveZones(testZones), timeout) { _ => + whenReady(repo.listZones(superUserAuth, offset = None, pageSize = 4), timeout) { + firstPage => + (firstPage should contain).theSameElementsInOrderAs(expectedFirstPage) + } + + whenReady(repo.listZones(superUserAuth, offset = Some(4), pageSize = 4), timeout) { + secondPage => + (secondPage should contain).theSameElementsInOrderAs(expectedSecondPage) + } + + whenReady(repo.listZones(superUserAuth, offset = Some(8), pageSize = 4), timeout) { + thirdPage => + (thirdPage should contain).theSameElementsInOrderAs(expectedThirdPage) + } + } + } + + "apply paging when doing an authorized zone search" in { + // create 10 zones, but our user should only have access to 5 of them + val differentAdminGroupId = UUID.randomUUID().toString + + val testZones = (0 until 10).map { num => + val z = + okZone.copy( + name = num.toString + ".", + id = UUID.randomUUID().toString, + adminGroupId = testZoneAdminGroupId, + acl = ZoneACL() + ) + + // we are going to have 5 zones that havea different admin group id + if (num % 2 == 0) z.copy(adminGroupId = differentAdminGroupId) else z + } + + val sorted = testZones.sortBy(_.name) + val filtered = sorted.filter(_.adminGroupId == testZoneAdminGroupId) + val expectedFirstPage = filtered.take(2) + val expectedSecondPage = filtered.drop(2).take(2) + val expectedThirdPage = filtered.drop(4).take(2) + + // 
make sure our auth is a member of the testZoneAdminGroup + val auth = AuthPrincipal(dummyUser, Seq(testZoneAdminGroupId)) + + whenReady(saveZones(testZones), timeout) { _ => + whenReady(repo.listZones(auth, offset = None, pageSize = 2), timeout) { firstPage => + (firstPage should contain).theSameElementsInOrderAs(expectedFirstPage) + } + + whenReady(repo.listZones(auth, offset = Some(2), pageSize = 2), timeout) { secondPage => + (secondPage should contain).theSameElementsInOrderAs(expectedSecondPage) + } + + whenReady(repo.listZones(auth, offset = Some(4), pageSize = 2), timeout) { thirdPage => + (thirdPage should contain).theSameElementsInOrderAs(expectedThirdPage) + } + } + } + + "get zones by admin group" in { + val differentAdminGroupId = UUID.randomUUID().toString + + val testZones = (1 until 10).map { num => + val z = + okZone.copy( + name = num.toString + ".", + id = UUID.randomUUID().toString, + adminGroupId = testZoneAdminGroupId, + acl = testZoneAcl + ) + + // we are going to have 5 zones that have a different admin group id + if (num % 2 == 0) z.copy(adminGroupId = differentAdminGroupId) else z + } + + val expectedZones = testZones.filter(_.adminGroupId == differentAdminGroupId) + + val f = + for { + _ <- saveZones(testZones) + zones <- repo.getZonesByAdminGroupId(differentAdminGroupId) + } yield zones + + whenReady(f, timeout) { retrieved => + retrieved should contain theSameElementsAs expectedZones + } + } + } +} diff --git a/modules/api/src/main/protobuf/VinylDNSProto.proto b/modules/api/src/main/protobuf/VinylDNSProto.proto new file mode 100644 index 000000000..963da3bd3 --- /dev/null +++ b/modules/api/src/main/protobuf/VinylDNSProto.proto @@ -0,0 +1,180 @@ +// VinylDNSProto.proto +option java_package = "vinyldns.proto"; +option optimize_for = SPEED; + +message ZoneConnection { + required string name = 1; + required string keyName = 2; + required string key = 3; + required string primaryServer = 4; +} + +message ACLRule { + required string accessLevel = 1; + optional string description = 2; + optional string userId = 3; + optional string groupId = 4; + optional string recordMask = 5; + repeated string recordTypes = 6; +} + +message ZoneACL { + repeated ACLRule rules = 1; +} + +message Zone { + required string id = 1; + required string name = 2; + required string email = 3; + required string status = 4; + required int64 created = 5; + optional int64 updated = 6; + optional ZoneConnection connection = 7; + required string account = 8; + optional bool shared = 9 [default = false]; + optional ZoneConnection transferConnection = 10; + optional ZoneACL acl = 11; + optional string adminGroupId = 12 [default = "system"]; + optional int64 latestSync = 13; +} + +message AData { + required string address = 1; +} + +message AAAAData { + required string address = 1; +} + +message CNAMEData { + required string cname = 1; +} + +message MXData { + required int32 preference = 1; + required string exchange = 2; +} + +message NSData { + required string nsdname = 1; +} + +message PTRData { + required string ptrdname = 1; +} + +message SOAData { + required string mname = 1; + required string rname = 2; + required int64 serial = 3; + required int64 refresh = 4; + required int64 retry = 5; + required int64 expire = 6; + required int64 minimum = 7; +} + +message SPFData { + required string text = 1; +} + +message SRVData { + required int32 priority = 1; + required int32 weight = 2; + required int32 port = 3; + required string target = 4; +} + +message SSHFPData { + required int32 algorithm = 1; + 
required int32 typ = 2; + required string fingerPrint = 3; +} + +message TXTData { + required string text = 1; +} + +message RecordData { + required bytes data = 1; +} + +message RecordSet { + required string zoneId = 1; + required string id = 2; + required string name = 3; + required string typ = 4; + required int64 ttl = 5; + required string status = 6; + required int64 created = 7; + optional int64 updated = 8; + repeated RecordData record = 9; + required string account = 10; +} + +message RecordSetChange { + required string id = 1; + required Zone zone = 2; + required RecordSet recordSet = 3; + required string userId = 4; + required string typ = 5; + required string status = 6; + required int64 created = 7; + optional string systemMessage = 8; + optional RecordSet updates = 9; + repeated string singleBatchChangeIds = 10; +} + +message ZoneChange { + required string id = 1; + required string userId = 2; + required string typ = 3; + required string status = 4; + required int64 created = 5; + required Zone zone = 6; + optional string systemMessage = 7; +} + +message Group { + required string id = 1; + required string name = 2; + required string email = 3; + required int64 created = 4; + required string status = 5; + repeated string memberIds = 6; + repeated string adminUserIds = 7; + optional string description = 8; +} + +message GroupChange { + required string groupChangeId = 1; + required string groupId = 2; + required string changeType = 3; + required string userId = 4; + required int64 created = 5; + required Group newGroup = 6; + optional Group oldGroup = 7; +} + +message SingleAddChange { + required int64 ttl = 1; + required RecordData recordData = 2; +} + +message SingleChangeData { + required bytes data = 1; +} + +message SingleChange { + required string id = 1; + required string status = 2; + required string zoneId = 3; + required string recordName = 4; + required string changeType = 5; + required string inputName = 6; + required string zoneName = 7; + required string recordType = 8; + optional string systemMessage = 9; + optional string recordChangeId = 10; + optional string recordSetId = 11; + optional SingleChangeData changeData = 12; +} diff --git a/modules/api/src/main/resources/application.conf b/modules/api/src/main/resources/application.conf new file mode 100644 index 000000000..431fc4669 --- /dev/null +++ b/modules/api/src/main/resources/application.conf @@ -0,0 +1,66 @@ +################################################################################################################ +# This configuration is used primarily when running re-start or starting Vinyll locally. The configuration +# presumes a stand-alone Vinyll server with no backend services. +################################################################################################################ +akka { + loglevel = "ERROR" + + # The following settings are required to have Akka logging output to SLF4J and logback; without + # these, akka will output to STDOUT + loggers = ["akka.event.slf4j.Slf4jLogger"] + logging-filter = "akka.event.slf4j.Slf4jLoggingFilter" + logger-startup-timeout = 30s + + actor { + provider = "akka.actor.LocalActorRefProvider" + } +} + +akka.http { + server { + # The time period within which the TCP binding process must be completed. + # Set to `infinite` to disable. 
+ bind-timeout = 5s + + # Show verbose error messages back to the client + verbose-error-messages = on + } + + parsing { + # Spray doesn't like the AWS4 headers + illegal-header-warnings = on + } +} + +vinyldns { + sqs { + access-key = "x" + secret-key = "x" + signing-region = "x" + service-endpoint = "http://localhost:9324/" + queue-url = "http://localhost:9324/queue/vinyldns-zones" // this is in the docker/elasticmq/custom.conf file + } + + sync-delay = 10000 # 10 second delay for resyncing zone + + db { + local-mode = true # indicates that we should run migrations as we are running in memory + } + + batch-change-limit = 20 # Max change limit per batch request + + # this key is used in order to encrypt/decrypt DNS TSIG keys. We use this dummy one for test purposes, this + # should be overridden with a real value that is hidden for production deployment + crypto { + type = "vinyldns.core.crypto.NoOpCrypto" + } + + monitoring { + logging-interval = 3600s + } + + # log prometheus metrics to logger factory + metrics { + log-to-console = false + } +} diff --git a/modules/api/src/main/resources/db-migrations.conf b/modules/api/src/main/resources/db-migrations.conf new file mode 100644 index 000000000..7e5914570 --- /dev/null +++ b/modules/api/src/main/resources/db-migrations.conf @@ -0,0 +1,119 @@ +################################################################################################################ +# The configuration used when running migrations +# To use this config, specify -Dconfig.resource=db-migrations.conf +################################################################################################################ +akka { + loglevel = "INFO" + log-dead-letters-during-shutdown = off + log-dead-letters = 0 + + actor { + provider = "akka.actor.LocalActorRefProvider" + } + + persistence { + journal.plugin = "inmemory-journal" + snapshot-store.plugin = "inmemory-snapshot-store" + } +} + +vinyldns { + rest { + host = "localhost" + port = 9002 + } + + db { + local-mode = false + default { + driver = "org.mariadb.jdbc.Driver" + + # requires these as environment variables, will fail if not present + migrationUrl = ${JDBC_MIGRATION_URL} + user = ${JDBC_USER} + password = ${JDBC_PASSWORD} + + poolInitialSize = 10 + poolMaxSize = 20 + connectionTimeoutMillis=5000 + maxLifeTime = 600000 + } + } + + dynamo { + tablePrefix = ${DYNAMO_TABLE_PREFIX} + key = ${DYNAMO_KEY} + secret = ${DYNAMO_SECRET} + endpoint = "https://dynamodb.us-east-1.amazonaws.com" + endpoint = ${?DYNAMO_ENDPOINT} + } + + zoneChanges { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"zoneChange" + provisionedReads=100 + provisionedWrites=100 + } + } + recordSet { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"recordSet" + provisionedReads=100 + provisionedWrites=100 + } + } + + recordChange { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"recordChange" + provisionedReads=100 + provisionedWrites=100 + } + } + + users { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"users" + provisionedReads=100 + provisionedWrites=100 + } + } + + groups { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"groups" + provisionedReads=100 + provisionedWrites=100 + } + } + + membership { + dummy = false + + dynamo { + tableName = ${vinyldns.dynamo.tablePrefix}"membership" + provisionedReads=100 + provisionedWrites=100 + } + } + + groupChanges { + dummy = false + + dynamo { + tableName = 
${vinyldns.dynamo.tablePrefix}"groupChanges"
+ provisionedReads=100
+ provisionedWrites=100
+ }
+ }
+}
diff --git a/modules/api/src/main/resources/db/migration/V1__Zones.sql b/modules/api/src/main/resources/db/migration/V1__Zones.sql
new file mode 100644
index 000000000..25b87e2fd
--- /dev/null
+++ b/modules/api/src/main/resources/db/migration/V1__Zones.sql
@@ -0,0 +1,33 @@
+CREATE SCHEMA IF NOT EXISTS ${dbName};
+
+USE ${dbName};
+
+/*
+Create the Zone table. We are not storing the shared flag or the account here, as the new Zone repo
+is not planned to be backward compatible and we would otherwise carry data in the table that we do not need.
+*/
+CREATE TABLE zone (
+ id CHAR(36) NOT NULL,
+ name VARCHAR(256) NOT NULL,
+ admin_group_id CHAR(36) NOT NULL,
+ data BLOB NOT NULL,
+ PRIMARY KEY (id),
+ INDEX zone_name_index (name),
+ INDEX zone_admin_group_id_index (admin_group_id)
+);
+
+/*
+The zone_access table provides a lookup to easily find the zones that an individual user has access to.
+The accessor_id is either a group id OR a user id.
+The zone_id is the id of the zone the accessor is allowed to access.
+*/
+CREATE TABLE zone_access (
+ accessor_id CHAR(36) NOT NULL,
+ zone_id CHAR(36) NOT NULL,
+ PRIMARY KEY (accessor_id, zone_id),
+ CONSTRAINT fk_zone_access FOREIGN KEY (zone_id)
+ REFERENCES zone(id)
+ ON DELETE CASCADE,
+ INDEX user_id_index (accessor_id),
+ INDEX zone_id_index (zone_id)
+);
diff --git a/modules/api/src/main/resources/db/migration/V2__BatchChanges.sql b/modules/api/src/main/resources/db/migration/V2__BatchChanges.sql
new file mode 100644
index 000000000..7823eda17
--- /dev/null
+++ b/modules/api/src/main/resources/db/migration/V2__BatchChanges.sql
@@ -0,0 +1,44 @@
+CREATE SCHEMA IF NOT EXISTS ${dbName};
+
+USE ${dbName};
+
+/*
+Create the batch_change table. This table stores the metadata of a batch change.
+It supports easy queries by batch change id, by user_id, and by the combination of user_id and created_time.
+*/
+CREATE TABLE batch_change (
+ id CHAR(36) NOT NULL,
+ user_id CHAR(36) NOT NULL,
+ user_name VARCHAR(45) NOT NULL,
+ created_time DATETIME NOT NULL,
+ comments VARCHAR(1024) NULL,
+ PRIMARY KEY (id),
+ INDEX batch_change_user_id_index (user_id ASC),
+ INDEX batch_change_user_id_created_time_index (user_id ASC, created_time ASC));
+
+/*
+Create the single_change table. This table stores the single changes and associates each one with its batch change via a foreign key.
+It stores the single change data as encoded protobuf in the data BLOB. Whenever any column in the table is updated, the data column must be updated too.
+Reading the data column and decoding the protobuf is enough to recover all of the data for a single change.
+It also stores the ids of the associated zone, record set, and record set change, which allow fetching additional data from DynamoDB, where those entities are stored.
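+
+As a rough illustration (not part of the migration itself), and assuming seq_num records each change's
+position within its batch, the encoded changes of one batch could be loaded in order with:
+  SELECT data FROM single_change WHERE batch_change_id = ? ORDER BY seq_num ASC;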
+*/ +CREATE TABLE single_change ( + id CHAR(36) NOT NULL, + seq_num SMALLINT NOT NULL, + input_name VARCHAR(45) NOT NULL, + change_type VARCHAR(20) NOT NULL, + data BLOB NOT NULL, + status VARCHAR(10) NOT NULL, + batch_change_id CHAR(36) NOT NULL, + record_set_change_id CHAR(36) NULL, + record_set_id CHAR(36) NULL, + zone_id CHAR(36) NOT NULL, + PRIMARY KEY (id), + INDEX batch_change_id_index (batch_change_id ASC), + INDEX record_set_change_id_index (record_set_change_id ASC), + CONSTRAINT fk_single_change_batch_change1 + FOREIGN KEY (batch_change_id) + REFERENCES ${dbName}.batch_change (id) + ON DELETE CASCADE); + + diff --git a/modules/api/src/main/resources/logback.xml b/modules/api/src/main/resources/logback.xml new file mode 100644 index 000000000..a28642adc --- /dev/null +++ b/modules/api/src/main/resources/logback.xml @@ -0,0 +1,24 @@ + + + + + %d [test] %-5p | \(%logger{4}:%line\) | %msg %n + + + + + + + + + + + + + + + + + + + diff --git a/modules/api/src/main/resources/reference.conf b/modules/api/src/main/resources/reference.conf new file mode 100644 index 000000000..9bbb790e3 --- /dev/null +++ b/modules/api/src/main/resources/reference.conf @@ -0,0 +1,148 @@ +################################################################################################################ +# The default configuration values for Vinyll. All configuration values that we use and process in Vinyl +# MUST have a corresponding value in here in the event that the application is not configured, otherwise +# a ConfigurationMissing exception will be thrown by the typesafe config +################################################################################################################ +vinyldns { + + # if we should start up polling for change requests, set this to false for the inactive cluster + processing-disabled = false + + sqs { + polling-interval = 250millis + } + + # approved name servers that are allowable, default to our internal name servers for test + approved-name-servers = [ + "172.17.42.1.", + "ns1.parent.com." + ] + + # approved admin groups that are allowed to manage ns recordsets + approved-ns-groups = [ + "ok-group", + "ok" + ] + + # color should be green or blue, used in order to do blue/green deployment + color = "green" + + # version of vinyldns + version = "unknown" + + # time users have to wait to resync a zone + sync-delay = 600000 + + # we log our endpoint statistics to SLF4J on a period. 
This allows us to monitor the stats in SPLUNK + # this should be set to a reasonable duration; by default it is 60 seconds; we may want this to be very + # long in a test environment so we do not see stats at all + monitoring { + logging-interval = 60s + } + + # the host and port that the vinyldns service binds to + rest { + host = "127.0.0.1" + port = 9000 + } + + # JDBC Settings, these are all values in scalikejdbc-config, not our own + # these must be overridden to use MYSQL for production use + # assumes a docker or mysql instance running locally + db { + name = "vinyldns" + local-mode = false + default { + driver = "org.mariadb.jdbc.Driver" + migrationUrl = "jdbc:mariadb://localhost:3306/?user=root&password=pass" + url = "jdbc:mariadb://localhost:3306/vinyldns?user=root&password=pass" + user = "root" + password = "pass" + poolInitialSize = 10 + poolMaxSize = 20 + connectionTimeoutMillis = 1000 + maxLifeTime = 600000 + } + } + + dynamo { + key = "vinyldnsTest" + secret = "notNeededForDynamoDbLocal" + endpoint = "http://127.0.0.1:19000" + region = "us-east-1" # note: we are always in us-east-1, but this can be overridden + } + + zoneChanges { + dynamo { + tableName = "zoneChanges" + provisionedReads=30 + provisionedWrites=30 + } + } + recordSet { + dynamo { + tableName = "recordSet" + provisionedReads=30 + provisionedWrites=30 + } + } + recordChange { + dynamo { + tableName = "recordChange" + provisionedReads=30 + provisionedWrites=30 + } + } + users { + dynamo { + tableName = "users" + provisionedReads=30 + provisionedWrites=30 + } + } + groups { + dynamo { + tableName = "groups" + provisionedReads=30 + provisionedWrites=30 + } + } + groupChanges { + dynamo { + tableName = "groupChanges" + provisionedReads=30 + provisionedWrites=30 + } + } + membership { + dynamo { + tableName = "membership" + provisionedReads=30 + provisionedWrites=30 + } + } + + defaultZoneConnection { + name = "vinyldns." + keyName = "vinyldns." + key = "nzisn+4G2ldMn0q1CV3vsg==" + primaryServer = "127.0.0.1:19001" + } + + defaultTransferConnection { + name = "vinyldns." + keyName = "vinyldns." 
+ key = "nzisn+4G2ldMn0q1CV3vsg==" + primaryServer = "127.0.0.1:19001" + } + + batch-change-limit = 20 + + # whether user secrets are expected to be encrypted or not + encrypt-user-secrets = false + + # log prometheus metrics to logger factory + metrics { + log-to-console = true + } +} diff --git a/modules/api/src/main/resources/test/logback.xml b/modules/api/src/main/resources/test/logback.xml new file mode 100644 index 000000000..a28642adc --- /dev/null +++ b/modules/api/src/main/resources/test/logback.xml @@ -0,0 +1,24 @@ + + + + + %d [test] %-5p | \(%logger{4}:%line\) | %msg %n + + + + + + + + + + + + + + + + + + + diff --git a/modules/api/src/main/resources/vinyldns-ascii.txt b/modules/api/src/main/resources/vinyldns-ascii.txt new file mode 100644 index 000000000..bb10a843a --- /dev/null +++ b/modules/api/src/main/resources/vinyldns-ascii.txt @@ -0,0 +1,6 @@ + .__ .__ .___ +___ _|__| ____ ___.__.| | __| _/____ ______ +\ \/ / |/ < | || | / __ |/ \ / ___/ + \ /| | | \___ || |__/ /_/ | | \\___ \ + \_/ |__|___| / ____||____/\____ |___| /____ > + \/\/ \/ \/ \/ diff --git a/modules/api/src/main/scala/db/migration/MigrationRunner.scala b/modules/api/src/main/scala/db/migration/MigrationRunner.scala new file mode 100644 index 000000000..734b3cb43 --- /dev/null +++ b/modules/api/src/main/scala/db/migration/MigrationRunner.scala @@ -0,0 +1,58 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package db.migration + +import org.flywaydb.core.Flyway +import org.flywaydb.core.api.FlywayException +import org.slf4j.LoggerFactory +import vinyldns.api.repository.mysql.VinylDNSJDBC +import scala.collection.JavaConverters._ + +object MigrationRunner { + + private val logger = LoggerFactory.getLogger("MigrationRunner") + + def main(args: Array[String]): Unit = { + + logger.info("Running migrations...") + + val migration = new Flyway() + val dbName = VinylDNSJDBC.config.getString("name") + + // Must use the classpath to pull in both scala and sql migrations + migration.setLocations("classpath:db/migration") + migration.setDataSource(VinylDNSJDBC.instance.migrationDataSource) + migration.setSchemas(dbName) + val placeholders = Map("dbName" -> dbName) + migration.setPlaceholders(placeholders.asJava) + + // Runs ALL flyway migrations including SQL and scala + try { + migration.migrate() + logger.info("migrations complete") + System.exit(0) + } catch { + case fe: FlywayException => + logger.error("migrations failed!", fe) + + // Repair will fix meta data issues (if any) in the flyway database table. 
Recommended when + // a catastrophic failure occurs + migration.repair() + System.exit(1) + } + } +} diff --git a/modules/api/src/main/scala/vinyldns/api/Boot.scala b/modules/api/src/main/scala/vinyldns/api/Boot.scala new file mode 100644 index 000000000..e58fc78e4 --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/Boot.scala @@ -0,0 +1,189 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api + +import akka.actor.ActorSystem +import akka.http.scaladsl.Http +import akka.stream.{ActorMaterializer, Materializer} +import cats.effect.IO +import io.prometheus.client.CollectorRegistry +import io.prometheus.client.dropwizard.DropwizardExports +import io.prometheus.client.hotspot.DefaultExports +import org.slf4j.LoggerFactory +import vinyldns.api.domain.AccessValidations +import vinyldns.api.domain.batch.{ + BatchChangeConverter, + BatchChangeRepository, + BatchChangeService, + BatchChangeValidations +} +import vinyldns.api.domain.membership._ +import vinyldns.api.domain.record.{RecordChangeRepository, RecordSetRepository, RecordSetService} +import vinyldns.api.domain.zone._ +import vinyldns.api.engine.ProductionZoneCommandHandler +import vinyldns.api.engine.sqs.{SqsCommandBus, SqsConnection} +import vinyldns.api.repository.mysql.VinylDNSJDBC +import vinyldns.api.route.{HealthService, VinylDNSService} +import vinyldns.core.crypto.Crypto + +import scala.collection.JavaConverters._ +import scala.concurrent.{ExecutionContext, Future} +import scala.io.{Codec, Source} + +object Boot extends App { + + private val logger = LoggerFactory.getLogger("Boot") + private implicit val system: ActorSystem = VinylDNSConfig.system + private implicit val materializer: Materializer = ActorMaterializer() + private implicit val ec: ExecutionContext = scala.concurrent.ExecutionContext.global + + def vinyldnsBanner(): IO[String] = IO { + val stream = getClass.getResourceAsStream("/vinyldns-ascii.txt") + val vinyldnsBannerText = "\n" + Source.fromInputStream(stream)(Codec.UTF8).mkString + "\n" + stream.close() + vinyldnsBannerText + } + + /* Boot straps the entire application, if anything fails, we all fail! */ + def runApp(): IO[Future[Http.ServerBinding]] = { + def getNSApprovedGroupIds( + allGroups: Future[Set[Group]], + approved: List[String]): IO[Set[String]] = { + val ids = allGroups.map { + _.collect { + case grp if approved.contains(grp.name) => grp.id + } + } + IO.fromFuture(IO(ids)) + } + + // Use an effect type to lift anything that can fail into the effect type. This ensures + // that if anything fails, the app does not start! 
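+ // For example, IO(UserRepository()) below only describes building the repository; nothing runs
+ // until runApp() is executed at the end of this file, and an exception thrown inside any of these
+ // steps surfaces there as a startup failure rather than a partially started server.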
+ for { + banner <- vinyldnsBanner() + _ <- Crypto.loadCrypto(VinylDNSConfig.cryptoConfig) // load crypto + _ <- IO(VinylDNSJDBC.instance) // initializes our JDBC repositories + userRepo <- IO(UserRepository()) + groupRepo <- IO(GroupRepository()) + membershipRepo <- IO(MembershipRepository()) + zoneRepo <- IO(ZoneRepository()) + groupChangeRepo <- IO(GroupChangeRepository()) + recordSetRepo <- IO(RecordSetRepository()) + recordChangeRepo <- IO(RecordChangeRepository()) + zoneChangeRepo <- IO(ZoneChangeRepository()) + batchChangeRepo <- IO(BatchChangeRepository()) + sqsConfig <- IO(VinylDNSConfig.sqsConfig) + sqsConnection <- IO(SqsConnection(sqsConfig)) + processingDisabled <- IO(VinylDNSConfig.vinyldnsConfig.getBoolean("processing-disabled")) + processingSignal <- fs2.async.signalOf[IO, Boolean](processingDisabled) + restHost <- IO(VinylDNSConfig.restConfig.getString("host")) + restPort <- IO(VinylDNSConfig.restConfig.getInt("port")) + approvedNsGroupNames <- IO( + VinylDNSConfig.vinyldnsConfig.getStringList("approved-ns-groups").asScala.toList) + approvedNsGroupIds <- getNSApprovedGroupIds(groupRepo.getAllGroups(), approvedNsGroupNames) + batchChangeLimit <- IO(VinylDNSConfig.vinyldnsConfig.getInt("batch-change-limit")) + syncDelay <- IO(VinylDNSConfig.vinyldnsConfig.getInt("sync-delay")) + _ <- fs2.async.start( + ProductionZoneCommandHandler.run( + sqsConnection, + processingSignal, + zoneRepo, + zoneChangeRepo, + recordChangeRepo, + recordSetRepo, + batchChangeRepo, + sqsConfig)) + } yield { + val zoneValidations = new ZoneValidations(syncDelay) + val accessValidations = new AccessValidations(approvedNsGroupIds) + val batchChangeValidations = new BatchChangeValidations(batchChangeLimit, accessValidations) + val commandBus = new SqsCommandBus(sqsConnection) + val membershipService = + new MembershipService(groupRepo, userRepo, membershipRepo, zoneRepo, groupChangeRepo) + val connectionValidator = + new ZoneConnectionValidator(VinylDNSConfig.defaultZoneConnection, system.scheduler) + val recordSetService = new RecordSetService( + zoneRepo, + recordSetRepo, + recordChangeRepo, + userRepo, + commandBus, + accessValidations) + val zoneService = new ZoneService( + zoneRepo, + groupRepo, + userRepo, + zoneChangeRepo, + connectionValidator, + commandBus, + zoneValidations, + accessValidations) + val healthService = new HealthService(zoneRepo) + val batchChangeConverter = new BatchChangeConverter(batchChangeRepo, commandBus) + val batchChangeService = new BatchChangeService( + zoneRepo, + recordSetRepo, + batchChangeValidations, + batchChangeRepo, + batchChangeConverter) + val collectorRegistry = CollectorRegistry.defaultRegistry + val vinyldnsService = new VinylDNSService( + membershipService, + processingSignal, + zoneService, + healthService, + recordSetService, + batchChangeService, + collectorRegistry) + + DefaultExports.initialize() + collectorRegistry.register(new DropwizardExports(VinylDNSMetrics.metricsRegistry)) + + // Need to register a jvm shut down hook to make sure everything is cleaned up, especially important for + // running locally. 
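+ // The hook below shuts down the SQS connection first, then terminates the ActorSystem, and only
+ // exits the JVM once the ActorSystem has fully terminated.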
+ sys.ShutdownHookThread { + logger.error("STOPPING VINYLDNS SERVER...") + + // shutdown sqs gracefully + sqsConnection.shutdown() + + // exit JVM when ActorSystem has been terminated + system.registerOnTermination(System.exit(0)) + + // shut down ActorSystem + system.terminate() + + () + } + + logger.error(s"STARTING VINYLDNS SERVER ON $restHost:$restPort") + logger.error(banner) + + // Starts up our http server + Http().bindAndHandle(vinyldnsService.routes, restHost, restPort) + } + } + + // runApp gives us a Task, we actually have to run it! Running it will yield a Future, which is our app! + runApp().unsafeRunAsync { + case Right(_) => + logger.error("VINYLDNS SERVER STARTED SUCCESSFULLY!!") + case Left(startupFailure) => + logger.error(s"VINYLDNS SERVER UNABLE TO START $startupFailure") + startupFailure.printStackTrace() + } +} diff --git a/modules/api/src/main/scala/vinyldns/api/Instrumented.scala b/modules/api/src/main/scala/vinyldns/api/Instrumented.scala new file mode 100644 index 000000000..709b131e0 --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/Instrumented.scala @@ -0,0 +1,56 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api + +import java.util.concurrent.TimeUnit + +import com.codahale.metrics.{JmxReporter, MetricRegistry, Slf4jReporter} +import nl.grons.metrics.scala.InstrumentedBuilder +import org.slf4j.LoggerFactory + +object VinylDNSMetrics { + + val metricsRegistry: MetricRegistry = new MetricRegistry + + // Output all VinylDNS metrics as jmx under the "vinyldns.api" domain as milliseconds + JmxReporter + .forRegistry(metricsRegistry) + .inDomain("vinyldns.api") + .build() + .start() + + val logReporter: Slf4jReporter = + Slf4jReporter + .forRegistry(metricsRegistry) + .outputTo(LoggerFactory.getLogger("vinyldns.api.metrics")) + .build() + + val logMetrics = VinylDNSConfig.vinyldnsConfig.getBoolean("metrics.log-to-console") + if (logMetrics) { + // Record metrics once per minute + logReporter.start(1, TimeUnit.MINUTES) + } +} + +/** + * Guidance from the scala-metrics library we are using, this is to be included in classes to help out with + * metric recording + */ +trait Instrumented extends InstrumentedBuilder { + + val metricRegistry: MetricRegistry = VinylDNSMetrics.metricsRegistry +} diff --git a/modules/api/src/main/scala/vinyldns/api/Interfaces.scala b/modules/api/src/main/scala/vinyldns/api/Interfaces.scala new file mode 100644 index 000000000..5c1448868 --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/Interfaces.scala @@ -0,0 +1,124 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. 
+ * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api + +import akka.actor.Scheduler +import akka.pattern.after + +import scala.concurrent.duration.FiniteDuration +import scala.concurrent.{ExecutionContext, Future} +import scalaz._ +import scalaz.syntax.ToEitherOps + +object Interfaces extends ToEitherOps { + + /** + * the type returned from the ZoneActor is a ScalaZ disjunction \/, EitherT extends that to support + * Future[ZoneError \/ ZoneEvt] this makes it easy to use results in for comprehensions among other things + */ + type Result[A] = EitherT[Future, Throwable, A] + + /* Transforms a disjunction to a Result */ + def result[A](either: => Throwable \/ A): Result[A] = Result(Future.successful(either)) + + /* Transforms any value at all into a positive result */ + def result[A](a: A): Result[A] = Result(Future.successful(a.right)) + + /* Transforms an error into a Result with a left disjunction */ + def result[A](error: Throwable): Result[A] = Result(Future.successful(\/.left(error))) + + def ensuring(onError: => Throwable)(check: => Boolean): Disjunction[Throwable, Unit] = + if (check) ().right else onError.left + + /** + * If the future is a disjunction already, return the disjunction; otherwise return the successful value + * as a disjunction + */ + def result[A](fut: Future[_])(implicit ec: ExecutionContext): Result[A] = + Result( + fut + .map { + case disj: Disjunction[_, _] => disj + case e: Throwable => e.left + case a => a.right + } + .recover { + case e: Throwable => e.left + } + .mapTo[Throwable \/ A] + ) + + def withTimeout[A]( + theFuture: => Future[A], + duration: FiniteDuration, + error: Throwable, + scheduler: Scheduler)(implicit ec: ExecutionContext): Result[A] = result[A] { + val timeOut = after(duration = duration, using = scheduler)(Future.failed(error)) + + Future.firstCompletedOf(Seq(theFuture, timeOut)) + } + + /* Pimps futures to easily lift the future to a Result */ + implicit class FutureResultImprovements(fut: Future[_])(implicit ec: ExecutionContext) { + + /* Lifts a future into a Result */ + def toResult[A]: Result[A] = result[A](fut) + } + + /*Convenience operations for working with Future of Option*/ + implicit class FutureOptionImprovements[A](fut: Future[Option[A]])( + implicit ec: ExecutionContext) { + + /* If the result of the future is None, then fail with the provided parameter `ifNone` */ + def orFail(ifNone: => Throwable): Future[Throwable \/ A] = fut.map { + case Some(a) => a.right + case None => ifNone.left + } + } + + /* Pimps any value to easily lift the class to a Result */ + implicit class AnyResultImprovements[A](a: A)(implicit ec: ExecutionContext) { + def toResult: Result[A] = result[A](a) + } + + /* Pimps any existing Disjunction to easily lift the class to a Result */ + implicit class DisjunctionImprovements[A](disj: Throwable \/ A)(implicit ec: ExecutionContext) { + def toResult: Result[A] = result[A](disj) + } + + implicit class BooleanImprovements(bool: Boolean)(implicit ec: ExecutionContext) { + /* If false, then fail with the provided parameter `ifFalse` */ + def failWith(ifFalse: Throwable): Result[Unit] = + if (bool) result(()) + else 
result[Unit](ifFalse) + } + + implicit class OptionImprovements[A](opt: Option[A])(implicit ec: ExecutionContext) { + + /* If the result of the future is None, then fail with the provided parameter `ifNone` */ + def orFail(ifNone: Throwable): Result[A] = opt match { + case Some(a) => result(a) + case None => result[A](ifNone) + } + } + +} + +object Result { + + def apply[A](f: => Future[Throwable \/ A]): Interfaces.Result[A] = EitherT(f) +} diff --git a/modules/api/src/main/scala/vinyldns/api/VinylDNSConfig.scala b/modules/api/src/main/scala/vinyldns/api/VinylDNSConfig.scala new file mode 100644 index 000000000..41b17f808 --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/VinylDNSConfig.scala @@ -0,0 +1,61 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api + +import akka.actor.ActorSystem +import com.typesafe.config.{Config, ConfigFactory} +import vinyldns.api.domain.zone.ZoneConnection + +object VinylDNSConfig { + + lazy val config: Config = ConfigFactory.load() + lazy val vinyldnsConfig: Config = config.getConfig("vinyldns") + lazy val dynamoConfig: Config = vinyldnsConfig.getConfig("dynamo") + lazy val restConfig: Config = vinyldnsConfig.getConfig("rest") + lazy val monitoringConfig: Config = vinyldnsConfig.getConfig("monitoring") + lazy val accountStoreConfig: Config = vinyldnsConfig.getConfig("accounts") + lazy val zoneChangeStoreConfig: Config = vinyldnsConfig.getConfig("zoneChanges") + lazy val recordSetStoreConfig: Config = vinyldnsConfig.getConfig("recordSet") + lazy val recordChangeStoreConfig: Config = vinyldnsConfig.getConfig("recordChange") + lazy val usersStoreConfig: Config = vinyldnsConfig.getConfig("users") + lazy val groupsStoreConfig: Config = vinyldnsConfig.getConfig("groups") + lazy val groupChangesStoreConfig: Config = vinyldnsConfig.getConfig("groupChanges") + lazy val membershipStoreConfig: Config = vinyldnsConfig.getConfig("membership") + lazy val dbConfig: Config = vinyldnsConfig.getConfig("db") + lazy val sqsConfig: Config = vinyldnsConfig.getConfig("sqs") + lazy val cryptoConfig: Config = vinyldnsConfig.getConfig("crypto") + lazy val system: ActorSystem = ActorSystem("VinylDNS", VinylDNSConfig.config) + lazy val encryptUserSecrets: Boolean = vinyldnsConfig.getBoolean("encrypt-user-secrets") + + lazy val defaultZoneConnection: ZoneConnection = { + val connectionConfig = VinylDNSConfig.vinyldnsConfig.getConfig("defaultZoneConnection") + val name = connectionConfig.getString("name") + val keyName = connectionConfig.getString("keyName") + val key = connectionConfig.getString("key") + val primaryServer = connectionConfig.getString("primaryServer") + ZoneConnection(name, keyName, key, primaryServer).encrypted() + } + + lazy val defaultTransferConnection: ZoneConnection = { + val connectionConfig = VinylDNSConfig.vinyldnsConfig.getConfig("defaultTransferConnection") + val name = connectionConfig.getString("name") + val keyName = 
connectionConfig.getString("keyName") + val key = connectionConfig.getString("key") + val primaryServer = connectionConfig.getString("primaryServer") + ZoneConnection(name, keyName, key, primaryServer).encrypted() + } +} diff --git a/modules/api/src/main/scala/vinyldns/api/domain/AccessValidations.scala b/modules/api/src/main/scala/vinyldns/api/domain/AccessValidations.scala new file mode 100644 index 000000000..a6b3cfc9a --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/domain/AccessValidations.scala @@ -0,0 +1,205 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.domain + +import scalaz.Disjunction +import vinyldns.api.Interfaces.ensuring +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.record.{RecordSet, RecordType} +import vinyldns.api.domain.record.RecordType.RecordType +import vinyldns.api.domain.zone.AccessLevel.AccessLevel +import vinyldns.api.domain.zone._ + +class AccessValidations(approvedNsGroups: Set[String] = Set()) extends AccessValidationAlgebra { + + def canSeeZone(auth: AuthPrincipal, zone: Zone): Disjunction[Throwable, Unit] = + ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} cannot access zone '${zone.name}'"))( + (hasZoneAdminAccess(auth, zone) || zone.shared) || userHasAclRules(auth, zone)) + + def canChangeZone(auth: AuthPrincipal, zone: Zone): Disjunction[Throwable, Unit] = + ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} cannot modify zone '${zone.name}'"))( + hasZoneAdminAccess(auth, zone)) + + def canAddRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] = { + val accessLevel = getAccessLevel(auth, recordName, recordType, zone) + val access = ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} does not have access to create " + + s"$recordName.${zone.name}"))( + accessLevel == AccessLevel.Delete || accessLevel == AccessLevel.Write) + + for { + _ <- access + _ <- doNSCheck(auth, recordType, zone) + } yield ().right + } + + def canUpdateRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] = { + val accessLevel = getAccessLevel(auth, recordName, recordType, zone) + val access = ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} does not have access to update " + + s"$recordName.${zone.name}"))( + accessLevel == AccessLevel.Delete || accessLevel == AccessLevel.Write) + + for { + _ <- access + _ <- doNSCheck(auth, recordType, zone) + } yield ().right + } + + def canDeleteRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] = { + val access = ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} does not have access to delete " + + s"$recordName.${zone.name}"))( + getAccessLevel(auth, recordName, recordType, zone) == 
AccessLevel.Delete) + + for { + _ <- access + _ <- doNSCheck(auth, recordType, zone) + } yield ().right + } + + def canViewRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] = + ensuring( + NotAuthorizedError(s"User ${auth.signedInUser.userName} does not have access to view " + + s"$recordName.${zone.name}"))( + getAccessLevel(auth, recordName, recordType, zone) != AccessLevel.NoAccess) + + def getListAccessLevels( + auth: AuthPrincipal, + recordSets: List[RecordSet], + zone: Zone): List[RecordSetInfo] = + if (hasZoneAdminAccess(auth, zone)) recordSets.map(RecordSetInfo(_, AccessLevel.Delete)) + else { + val rulesForUser = zone.acl.rules.filter(ruleAppliesToUser(auth, _)) + + def getAccessFromUserAcls(recordName: String, recordType: RecordType): AccessLevel = { + // user filter has already been applied + val validRules = rulesForUser.filter { rule => + ruleAppliesToRecordType(recordType, rule) && ruleAppliesToRecordName( + recordName, + recordType, + zone, + rule) + } + getPrioritizedAccessLevel(recordType, validRules) + } + + recordSets.map { rs => + val accessLevel = getAccessFromUserAcls(rs.name, rs.typ) + RecordSetInfo(rs, accessLevel) + } + } + + /* Non-algebra methods */ + def doNSCheck( + authPrincipal: AuthPrincipal, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] = { + def nsAuthorized: Boolean = + authPrincipal.signedInUser.isSuper || + (authPrincipal.isAuthorized(zone.adminGroupId) && approvedNsGroups.contains( + zone.adminGroupId)) + + ensuring( + NotAuthorizedError( + "Do not have permissions to manage NS recordsets, please contact vinyldns-support"))( + recordType != RecordType.NS || (recordType == RecordType.NS && nsAuthorized)) + } + + def hasZoneAdminAccess(auth: AuthPrincipal, zone: Zone): Boolean = + auth.isAuthorized(zone.adminGroupId) + + def getAccessFromAcl( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): AccessLevel = { + val validRules = zone.acl.rules.filter { rule => + ruleAppliesToUser(auth, rule) && ruleAppliesToRecordType(recordType, rule) && ruleAppliesToRecordName( + recordName, + recordType, + zone, + rule) + } + getPrioritizedAccessLevel(recordType, validRules) + } + + def userHasAclRules(auth: AuthPrincipal, zone: Zone): Boolean = + zone.acl.rules.exists(ruleAppliesToUser(auth, _)) + + // Pull ACL rules that are relevant for the user based on userId, groups + def ruleAppliesToUser(auth: AuthPrincipal, rule: ACLRule): Boolean = + (rule.userId, rule.groupId) match { + case (None, None) => true + case (Some(userId), _) if userId == auth.userId => true + case (_, Some(groupId)) if auth.memberGroupIds.contains(groupId) => true + case _ => false + } + + // Pull ACL rules that are relevant for the user based on record mask + def ruleAppliesToRecordName( + recordName: String, + recordType: RecordType, + zone: Zone, + rule: ACLRule): Boolean = + rule.recordMask match { + case Some(mask) if recordType == RecordType.PTR => + ReverseZoneHelpers.recordsetIsWithinCidrMask(mask, zone, recordName) + case Some(mask) => recordName.matches(mask) + case None => true + } + + // Pull ACL rules that are relevant for the record based on type + def ruleAppliesToRecordType(recordType: RecordType, rule: ACLRule): Boolean = + rule.recordTypes.isEmpty || rule.recordTypes.contains(recordType) + + def getPrioritizedAccessLevel(recordType: RecordType, rules: Set[ACLRule]): AccessLevel = + if (rules.isEmpty) { + AccessLevel.NoAccess + } else { + implicit 
val ruleOrder: ACLRuleOrdering = + if (recordType == RecordType.PTR) PTRACLRuleOrdering else ACLRuleOrdering + val topRule = rules.toSeq.min + topRule.accessLevel + } + + def getAccessLevel( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): AccessLevel = + if (hasZoneAdminAccess(auth, zone)) AccessLevel.Delete + else getAccessFromAcl(auth, recordName, recordType, zone) +} diff --git a/modules/api/src/main/scala/vinyldns/api/domain/AccessValidationsAlgebra.scala b/modules/api/src/main/scala/vinyldns/api/domain/AccessValidationsAlgebra.scala new file mode 100644 index 000000000..f84177045 --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/domain/AccessValidationsAlgebra.scala @@ -0,0 +1,60 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.domain + +import scalaz.Disjunction +import scalaz.syntax.ToEitherOps +import vinyldns.api.domain.auth.AuthPrincipal +import vinyldns.api.domain.record.RecordSet +import vinyldns.api.domain.record.RecordType.RecordType +import vinyldns.api.domain.zone.{RecordSetInfo, Zone} + +trait AccessValidationAlgebra extends ToEitherOps { + + def canSeeZone(auth: AuthPrincipal, zone: Zone): Disjunction[Throwable, Unit] + + def canChangeZone(auth: AuthPrincipal, zone: Zone): Disjunction[Throwable, Unit] + + def canAddRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] + + def canUpdateRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] + + def canDeleteRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] + + def canViewRecordSet( + auth: AuthPrincipal, + recordName: String, + recordType: RecordType, + zone: Zone): Disjunction[Throwable, Unit] + + def getListAccessLevels( + auth: AuthPrincipal, + recordSets: List[RecordSet], + zone: Zone): List[RecordSetInfo] +} diff --git a/modules/api/src/main/scala/vinyldns/api/domain/DomainValidationErrors.scala b/modules/api/src/main/scala/vinyldns/api/domain/DomainValidationErrors.scala new file mode 100644 index 000000000..1d312d5dd --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/domain/DomainValidationErrors.scala @@ -0,0 +1,124 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ + +package vinyldns.api.domain + +import vinyldns.api.domain.batch.SupportedBatchChangeRecordTypes +import vinyldns.api.domain.record.RecordType +import vinyldns.api.domain.record.RecordType.RecordType + +// $COVERAGE-OFF$ +sealed trait DomainValidationError { + def message: String +} + +final case class InvalidDomainName(param: String) extends DomainValidationError { + def message: String = + s"""Invalid domain name: "$param", valid domain names must be letters, numbers, and hyphens, """ + + "joined by dots, and terminated with a dot." +} + +final case class InvalidLength(param: String, minLengthInclusive: Int, maxLengthInclusive: Int) + extends DomainValidationError { + def message: String = + s"""Invalid length: "$param", length needs to be between $minLengthInclusive and $maxLengthInclusive characters.""" +} + +final case class InvalidEmail(param: String) extends DomainValidationError { + def message: String = s"""Invalid email address: "$param".""" +} + +final case class InvalidRecordType(param: String) extends DomainValidationError { + def message: String = + s"""Invalid record type: "$param", valid record types include ${RecordType.values}.""" +} + +final case class InvalidPortNumber(param: String, minPort: Int, maxPort: Int) + extends DomainValidationError { + def message: String = + s"""Invalid port number: "$param", port must be a number between $minPort and $maxPort.""" +} + +final case class InvalidIpv4Address(param: String) extends DomainValidationError { + def message: String = s"""Invalid IPv4 address: "$param".""" +} + +final case class InvalidIpv6Address(param: String) extends DomainValidationError { + def message: String = s"""Invalid IPv6 address: "$param".""" +} + +final case class InvalidIPAddress(param: String) extends DomainValidationError { + def message: String = s"""Invalid IP address: "$param".""" +} + +final case class InvalidTTL(param: Long) extends DomainValidationError { + def message: String = + s"""Invalid TTL: "${param.toString}", must be a number between """ + + s"${DomainValidations.TTL_MIN_LENGTH} and ${DomainValidations.TTL_MAX_LENGTH}." +} + +final case class InvalidMxPreference(param: Long) extends DomainValidationError { + def message: String = + s"""Invalid MX Preference: "${param.toString}", must be a number between """ + + s"${DomainValidations.MX_PREFERENCE_MIN_VALUE} and ${DomainValidations.MX_PREFERENCE_MAX_VALUE}." +} + +final case class InvalidBatchRecordType(param: String) extends DomainValidationError { + def message: String = + s"""Invalid Batch Record Type: "$param", valid record types for batch changes include """ + + s"${SupportedBatchChangeRecordTypes.get}." +} + +final case class ZoneDiscoveryError(name: String) extends DomainValidationError { + def message: String = + s"""Zone Discovery Failed: zone for "$name" does not exist in VinylDNS. """ + + "If zone exists, then it must be created in VinylDNS." +} + +final case class RecordAlreadyExists(name: String) extends DomainValidationError { + def message: String = + s"""Record "$name" Already Exists: cannot add an existing record; to update it, """ + + "issue a DeleteRecordSet then an Add." +} + +final case class RecordDoesNotExist(name: String) extends DomainValidationError { + def message: String = + s"""Record "$name" Does Not Exist: cannot delete a record that does not exist.""" +} + +final case class CnameIsNotUniqueError(name: String, typ: RecordType) + extends DomainValidationError { + def message: String = + "CNAME Conflict: CNAME record names must be unique. 
" + + s"""Existing record with name "$name" and type "$typ" conflicts with this record.""" +} + +final case class UserIsNotAuthorized(user: String) extends DomainValidationError { + def message: String = s"""User "$user" is not authorized.""" +} + +final case class RecordNameNotUniqueInBatch(name: String, typ: RecordType) + extends DomainValidationError { + def message: String = + s"""Record Name "$name" Not Unique In Batch Change: cannot have multiple "$typ" records with the same name.""" +} + +final case class RecordInReverseZoneError(name: String, typ: String) extends DomainValidationError { + def message: String = + "Invalid Record Type In Reverse Zone: record with name " + + s""""$name" and type "$typ" is not allowed in a reverse zone.""" +} +// $COVERAGE-ON$ diff --git a/modules/api/src/main/scala/vinyldns/api/domain/DomainValidations.scala b/modules/api/src/main/scala/vinyldns/api/domain/DomainValidations.scala new file mode 100644 index 000000000..046c920bc --- /dev/null +++ b/modules/api/src/main/scala/vinyldns/api/domain/DomainValidations.scala @@ -0,0 +1,155 @@ +/* + * Copyright 2018 Comcast Cable Communications Management, LLC + * + * Licensed under the Apache License, Version 2.0 (the "License"); + * you may not use this file except in compliance with the License. + * You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an "AS IS" BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. + */ + +package vinyldns.api.domain + +import scalaz.Scalaz._ +import scalaz._ +import vinyldns.api.domain.ValidationImprovements._ +import vinyldns.api.domain.record.RecordType.{RecordType, _} + +import scala.util.Try +import scala.util.matching.Regex + +/* + Object to house common domain validations + */ +object DomainValidations { + val validEmailRegex: Regex = """^([0-9a-zA-Z_\-\.]+)@([0-9a-zA-Z_\-\.]+)\.([a-zA-Z]{2,5})$""".r + val validFQDNRegex: Regex = + """^(?:([0-9a-zA-Z]{1,63}|[0-9a-zA-Z]{1}[0-9a-zA-Z\-\/]{0,61}[0-9a-zA-Z]{1})\.)*$""".r + val validIpv4Regex: Regex = + """^(?:(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)\.){3}(?:25[0-5]|2[0-4][0-9]|[01]?[0-9][0-9]?)$""".r + val validIpv6Regex: Regex = + """^( + #([0-9a-fA-F]{1,4}:){7,7}[0-9a-fA-F]{1,4}| + #([0-9a-fA-F]{1,4}:){1,7}:| + #([0-9a-fA-F]{1,4}:){1,6}:[0-9a-fA-F]{1,4}| + #([0-9a-fA-F]{1,4}:){1,5}(:[0-9a-fA-F]{1,4}){1,2}| + #([0-9a-fA-F]{1,4}:){1,4}(:[0-9a-fA-F]{1,4}){1,3}| + #([0-9a-fA-F]{1,4}:){1,3}(:[0-9a-fA-F]{1,4}){1,4}| + #([0-9a-fA-F]{1,4}:){1,2}(:[0-9a-fA-F]{1,4}){1,5}| + #[0-9a-fA-F]{1,4}:((:[0-9a-fA-F]{1,4}){1,6})| + #:((:[0-9a-fA-F]{1,4}){1,7}|:)| + #fe80:(:[0-9a-fA-F]{0,4}){0,4}%[0-9a-zA-Z]{1,}| + #::(ffff(:0{1,4}){0,1}:){0,1}((25[0-5]| + #(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])| + #([0-9a-fA-F]{1,4}:){1,4}:((25[0-5]|(2[0-4]|1{0,1}[0-9]){0,1}[0-9])\.){3,3}(25[0-5] + #|(2[0-4]|1{0,1}[0-9]){0,1}[0-9]) + #)$""".stripMargin('#').replaceAll("\n", "").r + val PORT_MIN_VALUE: Int = 0 + val PORT_MAX_VALUE: Int = 65535 + val HOST_MIN_LENGTH: Int = 2 + val HOST_MAX_LENGTH: Int = 255 + val TTL_MAX_LENGTH: Int = 2147483647 + val TTL_MIN_LENGTH: Int = 30 + val TXT_TEXT_MIN_LENGTH: Int = 1 + val TXT_TEXT_MAX_LENGTH: Int = 64764 + val MX_PREFERENCE_MIN_VALUE: Int = 0 + val MX_PREFERENCE_MAX_VALUE: Int 
= 65535
+
+  def validateEmail(email: String): ValidationNel[DomainValidationError, String] =
+    /*
+    Basic e-mail check; it also rejects some e-mails that are valid by RFC standards
+    (e.g. e-mails containing hex and special characters).
+     */
+    if (validEmailRegex.findFirstIn(email).isDefined) email.successNel
+    else InvalidEmail(email).failureNel
+
+  def validateHostName(name: String): ValidationNel[DomainValidationError, String] = {
+    /*
+    Label rules are as follows (from RFC 952; detailed in RFC 1034):
+       - Starts with a letter OR digit (as of RFC 1123)
+       - Interior contains letters, digits or hyphens
+       - Ends with a letter or digit
+    All possible label permutations:
+       - A single letter/digit: [0-9a-zA-Z]{1}
+       - A combination of 1-63 letters/digits: [0-9a-zA-Z]{1,63}
+       - A single letter/digit followed by up to 61 letters, digits, hyphens or slashes
+       and ending with a letter/digit: [0-9a-zA-Z]{1}[0-9a-zA-Z\-]{0,61}[0-9a-zA-Z]{1}
+    A valid domain name is a series of one or more